[FLYER]


Bioacoustic data science aims at analyzing and modeling animal sounds for neuroethology and biodiversity assessment. However, the complexity of the collected data, together with the varied taxonomies of the species involved and their environmental contexts, calls for original approaches. In recent years, the field of bioacoustics has received increasing attention due to its diverse potential benefits to science and society, and it is increasingly required by regulatory agencies as a tool for timely monitoring and mitigation of the environmental impacts of human activities. These growing expectations have coincided with a dramatic increase in the spatial, temporal and spectral scales of acoustic data collection efforts. One of the most promising strategies to address them relies on neural information processing and advanced machine learning.

The features and biological significance of animal sounds, while constrained by the physics of sound production and propagation, have evolved through natural selection. Additional insights have been gained by analyzing and modeling animal sounds in relation to critical life functions (e.g. communication, mating, migration, navigation), social context, and individual, species and population identification. These observations have led to both quantitative and qualitative advances, for example the use of MRI to monitor bird song ontogeny, and to new paradigms such as the processes that underlie song learning and their modeling. Although the majority of existing applications lend themselves to widely used, advanced acoustic signal processing methodologies, the field has yet to successfully integrate robust signal processing and machine learning algorithms, applied for example to bird, insect or whale song identification, source localisation, or (neural) modeling of the biosonar of bats and dolphins.

Figure: Sperm whale tracking demo (more information here).

This NIPS4B workshop will help introduce and solidify an innovative computational framework in the field of bioacoustics by focusing on the principles of neural information processing in an inherently hierarchical manner. State-of-the-art machine learning algorithms will be explored in order to draw physiological parallels within bioacoustics, while an applicative framework will address classification tasks.

For example, new sparse feature representations will be pursued using both shallow and deep architectures in order to model the underlying, highly complex data distributions. Cost function design and hyper-parameter optimization in architectures such as Deep Belief Networks (DBN), Sparse Auto-Encoders (SAE), Convolutional Neural Networks (ConvNet) and scattering transforms will provide insights into the analysis of these complex signals. Any interesting new learning technique for this type of bioacoustic signal is very welcome.
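As a purely illustrative example of this kind of approach (not part of the workshop material), the sketch below trains a small sparse auto-encoder with an L1 sparsity penalty on dummy spectrogram frames using PyTorch; all layer sizes, hyper-parameters and the synthetic data are hypothetical assumptions.

    # Minimal sketch: sparse auto-encoder on spectrogram frames,
    # with an L1 penalty on the code to encourage sparse features.
    # Shapes and hyper-parameters are illustrative assumptions.
    import torch
    import torch.nn as nn

    n_freq_bins = 128        # assumed spectrogram height
    code_size = 32           # assumed latent dimension
    sparsity_weight = 1e-3   # assumed L1 penalty weight

    class SparseAutoEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n_freq_bins, code_size), nn.ReLU())
            self.decoder = nn.Linear(code_size, n_freq_bins)

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code

    model = SparseAutoEncoder()
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy batch standing in for magnitude-spectrogram frames.
    frames = torch.rand(256, n_freq_bins)

    for step in range(100):
        recon, code = model(frames)
        # Reconstruction error plus sparsity penalty on the code.
        loss = nn.functional.mse_loss(recon, frames) + sparsity_weight * code.abs().mean()
        optim.zero_grad()
        loss.backward()
        optim.step()

In practice the dummy frames would be replaced by real spectrogram frames of animal sounds, and the learned sparse codes could feed a downstream classifier.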

NIPS4B will encourage interdisciplinary scientific exchange and foster collaborations among the workshop participants on bioacoustic signal analysis and the understanding of auditory processes. NIPS4B aims at bringing together experts from machine learning and computational auditory scene analysis with experts in animal acoustic communication systems to promote, discuss and explore the use of machine learning techniques in bioacoustics for signal separation, classification, localisation, and more. It will gather researchers working on auditory cortex modeling, neurophysiological processes in perception and learning, machine listening, signal processing, and computer science to discuss these complementary perspectives on bioacoustics.


NIPS4B is supported by the CNRS MASTODONS SABIOD project.

N.B.: Related event: the IEEE ATSIP conference, special session on bioacoustics (deadline: 5 October).