Auditory Scene Analysis

More than half of people over the age of 75 develop age-related hearing loss. They have difficulty understanding speech amidst background noise, such as when listening to someone speak in a noisy cafe. Colloquially this is known as the ‘cocktail party problem’, which humans and most animals can solve but computers cannot. However, how our brains solve this challenge is not well understood.

Monkey model

I explored whether monkeys are a good model of the human brain mechanisms underlying auditory segregation. Unlike humans, monkeys allow systematic invasive brain recordings that can characterise how single neurons achieve this feat. However, before one can record from a monkey brain and generalise the results to humans, it is essential to show that the underlying mechanisms are similar in both species.

Here is a visual summary of this project.

I employed synthetic auditory stimuli rather than speech because they are free of semantic confounds and can be used identically across species, which helps in developing animal models. Our behavioural experiments showed that rhesus macaques can perform auditory segregation based on the simultaneous onset of spectral elements (temporal coherence). I then conducted functional magnetic resonance imaging (fMRI) in awake, behaving macaques to show that the underlying brain network is similar to that seen in humans. This study is the first to show such evidence in any animal model.
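To make the temporal-coherence cue concrete, here is a minimal sketch of how a Stochastic Figure-Ground (SFG) stimulus can be generated, assuming the standard chord-based construction: successive chords of random pure tones form the ground, and a fixed subset of frequencies repeats across chords so that their simultaneous onsets bind them into a figure. The chord duration, frequency range, and component counts below are illustrative choices, not the exact parameters of the study.

```python
import numpy as np

def sfg_stimulus(fs=44100, chord_dur=0.05, n_chords=40,
                 n_background=10, n_figure=4, fmin=200.0, fmax=7000.0,
                 figure_onset=20, seed=0):
    """Sketch of a Stochastic Figure-Ground (SFG) stimulus.

    Each chord contains random pure tones (the 'ground'). From chord
    `figure_onset` onwards, a fixed set of `n_figure` frequencies repeats
    in every chord; their simultaneous onsets (temporal coherence) are
    the only cue that binds them into a 'figure'.
    """
    rng = np.random.default_rng(seed)
    n_samp = int(fs * chord_dur)
    t = np.arange(n_samp) / fs

    # 5 ms raised-cosine on/off ramps to avoid clicks at chord edges
    n_ramp = int(0.005 * fs)
    ramp = np.hanning(2 * n_ramp)
    env = np.ones(n_samp)
    env[:n_ramp] = ramp[:n_ramp]
    env[-n_ramp:] = ramp[n_ramp:]

    # Figure frequencies are drawn once and then repeated across chords
    fig_freqs = np.exp(rng.uniform(np.log(fmin), np.log(fmax), n_figure))

    chords = []
    for i in range(n_chords):
        # Ground: fresh random (log-uniform) frequencies in every chord
        freqs = np.exp(rng.uniform(np.log(fmin), np.log(fmax), n_background))
        if i >= figure_onset:
            freqs = np.concatenate([freqs, fig_freqs])
        chord = sum(np.sin(2 * np.pi * f * t) for f in freqs) * env
        chords.append(chord)

    sig = np.concatenate(chords)
    return sig / np.max(np.abs(sig))  # normalise to +/- 1

stim = sfg_stimulus()  # ~2 s of audio at 44.1 kHz
```

In this construction the figure is undetectable within any single chord; it emerges only when onsets are tracked across chords, which is precisely the temporal-coherence cue the macaques were tested on.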

Relevant publication:

Here is my 3-minute video explaining this project.

Here is my poster summarising this project.

Role of Attention

What is the role of attention in auditory segregation? Is attention necessary for segregation to occur? I employed electroencephalography (EEG) in humans with normal hearing to address these questions, using Speech-In-Noise (SIN) as well as Stochastic Figure-Ground (SFG) stimuli.

To isolate the role of top-down attention in auditory segregation, I manipulated attention between the task-relevant (auditory) and task-irrelevant (visual) modalities. The auditory task was to detect the absence of an auditory object within two kinds of acoustic stimuli, i.e. the absence of either the "Figure" in SFG stimuli or speech in SIN stimuli. The irrelevant visual task was to detect the absence of coherent dot motion within a Variable Coherence Random Dot Motion (VCRDM) stimulus.

Here is a demonstration of the visual stimulus, which employed Variable Coherence Random Dot Motion (VCRDM).

Courtesy: https://codepen.io/vrsivananda/pen/MVXXOZ
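For readers who prefer code to the demo above, here is a minimal sketch of the core VCRDM update rule, assuming the standard construction: on each frame, a fraction of dots equal to the coherence steps in a common direction while the remaining dots are replotted at random positions. The dot count, speed, and frame rate are illustrative, not the values used in the experiment.

```python
import numpy as np

def vcrdm_step(xy, coherence, direction_deg, speed=0.02, rng=None):
    """One frame of a variable-coherence random dot motion field.

    xy            : (n_dots, 2) positions in the unit square [0, 1)
    coherence     : fraction of dots stepping in the common direction
    direction_deg : common motion direction in degrees
    """
    if rng is None:
        rng = np.random.default_rng()
    n = len(xy)
    coherent = rng.random(n) < coherence          # signal vs noise dots
    theta = np.deg2rad(direction_deg)
    step = speed * np.array([np.cos(theta), np.sin(theta)])
    xy[coherent] += step                          # signal dots: common step
    xy[~coherent] = rng.random((np.count_nonzero(~coherent), 2))  # noise: replot
    xy %= 1.0                                     # wrap around the aperture
    return xy

# Zero coherence is pure noise, i.e. the 'motion absent' case
rng = np.random.default_rng(0)
dots = rng.random((200, 2))
for _ in range(60):                               # ~1 s at 60 Hz
    dots = vcrdm_step(dots, coherence=0.0, direction_deg=90, rng=rng)
```

At zero coherence the display is pure noise, which corresponds to the 'motion absent' trials that participants had to detect.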

Here is a visual summary of this project.

I observed a significant difference between the responses to the auditory object and the background scene (figure vs ground and speech vs noise) in the active condition, when subjects paid attention to the sounds, but not in the distracted condition, when subjects paid attention to the images.
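To illustrate the logic of this comparison (a hedged sketch, not the actual analysis pipeline), suppose we have one evoked-response amplitude per subject for the object and background portions of each condition; a paired test then asks whether the object/background separation holds within each attention condition. The variable names, the simulated data, and the choice of a paired t-test are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

def object_vs_background(obj_amp, bkg_amp, alpha=0.05):
    """Paired comparison of per-subject evoked amplitudes.

    obj_amp, bkg_amp : (n_subjects,) arrays, e.g. mean evoked amplitude
    to figure/speech (object) vs ground/noise (background) segments.
    """
    t, p = stats.ttest_rel(obj_amp, bkg_amp)
    return {"t": t, "p": p, "significant": p < alpha}

# Hypothetical numbers: a clear separation under auditory attention,
# and no separation when attention is drawn to the visual task
rng = np.random.default_rng(1)
active = object_vs_background(rng.normal(1.0, 0.4, 16),
                              rng.normal(0.4, 0.4, 16))
distracted = object_vs_background(rng.normal(0.5, 0.4, 16),
                                  rng.normal(0.45, 0.4, 16))
```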

I therefore conclude that attention aids the separation of overlapping sounds. However, if attention is directed elsewhere, to a demanding task that depletes computational resources, then automatic segregation of the auditory scene is compromised.