I am currently working on understanding the dynamics underlying auditory scene analysis in humans using EEG. I am also investigating the cortical processing of auditory object boundaries and auditory working memory using MEG.
More than half of the world's population above the age of 75 develops age-related hearing loss. They have difficulty understanding speech amidst background noise, such as when listening to someone speak in a noisy cafe. Colloquially, this is known as the ‘cocktail party problem’, which most humans and animals solve effortlessly but computers cannot. However, how our brains solve this challenge is not well understood.
I developed a monkey model to understand the brain mechanisms underlying auditory segregation. Currently, I am investigating the brain dynamics in humans. Read more about this project here.
Sounds differ in the duration over which information is conveyed. For instance, phonemes are short in duration while syllables are much longer. Similarly, musical instruments differ in the rate at which sounds change. So the optimal duration of the time window that the brain employs for analysis depends on the kind of acoustic feature being analysed.
I aim to understand how the primate brain organises the processing of sounds that require time windows of different durations. Read more about this project here.
A visual object might be easy to define and understand, but objects perceived via audition are also important. A fundamental question in auditory perception is how the brain detects the appearance of a new auditory source in an ongoing auditory scene. For instance, we can perceive when a new voice joins our dinner table without even looking.
I aim to understand the brain dynamics underlying the detection of a new auditory object. Read more about this project here.
Auditory working memory (WM) is the process of keeping representations of auditory objects in mind for a short duration even when the sounds are no longer in the environment. I am investigating non-verbal WM, which differs from phonological WM in that the sounds cannot be assigned a semantic label.
My MEG project aims to understand the dynamics underlying auditory WM. What mechanisms underlie neural activity during retention? What is the role of the hippocampus in auditory WM? Read more about this project here.
The brain regions that combine the acoustic cues from both ears to estimate spatial position in the soundscape are well known. However, the brain basis underlying auditory spatial motion is not well understood.
Our neuroimaging study employed an acoustic stimulus with virtual motion to understand auditory spatial motion perception. Here, I aimed to validate the percept induced by the virtual motion of an auditory object. Read more about this project here.
Interoception is the sense that helps us understand and feel what is going on inside our body.
It concerns the ability of the brain to receive, integrate, process, and be consciously aware of physiological signals from the body, such as the heartbeat and breathing.
Read more about this project here.
In my PhD in Neuroscience at Newcastle University, I developed "A primate model of human cortical analysis of auditory objects".
Specifically, I explored whether monkeys are a good model of human auditory scene analysis and compared the anatomical organisation of time window processing among primates using non-invasive brain imaging (fMRI) and behaviour. Read more about this work here.
During my Masters in Neuroscience, I investigated the receptor pharmacology of the gamma oscillations induced in the chicken hippocampus using in-vitro electrophysiology.
I also investigated the spatial distribution of spike related slow potentials in the macaque motor cortex through in-vivo neurophysiology. Read more about this work here.
During my undergraduate program in Electronics and Communication Engineering, I worked on array signal processing (smart antennas) in my final year.
I investigated the performance of non-uniform sensor spacing for the MUSIC algorithm. I also proposed "Root Propagator", a new algorithm for direction-of-arrival estimation. Read more about this work here.
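For readers unfamiliar with MUSIC, here is a minimal sketch of the standard algorithm for a uniform linear array (not my Root Propagator method, and not the non-uniform-spacing variant I studied): estimate the array covariance, split its eigenvectors into signal and noise subspaces, and scan steering vectors for angles where the noise subspace is nearly orthogonal.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """Classic MUSIC direction-of-arrival estimation on a uniform linear array.

    X         : (n_sensors, n_snapshots) complex array of sensor samples
    n_sources : number of sources assumed present
    d         : sensor spacing in wavelengths (0.5 = half-wavelength)
    Returns the scanned angles (degrees) and the MUSIC pseudospectrum.
    """
    n_sensors = X.shape[0]
    # Sample covariance matrix of the array output
    R = X @ X.conj().T / X.shape[1]
    # eigh returns eigenvalues in ascending order; the eigenvectors of the
    # smallest (n_sensors - n_sources) eigenvalues span the noise subspace
    _, V = np.linalg.eigh(R)
    En = V[:, : n_sensors - n_sources]
    # Pseudospectrum 1 / ||En^H a(theta)||^2 peaks where the steering
    # vector a(theta) is orthogonal to the noise subspace
    spectrum = np.empty(len(angles))
    for i, theta in enumerate(angles):
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors)
                   * np.sin(np.deg2rad(theta)))
        proj = En.conj().T @ a
        spectrum[i] = 1.0 / np.real(proj.conj() @ proj)
    return angles, spectrum
```

With, say, an 8-sensor array and two uncorrelated sources at -20° and 30°, the two largest peaks of the pseudospectrum land at those angles.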
I worked on loudness compensation, which aims to maintain the perceived spectral balance of audio content irrespective of the playback volume level. The need for this correction arises from the inherent non-linearity of human loudness perception: at low playback levels, low and high frequencies sound relatively quieter than the midrange.
I proposed an efficient approach to accomplish loudness compensation on low-power hand held devices. Read more about this work here.
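To illustrate the general idea (this is not my published approach, just a minimal sketch): one cheap way to compensate is a low-shelf biquad whose bass boost grows as the user turns the volume down. The shelf coefficients below follow the well-known RBJ Audio EQ Cookbook formulas; the volume-to-gain mapping is a made-up placeholder.

```python
import math

def low_shelf_coeffs(fs, f0, gain_db):
    """RBJ-cookbook low-shelf biquad (shelf slope S = 1).

    Boosts frequencies below f0 (Hz) by gain_db; returns (b, a) with a[0] = 1.
    """
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    cw, sw = math.cos(w0), math.sin(w0)
    alpha = sw / 2 * math.sqrt(2)        # shelf slope S = 1
    two_sqrtA_alpha = 2 * math.sqrt(A) * alpha
    b0 = A * ((A + 1) - (A - 1) * cw + two_sqrtA_alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cw)
    b2 = A * ((A + 1) - (A - 1) * cw - two_sqrtA_alpha)
    a0 = (A + 1) + (A - 1) * cw + two_sqrtA_alpha
    a1 = -2 * ((A - 1) + (A + 1) * cw)
    a2 = (A + 1) + (A - 1) * cw - two_sqrtA_alpha
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

def bass_boost_for_volume(volume_db):
    """Hypothetical mapping: more bass boost as playback level drops.

    e.g. volume at -24 dB relative to reference -> +6 dB shelf gain.
    """
    return min(12.0, max(0.0, -0.25 * volume_db))

def biquad(x, b, a):
    """Direct-form-II-transposed biquad: two state variables per channel,
    which keeps the per-sample cost low on hand-held devices."""
    y, z1, z2 = [], 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + z1
        z1 = b[1] * xn - a[1] * yn + z2
        z2 = b[2] * xn - a[2] * yn
        y.append(yn)
    return y
```

A quick sanity check: with a +6 dB shelf at 200 Hz, the filter's DC gain is exactly 10^(6/20) ≈ 2.0, while high frequencies pass through unboosted.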
During my stint in industry, I worked on optimising audio processing software, including compression algorithms (encoders and decoders) as well as audio post-processing and acoustics algorithms, on various digital signal processors and microcontrollers. Read about some of this work here.