A primate model of human cortical analysis of auditory objects
Here is a brief summary of my doctoral work, organised as two projects.
Monkey model of human auditory scene analysis
More than half of the world's population above the age of 75 develops age-related hearing loss. Affected listeners have difficulty understanding speech amidst background noise, for example when listening to someone speak in a busy cafe. Colloquially this is known as the ‘cocktail party problem’, which most animals can solve but computers cannot. However, how our brains solve this challenge is not well understood.
I explored whether monkeys are a good model of the human brain mechanisms underlying auditory segregation. Unlike humans, monkeys permit systematic invasive brain recordings that can characterise how single neurons achieve this feat. However, before one can record from the monkey brain and generalise the results to humans, it is essential to show that the underlying mechanisms are similar in both species.
I employed synthetic auditory stimuli rather than speech, as they avoid semantic confounds and facilitate the development of animal models. Our behavioural experiments showed that rhesus macaques can perform auditory segregation based on the simultaneous onset of spectral elements. I then conducted functional magnetic resonance imaging (fMRI) in awake, behaving macaques to show that the underlying brain network is similar to that seen in humans. My study is the first and, to date, only investigation to show such evidence in any animal (Schneider et al., 2018).
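To illustrate the kind of synthetic figure-ground stimulus this describes, here is a minimal sketch of a tone-cloud design in which a "figure" is defined purely by frequency components that onset simultaneously and repeat across chords. All parameters (chord duration, tone counts, frequency range) are hypothetical and not those of the actual study:

```python
import numpy as np

def make_figure_ground(fs=44100, chord_dur=0.05, n_chords=40,
                       n_bg_tones=10, n_fig_tones=4, fig_start=20, seed=0):
    """Build a tone-cloud stimulus: a sequence of random background chords,
    plus a 'figure' whose frequency components repeat with simultaneous
    onsets in every chord from fig_start onwards."""
    rng = np.random.default_rng(seed)
    n = int(fs * chord_dur)                    # samples per chord
    t = np.arange(n) / fs
    env = np.hanning(n)                        # smooth each chord on/offset
    freqs = np.geomspace(200.0, 7000.0, 120)   # candidate pure-tone pool
    fig_freqs = rng.choice(freqs, n_fig_tones, replace=False)
    chords = []
    for i in range(n_chords):
        bg = rng.choice(freqs, n_bg_tones, replace=False)
        chord = sum(np.sin(2 * np.pi * f * t) for f in bg)
        if i >= fig_start:                     # figure tones onset together
            chord += sum(np.sin(2 * np.pi * f * t) for f in fig_freqs)
        chords.append(env * chord)
    x = np.concatenate(chords)
    return x / np.max(np.abs(x))               # normalise to [-1, 1]
```

Because the figure is defined only by temporal coherence of its components, not by any spectral cue, segregating it requires grouping elements that start together, which is the cue tested behaviourally.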
Reference: Felix Schneider*, Pradeep D*, Fabien Balezeau, Michael Ortiz-Rios, Yukiko Kikuchi, Christopher I. Petkov, Alexander Thiele, and Timothy D. Griffiths. "Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey." Scientific Reports 8, no. 1 (2018): 1–8. (* equal first authors)
Anatomical organisation of time window processing in primate auditory cortex
The time window of analysis, the duration over which acoustic information is integrated, is particularly relevant for the processing of animal vocalisations. I examined the brain basis of auditory time-window processing using stimuli with spectro-temporal complexity similar to that of human speech or monkey vocalisations.
I created synthetic stimuli by manipulating spectral flux, a timbral dimension, to systematically vary the time-window duration required to analyse them. I then conducted fMRI in awake rhesus macaques using these stimuli to test how the anatomical organisation of their time-window processing compares with that of humans.
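Spectral flux quantifies how quickly the short-term spectrum changes from frame to frame; slowing that change forces integration over longer time windows. A minimal sketch of one standard STFT-based formulation (the frame length, hop size, and half-wave rectification here are illustrative choices, not the study's actual analysis parameters):

```python
import numpy as np

def spectral_flux(x, frame_len=1024, hop=512):
    """Frame-wise spectral flux: L2 norm of the positive change in the
    magnitude spectrum between successive Hann-windowed frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    win = np.hanning(frame_len)
    mags = np.array([
        np.abs(np.fft.rfft(win * x[i * hop:i * hop + frame_len]))
        for i in range(n_frames)
    ])
    diff = np.diff(mags, axis=0)               # change between frames
    return np.sqrt(np.sum(np.maximum(diff, 0.0) ** 2, axis=1))
```

A steady tone yields near-zero flux (a long effective time window), whereas a signal whose spectrum changes rapidly yields high flux; varying this rate is one way to parametrically control the required analysis window.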
I found that the anatomical organisation of time-window processing in monkeys is similar to that in humans. However, monkeys exhibit decreased sensitivity to longer time windows compared with humans. This difference in sensitivity is surprising given the phylogenetic proximity of the two species. The greater sensitivity to long time windows in humans might reflect the specialisation of the human brain for speech, which requires integration over longer time windows. My study thus highlights brain mechanisms that might be unique to humans, possibly an outcome of divergent evolution alongside the development of speech and language.