To perceive the world through hearing, our brain must recreate the surrounding acoustic environment in the mind. This requires a way of identifying any new sound as it appears, separating it from the other sounds that are present, and finally representing it as a distinct object in our mind. An important question in understanding how the brain recreates an 'auditory scene' is therefore: how does the brain detect the appearance of new sounds?
Since sounds may contain multiple components that vary in time and frequency, this process requires that our brains detect changes in the statistics of the time-frequency space. However, how the identification of discontinuities at object boundaries is accomplished is not well understood.
Here is a visual summary of this project, which aims to answer this question.
To understand how the emergence of a new sound within an ongoing acoustic scene is detected, I used magnetoencephalography (MEG), a technique that non-invasively records the magnetic activity of the brain. I employed artificially created sounds in which I intentionally formed boundaries by changing the underlying regularity in time-frequency space. I recorded MEG from volunteers who were asked to report any change in the sound structure they could detect as they listened to these artificial sounds. All subjects were able to detect and report these changes very well.
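To make the stimulus design concrete, here is a minimal sketch of how such a regularity boundary can be built: a sequence of short tone pips whose frequencies are first drawn at random and then switch to a repeating cycle. All specifics (sample rate, pip duration, frequency pool, cycle length) are illustrative assumptions, not the parameters used in the actual experiment.

```python
import numpy as np

def make_stimulus(sr=16000, tone_dur=0.05, n_random=20, n_regular=40,
                  cycle_len=5, seed=0):
    """Concatenate short tone pips: a random frequency sequence followed
    by a regularly repeating cycle, creating a regularity boundary."""
    rng = np.random.default_rng(seed)
    # Log-spaced frequency pool (an arbitrary choice for illustration).
    pool = np.geomspace(300.0, 3000.0, 20)
    random_part = rng.choice(pool, size=n_random)
    cycle = rng.choice(pool, size=cycle_len, replace=False)
    regular_part = np.tile(cycle, n_regular // cycle_len)
    freqs = np.concatenate([random_part, regular_part])
    t = np.arange(int(sr * tone_dur)) / sr
    signal = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    boundary_time = n_random * tone_dur  # moment the regularity begins
    return signal, boundary_time

sig, boundary = make_stimulus()
```

A listener hears this as random beeping that suddenly "locks into" a repeating pattern; detecting that transition is the behavioural task described above.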
I observed a very low frequency drift (orange trace) in the magnetic response of the brain recorded from the scalp. I then estimated the location of the brain activity that could evoke such a response at the scalp. This activity appears to be located in a particular region, the Primary Auditory Cortex, which is known to be the first cortical region to receive auditory input from the ears. In a previous study that used fMRI (a technique that non-invasively records the metabolic demand created by brain activity), this same region of the brain was shown to be involved in this same task!
There is an emerging school of thought that our brains are not just passively reacting to the world around us but constantly predicting it. This "predictive-coding" idea suggests that our brains accomplish this by creating a model of the world and predicting how it will change. Our brains then compare this prediction with the actual sensations received, updating the model of the world to make further predictions. This predict-compare-update cycle is a continuous, ongoing process. During model updates, temporally regular sensory inputs have higher relevance than temporally irregular sensations. In this framework, a 'precision' signal, a long-term second-order statistic, represents the level of regularity of sensory inputs. For instance, higher precision implies that the corresponding sound source is very regular, so it is assigned a higher weight in the process of prediction, and vice versa. In our case, the drift signal (shown in orange) encodes the level of regularity of sound structures in time-frequency space.
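The predict-compare-update loop with precision weighting can be sketched in a few lines. This is a toy illustration of the general idea, not the model used in the study: precision is taken as the inverse of a running prediction-error variance, and it scales how strongly each new input updates the prediction. The decay constant and the bounded weighting are my own illustrative choices.

```python
import numpy as np

def run_precision_tracker(inputs, decay=0.9):
    """Toy predictive-coding loop: track precision (inverse of an
    exponentially weighted error variance) and use it to weight each
    prediction update. Regular inputs drive precision up; irregular
    inputs keep it low."""
    prediction, var = 0.0, 1.0
    precisions = []
    for x in inputs:
        error = x - prediction                        # compare
        var = decay * var + (1 - decay) * error ** 2  # running error variance
        precision = 1.0 / (var + 1e-6)
        weight = precision / (precision + 1.0)        # bounded in [0, 1)
        prediction = prediction + weight * error      # update
        precisions.append(precision)
    return np.array(precisions)

rng = np.random.default_rng(1)
regular = np.ones(200)            # perfectly regular input stream
irregular = rng.normal(size=200)  # unpredictable input stream
```

Feeding the regular stream yields a precision estimate that climbs steadily, while the irregular stream holds it low, mirroring how the orange drift signal is proposed to track the regularity of the acoustic input.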
So I conclude that the Primary Auditory Cortex in the human brain detects the appearance of new sounds as they emerge in the acoustic environment by continually monitoring the regularity of the time-frequency space.
Here is my talk summarising the findings of this study.
Here is a poster summarising this study.