New research published in The Journal of Neuroscience reveals that basic sound processing remains active in the brainstem during non-rapid eye movement sleep, but weakens in the auditory cortex as sleep deepens. The study sheds light on how the sleeping brain manages to preserve rest while staying responsive to important sounds.
Sleep is composed of different stages, each marked by specific patterns of brain activity. The non-rapid eye movement stages—N1, N2, and N3—progress from light to deep sleep. During N3, also known as slow-wave sleep, the brain exhibits large, slow waves and becomes less responsive to the outside world.
In contrast, rapid eye movement (REM) sleep involves more active brain waves and dreaming, but it was not the focus of the current study. Scientists have long known that certain brain responses to sound decrease during deeper sleep, but how these changes unfold across different levels of the auditory system has remained unclear.
To investigate this, researchers Hugo R. Jourde and Emily B. Coffey recorded brain activity from healthy adults as they slept, examining how early auditory signals are processed at different sleep depths. Their goal was to determine whether the weakening of auditory responses is a uniform process throughout the brain, or whether it affects higher and lower parts of the auditory system differently.
The researchers focused on a specific type of auditory response called the frequency-following response, or FFR. This response reflects how accurately neurons track the pitch of a sound and is known to originate in multiple parts of the brain, including the brainstem, thalamus, and auditory cortex.
“The frequency-following response is an interesting signal because it gives us fairly direct insights into how the brain represents pitch information, which is really important in human communication – as well as to create and appreciate music,” said Coffey, an associate professor of psychology at Concordia University.
“Neurons in the auditory system phase-lock their firing patterns to each wave in an acoustic signal, and their aggregated responses create an electrical field that we can measure at the scalp using electroencephalography. What’s particularly interesting is that individuals’ brains have quite different representations even of such a fundamental sound property. These representations are related to perception, and they change with experience and age. That means we’re actually hearing the world a bit differently from one another.”
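The idea that many phase-locked neurons sum into a scalp-measurable signal whose spectrum peaks at the stimulus frequency can be illustrated with a toy simulation. This is not the study's analysis; the 100 Hz fundamental, neuron count, and noise level below are all arbitrary illustrative choices:

```python
import numpy as np

fs = 2000          # sampling rate in Hz (illustrative)
f0 = 100.0         # stimulus fundamental frequency (illustrative)
t = np.arange(0, 1.0, 1 / fs)
stim = np.sin(2 * np.pi * f0 * t)

rng = np.random.default_rng(0)
n_neurons = 50
# Each simulated "neuron" fires in step with the stimulus waveform but adds
# its own independent noise; their average mimics the aggregate electrical
# field picked up at the scalp as a frequency-following response.
responses = stim + rng.normal(0, 2.0, size=(n_neurons, len(t)))
ffr = responses.mean(axis=0)

# The spectral peak of the aggregate signal sits at the stimulus frequency,
# which is how the FFR reflects pitch tracking.
spectrum = np.abs(np.fft.rfft(ffr))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
print(peak_freq)  # peak lands at the 100 Hz fundamental
```

Averaging across many noisy but phase-locked responses is also why individual differences matter: the fidelity of the aggregate depends on how consistently the underlying neurons lock to the waveform.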
“I first started using the FFR to look at how people differ in how they encode and perceive sound, and how this changes with musical training. Since then, our group has been looking at where, how, and why the FFR changes—either temporarily due to attention or more permanently due to training.”
To isolate where the FFR signals were coming from, the team used magnetoencephalography (MEG), a brain imaging method that captures magnetic signals from neuronal activity with high spatial resolution, along with electroencephalography (EEG), which records electrical activity from the scalp. By comparing these signals across different sleep stages, they could pinpoint how each brain region responded to sound as sleep deepened.
The study involved 14 young adults who underwent a 2.5-hour nap session while lying in a MEG scanner. All participants had normal hearing and regular sleep patterns. During the nap, they listened to a repeated speech sound—specifically, a synthesized syllable “da”—delivered at a quiet but clearly audible level. This particular sound is often used in auditory research because it reliably elicits strong FFRs and mimics natural speech frequencies. The researchers presented the sound repeatedly, at a rate of about five times per second, while simultaneously recording brain activity.
The EEG recordings allowed the researchers to determine when each participant was awake or in a specific sleep stage. In addition, they tracked brief events in the brain’s sleep architecture, such as sleep spindles and slow oscillations. These phenomena are thought to help stabilize sleep and possibly gate incoming sensory information, but their exact role in auditory processing is still debated.
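Sleep spindles of the kind tracked here are typically identified as brief bursts of sigma-band (roughly 11 to 16 Hz) activity in the EEG. A minimal detector sketch follows; the band limits, threshold, and minimum duration are common illustrative choices, not the study's actual detection pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_spindles(eeg, fs, band=(11.0, 16.0), thresh_sd=2.0, min_dur=0.5):
    """Flag candidate sleep spindles: sustained sigma-band power that
    exceeds a threshold for at least min_dur seconds (illustrative rule)."""
    # Band-pass the EEG into the sigma range and take the amplitude envelope.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    sigma = filtfilt(b, a, eeg)
    envelope = np.abs(hilbert(sigma))
    above = envelope > envelope.mean() + thresh_sd * envelope.std()
    # Keep only supra-threshold runs lasting at least min_dur seconds.
    events, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs >= min_dur:
                events.append((start / fs, i / fs))
            start = None
    return events

# Synthetic check: 10 s of noise with a 1 s, 13 Hz burst starting at t = 4 s.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)
eeg = rng.normal(0, 1, len(t))
burst = (t >= 4) & (t < 5)
eeg[burst] += 5 * np.sin(2 * np.pi * 13 * t[burst])
events = detect_spindles(eeg, fs)
print(events)  # one event spanning roughly t = 4 to 5 s
```

Real spindle scoring is considerably more involved (stage-specific thresholds, artifact rejection, co-detection with slow oscillations), but the core idea of thresholding sigma-band power is the same.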
The key finding was that the strength of the FFR stayed stable in subcortical regions—the cochlear nucleus, inferior colliculus, and medial geniculate body—even during deep sleep (stage N3). This suggests that the brainstem and thalamus continue to process pitch information throughout non-REM sleep. In contrast, FFR strength in the auditory cortex decreased as sleep became deeper. The drop in signal strength was most pronounced during N3 sleep and appeared to grow progressively with sleep depth.
This pattern was confirmed using both the global EEG signal and the localized MEG signal from the right auditory cortex. Interestingly, the latency of the cortical FFR also increased slightly in deeper sleep, indicating slower processing of sound in the cortex. However, the timing of subcortical signals did not appear to change.
To better understand why cortical sound processing was reduced, the researchers examined how much the thalamus and cortex communicated during sleep. They measured a form of functional connectivity called imaginary coherence, which reflects the degree to which two regions synchronize their activity. During N3 sleep, connectivity between the auditory thalamus and auditory cortex decreased, especially during the middle portion of the FFR signal. This reduced coordination between regions may be a factor in the weakened cortical responses observed in deeper sleep.
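Imaginary coherence is the imaginary part of the complex coherency between two signals' cross-spectrum; unlike ordinary coherence, it is driven only by time-lagged synchronization, which makes it robust to the zero-lag mixing that plagues scalp and source signals. A minimal sketch of the computation on synthetic data (the 40 Hz frequency, lag, and segment length are illustrative, not from the study):

```python
import numpy as np

def imaginary_coherence(x, y, fs, nperseg=256):
    """Imaginary part of coherency between x and y at each frequency bin.

    Coherency C(f) = Sxy(f) / sqrt(Sxx(f) * Syy(f)); its imaginary part is
    nonzero only when the two signals co-vary at a nonzero time lag.
    Spectra are averaged over non-overlapping Hann-windowed segments.
    """
    n_seg = len(x) // nperseg
    window = np.hanning(nperseg)
    sxx = syy = sxy = 0
    for k in range(n_seg):
        seg_x = np.fft.rfft(window * x[k * nperseg:(k + 1) * nperseg])
        seg_y = np.fft.rfft(window * y[k * nperseg:(k + 1) * nperseg])
        sxx = sxx + np.abs(seg_x) ** 2
        syy = syy + np.abs(seg_y) ** 2
        sxy = sxy + seg_x * np.conj(seg_y)
    coherency = sxy / np.sqrt(sxx * syy)
    freqs = np.fft.rfftfreq(nperseg, 1 / fs)
    return freqs, np.imag(coherency)

# Two noisy signals sharing a 40 Hz component at a quarter-cycle lag:
fs = 512
t = np.arange(0, 8, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 40 * t) + rng.normal(0, 1, len(t))
y = np.sin(2 * np.pi * 40 * t - np.pi / 2) + rng.normal(0, 1, len(t))
freqs, icoh = imaginary_coherence(x, y, fs)
print(freqs[np.argmax(np.abs(icoh))])  # strongest imaginary coherence at 40 Hz
```

Because the lagged 40 Hz component drives both signals, the metric peaks there; a shared component with zero lag would contribute nothing to the imaginary part, which is the property that makes this measure attractive for MEG connectivity analyses.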
“This study shows that our brains process sounds differently depending on how deeply you’re sleeping, and surprisingly, different parts of your auditory system behave in opposite ways,” Coffey told PsyPost. “Your brainstem keeps working normally, but your cortex seems to ‘turn down the volume’—cortical connectivity between thalamus and cortex becomes weaker, and the amplitude of the FFR measured at the cortex diminishes.”
“In sleep, our brain is actively balancing two competing needs: staying alert enough to wake up for truly important sounds (like a baby crying or a smoke alarm), while staying asleep to accomplish the necessary roles of sleep such as restoring the body and brain and memory consolidation. This seems to be part of that mechanism. The study found considerable variation between people in how strongly sleep attenuates their representation of sounds, which also might explain why people differ in their sensitivity to nighttime noises. It also hints that because sound does still continue to be processed to some level, maintaining a quiet sleeping environment is probably a good idea for good sleep quality.”
One unexpected result was that sleep spindles—bursts of activity that originate in the thalamus and travel to the cortex—did not seem to interfere with sound encoding. Earlier studies had proposed that these events might act like a “gate,” preventing sensory information from reaching the cortex. The team expected to see a dip in FFR strength when sounds occurred during spindles, but no such effect emerged.
“The pattern of results shows a clear distinction between thalamus and cortex,” Coffey explained. “It could have been possible to have a gradual decrease in amplitude as you go up the auditory hierarchy, or conversely, the pitch representation might have been preserved up to and including the cortex, with sleep only affecting higher-level, later auditory processing like those involved in extracting meaning from language and music streams.”
“Despite the apparent thalamocortical disconnection, it did not seem to be the case that neural events known as thalamocortical sleep spindles were responsible for preventing sound representations from reaching the cortex. Instead, it was the overall sleep depth that made a difference.”
“This was surprising because previous work had suggested spindles themselves were directly involved in blocking sound to cortex, and it would have made sense since spindles involve very peculiar, strong firing patterns in thalamic neurons, which is quite different from how they normally work,” Coffey said. “We found, however, that sound presented during spindles—as compared with periods during a similar sleep stage in which no spindles occurred—produced similar brain responses, meaning that the auditory pathways up to cortex are still working even during these intense neural events.”
But there are some limitations to consider. Because the study was based on nap recordings rather than full overnight sleep, the researchers were unable to examine REM sleep in detail. REM typically occurs later in the night and was rarely observed in the nap sessions. In addition, while the study design allowed for precise measurements of pitch tracking, it did not address other aspects of auditory processing such as speech comprehension or sound localization.
“This is a descriptive study on neurophysiology,” Coffey noted. “It describes what happens but doesn’t answer whether this reduction in sound processing serves a specific biological purpose or is just a side effect of other neural processes that happen in sleep—the interpretations about how this might be a mechanism to protect the sleep state remain speculative. There are, of course, many limitations, as no single study design can look at everything, meaning that much is left to be done, including replication in a larger sample.”
Despite these limitations, the findings highlight a distinct split in how different parts of the brain process sound during sleep. The brainstem continues to track pitch with high fidelity even in deep sleep, while the cortex gradually tunes out. The study adds to a growing body of work showing that the sleeping brain is not uniformly inactive but instead regulates information flow in selective ways.
The researchers hope to build on this work by exploring how sounds can be used to influence brain activity during sleep.
“We’re interested both in how sleep affects sound processing and how sound affects the other types of cognitive processes that go on during sleep,” Coffey said. “Specifically, we’re interested in how sound can be used as a means of modulating memory consolidation.”
“By either disrupting or enhancing processes via acoustic stimulation that is precisely timed to specific neural events (i.e., closed-loop auditory stimulation), or by associating new information with auditory cues and playing them during sleep (i.e., targeted memory reactivation), we can learn a lot about how memory works and perhaps even restore sleep’s memory processes in people who have poor sleep and memory. But first, it’s necessary to understand how sleep and sound interact bidirectionally. This paper is part of that fundamental work.”
The study, “Sleep state influences early sound encoding at cortical but not subcortical levels,” was published July 7, 2025.