We study the neural networks that mediate speech perception, especially in noisy environments. Our research focuses on mechanisms that improve intelligibility for both normal-hearing listeners and those with hearing loss: acoustic cues, multisensory integration, and top-down inference. Multisensory integration, for instance, merges auditory (voice) and visual (facial movement) signals into a unified percept, enabling faster and more accurate speech recognition. In top-down inference, the brain uses context and prior knowledge to resolve ambiguities and repair degraded sounds. We investigate these mechanisms with techniques including functional magnetic resonance imaging (fMRI), high-density electroencephalography (EEG), and neural network analysis.
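To give a concrete flavor of how multisensory integration can be modeled, here is a minimal sketch of maximum-likelihood cue combination, a standard computational account of how the brain might merge auditory and visual speech cues. This is an illustrative assumption, not the lab's own model: each cue is treated as a noisy Gaussian estimate, and the fused estimate is a precision-weighted average whose variance is lower than either cue's alone.

```python
# Illustrative sketch (an assumption for exposition, not the lab's model):
# maximum-likelihood cue combination for audiovisual speech.
# Each cue yields a noisy Gaussian estimate of the same underlying signal;
# fusing them with precision (inverse-variance) weights reduces uncertainty.

def fuse_cues(mu_a, var_a, mu_v, var_v):
    """Combine auditory and visual estimates by precision weighting."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # weight on auditory cue
    w_v = 1 - w_a                                # weight on visual cue
    mu = w_a * mu_a + w_v * mu_v                 # fused mean estimate
    var = 1 / (1 / var_a + 1 / var_v)            # fused variance
    return mu, var

# Example: a noisy voice cue fused with a sharper lip-movement cue.
mu, var = fuse_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
# The fused estimate leans toward the more reliable (visual) cue,
# and its variance falls below both single-cue variances.
```

Under this account, the benefit of seeing a talker's face in noise follows directly from the weighting: as the auditory cue becomes less reliable, the visual cue's influence on the fused percept grows.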