Research Interests

How do we extract information from the sounds in speech? The process by which we generate meaning from small changes in air pressure is one of the most complex and uniquely human aspects of our nature. (Just ask Siri!) The focus of my PhD research, conducted primarily with MEG and ECoG, is on the following questions:
- How do we align our brain activity with sounds around us so that we are able to track and make sense of individual sources?
- How do interactions between strictly auditory brain regions and sensory inputs shape interactions with other brain regions to generate comprehension?
- How does this process change with the processing of different types of sounds (e.g., speech vs. music)?
These are questions that will likely motivate me throughout my career as a researcher. In my PhD, I hope to begin making headway on these issues and then use what I find as a model for processing not only sounds but all types of interactions with the environment.
I am a PhD candidate studying with Dr. Pesaran in the Center for Neural Science and with Dr. David Poeppel in the Psychology Department. As an undergraduate studying Cognitive Neuroscience and Music at Harvard University, I conducted research on the role of rhythm perception and production in dyslexia and linguistic ability. After graduating, I worked in Dr. Poeppel's lab, studying interactions between neural oscillations in the brain and rhythms in speech and music, and demonstrating the necessity of such interactions for speech comprehension. In my PhD, I am now working on interactions between sensory and motor systems during speech perception. The role of such interactions in this context is an intensely debated topic that must be settled in order to understand the nature of speech representations in the brain and, further, how those representations make contact with meaning. We are beginning our research with both psychophysics and electrocorticography (ECoG) to gauge the influence of motor planning on sensory predictions of incoming speech.