With a CAREER award from the National Science Foundation, Dr. Asif Ghazanfar at Princeton University will further develop a primate model system to investigate the neural bases for integrating communication signals across sensory modalities. Previous work from his group and others suggests that many perceptual processes related to social communication in monkeys are similar to those exhibited by human infants and adults. Like humans, macaque monkeys make distinctive facial expressions when producing different vocal signals, and they can also perceptually match the appropriate facial expression to a vocalization. The eye movement patterns that monkeys use to process these "multisensory" social inputs are likewise similar to those used by human adults and children when viewing human faces producing speech.
Building upon these findings, the major aim of this project is to understand the role that brain areas in the macaque temporal lobe play in integrating faces and voices. Specifically, Dr. Ghazanfar's team will investigate how dynamic facial expressions are integrated with vocal expressions in the auditory cortex and in high-level visual cortex. By examining the roles of facial postures and dynamics, eye movements, and social experience, they hope to uncover principles of visual-auditory neuronal interactions related to social cognition.
In addition to providing new insights into normal communication processes, this research could also help us better understand disabling abnormalities in the development of social skills. Although dysfunctions of the temporal lobe in humans contribute to a variety of debilitating communication disorders, the underlying neural mechanisms remain relatively unexplored by neurobiologists. For example, autistic children fail to develop skills related to social signal processing. The hallmark of autism is an inability to behave in a socially appropriate manner; people with autism do not process the sensory cues necessary for normal social interactions with other individuals. In both the auditory and visual domains, autistic children have great difficulty interpreting facial and vocal signals and fail to properly integrate the two modalities. This deficit is a specific impairment in face and voice processing and does not extend to other types of visual or auditory signals. The goals of this research thus have direct relevance to understanding the neurobiology of communication disorders in general and autism in particular.