Past research has successfully characterized how facial expressions of emotion are produced, including which muscle movements create the most commonly observed expressions. These expressions are then interpreted by the visual system, yet little is known about how they are recognized. The overarching goal of this proposal is to define the form and dimensions of the cognitive (computational) space used in this visual recognition. Although facial expressions are produced by a complex set of muscle movements, they are generally identified with ease across a range of spatial and temporal resolutions. The first set of experiments will therefore determine how many pixels and how many milliseconds are needed to successfully identify different emotions. The role of configural features in the processing of expressions of emotion is not well understood, and the second part of the project will identify a number of these configural cues using real face images, manipulated versions of those images, and schematic drawings. The specific features identified in these experiments will then be used to define a shape-based computational model that accounts for those results and can also make new predictions to be verified in additional experiments with human subjects. Identifying which features the cognitive system uses will help in developing protocols to reduce their unwanted effects.