This award is funded under the American Recovery and Reinvestment Act of 2009 (Public Law 111-5). It is said that "seeing is believing," and we take it for granted that vision operates efficiently and accurately, as if seeing were easy. Failed attempts to build computer vision systems demonstrate exactly the opposite: vision is perhaps the most difficult operation performed by the brain, requiring one third of the neocortex. The NSF-funded research project conducted by David Huber at the University of California, San Diego and Richard Shiffrin at Indiana University focuses on an important question in visual perception: How do we keep what we are currently viewing separate from what came immediately before? In truth, vision constantly "blurs" information together over time, as when a sequence of still images shown in rapid succession produces the smooth motion of a movie. While reading, however, our eyes move constantly from one word to the next, and yet, unlike at the movies, we see each word separately and do not confuse it with the words before it. To accomplish this, the brain must have some means of deciding when the previous image should be combined with the next and when the two should be kept separate. Huber and Shiffrin hypothesize that identifying each word or movie frame causes it to be suppressed, reducing inappropriate blending with the next word or frame. In the case of a movie, each image appears too briefly for this suppression to occur, and the blending produces apparent movement. In the case of reading, our eyes dwell on each word just long enough to fully identify and suppress it, reducing confusion with the next word. Huber and Shiffrin investigate this ability to separate visual images across a variety of tasks, including reading, face identification, and rapid detection of change.
If their hypothesis is correct, manipulating the timing of stimuli should produce analogous behavioral effects in all of these situations. Beyond laboratory studies, this hypothesis may also improve computer vision systems in situations requiring rapid identification. For instance, computer-controlled cameras at an airport might be used to identify the faces of suspects, which requires separating one face from another as a crowd of faces moves quickly past the camera. The results of this research may also be relevant to disorders such as autism, schizophrenia, and dyslexia, which often involve a component of distorted or abnormal perception. For instance, one account of dyslexia attributes reading difficulties to an inappropriate blending of letters and words. Understanding how the brain separates visual information over time may help with the diagnosis, interpretation, and treatment of these perceptual deficits.

The human perceptual system receives a constant stream of continually changing information. For example, the eyes move several times each second, providing different views of different objects or words. This project investigates the dynamic process of separating, in time and space, information pertaining to previous sources (e.g., a previously viewed word) from information pertaining to the current source (e.g., the currently viewed word). Behavioral studies will address the process of discounting that serves to reduce perceptual separation errors due to source confusion. This discounting process can be understood at multiple levels of description, and the proposed experiments test complementary, related mathematical models at the causal and neural levels of analysis. Two causal models use Bayesian statistical techniques and focus on optimizing perception in a noisy world perceived with a limited-capacity processing system; discounting is implemented as "explaining away" between competing sources.
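The general logic of "explaining away" between competing sources can be illustrated with a toy Bayesian calculation. The sketch below is not the project's actual model; the noisy-OR likelihood and all parameter values are illustrative assumptions. It shows only the qualitative effect: once a shared visual feature is attributed to the previous source (the prime), the evidence it provides for the current source (the target) is reduced.

```python
# Toy "explaining away" between two candidate sources of one visual
# feature, using an illustrative noisy-OR generative model (not the
# project's fitted model; all numbers are made up).

def posterior_target(p_prime, p_target, w=0.9, noise=0.05, prime_known=None):
    """P(target present | feature observed [, prime known present/absent])."""
    primes = [0, 1] if prime_known is None else [prime_known]
    num = den = 0.0
    for a in primes:            # prime absent/present
        for b in (0, 1):        # target absent/present
            prior = ((p_prime if a else 1 - p_prime) *
                     (p_target if b else 1 - p_target))
            # noisy-OR: feature is active via noise, prime, or target
            p_feat = 1 - (1 - noise) * (1 - w) ** (a + b)
            joint = prior * p_feat
            den += joint
            if b:
                num += joint
    return num / den

# Belief in the target given only the shared feature...
p_unknown = posterior_target(0.5, 0.5)
# ...drops once the prime is known to be present: the prime
# "explains away" the shared evidence.
p_prime_present = posterior_target(0.5, 0.5, prime_known=1)
assert p_prime_present < p_unknown
```

This captures the sense in which discounting is rational: attributing a feature to an already-identified source makes that feature weaker evidence for any new source.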
The neural model implements discounting through habituation, which arises from the transient depletion of synaptic resources. In combination, these models demonstrate why perceptual discounting exists and the particular manner in which it is implemented. A wide variety of experimental paradigms involve the rapid presentation of visual objects, and the proposed studies use these models to investigate whether perceptual source confusion and discounting can provide a unified account of the resulting phenomena. Besides visual short-term priming with words, the proposed studies examine the popular perceptual and cognitive paradigms of repetition blindness, flanker effects, the attentional blink, negative priming, semantic satiation, and affective priming. All of these paradigms involve presenting a picture, word, or symbol on a computer screen, followed by a second presentation that is identical, positively related, or negatively related to the first. An important goal of this endeavor is to provide a unified account of these perceptual phenomena, which researchers currently consider in isolation.
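The core dynamic of habituation through synaptic resource depletion can be sketched in a few lines. This is a minimal illustration of the general mechanism only, with made-up parameter values rather than the project's fitted neural model: output depends on available resources, resources are consumed in proportion to output, and they slowly replenish, so a sustained input yields a strong initial response that fades.

```python
# Minimal sketch of habituation via transient depletion of synaptic
# resources (illustrative parameters; not the project's actual model).

def simulate(steps=50, depletion=0.2, recovery=0.02, drive=1.0):
    resources = 1.0
    outputs = []
    for _ in range(steps):
        out = drive * resources                  # postsynaptic effect
        resources -= depletion * out             # resources consumed by use
        resources += recovery * (1.0 - resources)  # slow replenishment
        outputs.append(out)
    return outputs

outs = simulate()
# Response is strongest at stimulus onset and then habituates,
# suppressing the already-identified source:
assert outs[0] > outs[10] > outs[-1]
```

Under these dynamics the response settles where depletion and recovery balance, which is how an identified item remains suppressed while it stays in view and only gradually regains responsiveness afterward.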