The goal of this project is to make progress on computational problems that elude the most sophisticated computers and Artificial Intelligence approaches but that human infants solve seamlessly during their first year of life. To this end, we will develop a robot whose sensors and actuators approximate the complexity of those of human infants. The aim is for this robot to autonomously learn and develop a key set of sensory-motor and communicative skills typical of one-year-old infants. The project will be grounded in developmental research with human infants, using motion-capture and computer-vision technology to characterize the statistics of early physical and social interaction. An important goal of this project is to foster the conceptual shifts needed to rigorously think about, explore, and formalize intelligent architectures that learn and develop autonomously through interaction with the physical and social worlds. The project may also open new avenues for the computational study of infant development and potentially offer new clues for understanding developmental disorders such as autism and Williams syndrome.