Research


[Image: the iCub humanoid robot playing with a toy]

Vision Lab Research

Each of the major research projects at the Vision Lab is designed to answer two fundamental questions. First, how do we learn to use visual perception, memory, and attention to guide our movements and actions during everyday behaviors? Second, what are the underlying neural and computational mechanisms that make visually-guided behavior possible? Our strategy for investigating these questions is to combine the theories and research tools from different disciplines, including (1) human development and learning, (2) neural networks and machine learning, and (3) cognitive neuroscience. Below we highlight a few of the topics that we are currently investigating.

Neural network models of visual attention

In collaboration with Dima Amso (Brown University) and Scott Johnson (UCLA), we are studying the strategies that infants develop as they scan the visual world. The figures below illustrate how one of our models simulates infants' eye movements.

[Figures: visual search; rod-and-box display; iCub viewing the rod and box; infant viewing the rod and box]

In our latest work, we are systematically testing the model by using it to control the eye movements of the iCub humanoid robot and comparing the resulting eye-movement patterns with those of human infants.
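
To give a concrete (and greatly simplified) sense of the kind of mechanism such a model embodies, the Python sketch below selects a sequence of fixations from a toy saliency map and suppresses locations it has already visited (inhibition of return). The saliency measure (local luminance contrast), the window size, and the suppression radius are illustrative assumptions only; they are not the components of the model described above.

```python
# Toy sketch: saliency-driven gaze selection with inhibition of return.
# Illustrative only -- the saliency measure and parameters are assumptions,
# not the lab's actual model.
import numpy as np

def saliency_map(image, window=5):
    """Approximate saliency as local contrast: |pixel - local mean|."""
    pad = window // 2
    padded = np.pad(image, pad, mode="edge")
    h, w = image.shape
    local_mean = np.zeros_like(image, dtype=float)
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + window, j:j + window].mean()
    return np.abs(image - local_mean)

def simulate_scanpath(image, n_fixations=5, ior_radius=3):
    """Pick successive fixations at the saliency peak, suppressing
    previously fixated locations (inhibition of return)."""
    sal = saliency_map(image.astype(float))
    yy, xx = np.mgrid[0:image.shape[0], 0:image.shape[1]]
    fixations = []
    for _ in range(n_fixations):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        fixations.append((int(y), int(x)))
        # Suppress a disc around the chosen location so gaze moves on.
        sal[(yy - y) ** 2 + (xx - x) ** 2 <= ior_radius ** 2] = -np.inf
    return fixations

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    img[8:12, 20:24] += 2.0  # a bright "toy" that should attract gaze
    print(simulate_scanpath(img))
```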

Neural substrates of occluded-object tracking

We are also studying how the human brain maintains an "internal copy" of an object while it is occluded. One of our strategies is to use neuroimaging techniques such as functional MRI (fMRI) to measure brain activity during occluded-object tracking. A major brain area implicated by this work is the parietal lobe, which becomes more active when a moving object is briefly occluded.

[Figures: visible target; occluded target; fMRI activity during occluded tracking]
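
To make the notion of an "internal copy" more concrete, the Python sketch below maintains a position-and-velocity estimate of a moving target: the estimate is updated from observations while the target is visible and extrapolated while it is occluded. The constant-velocity rule and the class and parameter names are illustrative assumptions, not part of our experimental or analysis code.

```python
# Toy sketch: keeping an "internal copy" of a briefly occluded target alive.
# Illustrative constant-velocity predictor -- an assumption for exposition,
# not the lab's fMRI analysis or tracking model.
import numpy as np

class OccludedObjectTracker:
    def __init__(self, position, velocity=(0.0, 0.0)):
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.asarray(velocity, dtype=float)

    def update(self, observation):
        """observation is an (x, y) position, or None while the target is occluded."""
        if observation is None:
            # Occluded: extrapolate the internal estimate along its last velocity.
            self.position = self.position + self.velocity
        else:
            obs = np.asarray(observation, dtype=float)
            # Visible: re-anchor the estimate and refresh the velocity.
            self.velocity = obs - self.position
            self.position = obs
        return self.position

if __name__ == "__main__":
    # The target moves rightward at one unit per step and disappears behind
    # an occluder for three steps; the internal estimate keeps moving.
    tracker = OccludedObjectTracker(position=(0.0, 0.0))
    trajectory = [(1, 0), (2, 0), None, None, None, (6, 0)]
    for obs in trajectory:
        print(obs, "->", tracker.update(obs))
```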