Date published: 10/6/2014

How do we get knowledge of the world around us? We take perception for granted because it (usually) works so well. But understanding perception is a deep, fascinating, and challenging effort. My research focuses on visual perception and its connections to thinking and learning. In recent years, these efforts have led to perceptual and adaptive learning technologies that can dramatically accelerate expertise in many domains.

How we see is one of the oldest questions in philosophy and science, central to issues of the nature of reality, mind, and knowledge. Today it is a central topic in psychology, cognitive science, computer science, and neuroscience. Visual science spans many levels of analysis. My research uses experimental methods (psychophysics) and theoretical ones (computational modeling) to discover the visual processes and mechanisms that enable us to see structure and form in the world – how we perceive objects, the layout of space, and events. We are especially interested in how the visual system perceives coherent and complete objects and surfaces despite gaps in the input. Visual input is almost always fragmented, as when we view a scene through foliage, or when objects or observers move, presenting changing patterns of occlusion. That we perceive complete objects and their shapes from information that is fragmentary across both space and time is one of the miracles of vision, and one that we are coming to understand.

We are also concerned with perceptual learning. Traditionally, it has been thought that learning biases perception, so that we see what we expect or hope to see. Contemporary work shows that the primary effect of experience is to dynamically attune perception, making us better at using information that is actually there. Perceptual learning makes us more selective and automatic at seeing what’s relevant, and we discover previously invisible relations, often abstract ones, that make us better at distinguishing and classifying. These dynamic aspects of perception are relatively domain specific, and they constitute a major basis of almost all human expertise. Our research is showing that this is true even in high-level, complex symbolic domains, such as mathematics.

This work has led to perceptual learning technology that accelerates expert pattern recognition. Combined with novel adaptive learning technologies, perceptual-adaptive learning modules (PALMs) produce profound learning gains in mathematics and science education, in aviation training, and in medical domains such as histopathology and echocardiography.

We are grateful for the support our laboratory has received in the past few years from the National Science Foundation, the US Department of Education, the National Institute of Justice, the US Office of Naval Research, and the National Institutes of Health. For more information on current projects in our research program, please visit the UCLA Human Perception Laboratory at: http://kellmanlab.psych.ucla.edu.
