Visual perception and cognitive neuroscience are the cornerstones of our research in the VPEC Lab. We employ psychophysics and modeling to understand how basic visual processes allow people to see and understand both simple and complex patterns like shapes, facial expressions and gaze. Our goal is to use vision science to answer core questions about the human mind and the nature of visual awareness while making an impact on multiple disciplines within psychology.
Areas of Research
Visual Awareness
Billions of bits of information arrive at the retina every moment, but only a fraction of this information reaches awareness. That isn't to say, however, that unseen information is lost or unimportant. Many visual processes continue even without awareness, influencing behavior immediately, or even days later.
Which visual processes require awareness and which do not? How do different types of visual masking work, and what do their similarities and differences tell us about general mechanisms of visual awareness, if there are any? Determining the perceptual processes that do and do not require awareness may reveal the purpose of conscious vision.
Perceptual Bias and Anxiety
In our newest line of research, we collaborate with Michelle Rozenman, PhD, in DU's Clinical Child Psychology program to examine basic cognitive and visual mechanisms of anxiety in childhood and adulthood. We target a mechanism of anxiety known as interpretation bias, in which people appraise ambiguous information as threatening.
Recent work suggests that, compared to unaffected controls, anxious adults and children are more likely to appraise ambiguous information as threatening, although in children this work is limited to interpretations of linguistic information. Our own work in the VPEC Lab illustrates that a similar negative bias occurs when non-anxious adults interpret visual information, like facial expressions. Yet visual bias among anxious adults and children, and the link between visual and linguistic cognitive processes across development, remain uncharted territory.
In our collaboration, we aim to bridge these gaps and comprehensively examine interpretation bias in linguistic and visual processing across development with anxious and non-anxious youth and adults.
Perceptual Integration/Ensemble Perception
How do people see the "gist"? Groups of objects and crowds of people are nearly everywhere. And with only a glance, people can appreciate their collective properties, developing answers to questions like "How happy is the crowd, overall?" or "Where is that flock of geese headed?"
But there is a paradox about this kind of perception. Attention and memory have limited capacity, only allowing people to see and remember information about a few things at a time. How does the visual system overcome these bottlenecks to see the gist of a scene? What mechanisms integrate and summarize lots of visual information, all at once, allowing people to appreciate groups of things as collective entities?
Autism Spectrum Disorder and Visual Perception
Autism spectrum disorder (ASD) is increasingly prevalent, affecting 1 in every 68 children in the United States. In addition to presenting children with pervasive social and behavioral challenges, ASD is also associated with changes in visual perception. Do children with ASD actually see their worlds differently than other children? How do children with ASD perceive complex social cues, like a person's gaze direction, and how might visual perception influence the way children with ASD understand another person's mental state or engage in social interactions? How do people with ASD perceive gaze on a robot's face?
With funding from the NICHD, we collaborate with Mohammad Mahoor, PhD, in the Ritchie School of Engineering and Computer Science at the University of Denver to examine these questions.
Face Perception
Faces are fascinating. They're visually complex, they're dynamic and they're just about everywhere. People contort their faces to express their emotions and direct their attention. Faces also provide a means to understand what other people are thinking, feeling and seeing.
While most people have no trouble discriminating extreme facial displays, like seething rage or direct gaze, the facial cues people typically encounter are more subtle and visible for only an instant. How much about a person's internal state can we actually access in an instant? How does the visual system determine where a person is looking? Does hearing emotional sounds change the way people see emotion?
Perceptual Similarity
The world is full of redundancy. Many objects and people appear strikingly similar. Yet somehow people understand the world in terms of distinct categories: tall or flat, happy or neutral, walking toward me or to the side, familiar or unfamiliar.
This segmentation is especially important when viewing many things at once, allowing people to find the oddball, or the object of a search. What visual mechanisms allow people to parse lots of similar information at once, segmenting the world into distinct categories and directing attention toward the qualities that make each object or person unique?
Visual Development
What is visual experience like in childhood? Many perceptual abilities take years to develop, including distinguishing objects in time and space, filtering out distracting visual information and even remembering what has been seen.
Yet, just like adults, children must navigate crowded and cluttered environments. Is the developing visual system equipped with any mechanisms for seeing many things at once? How does the ability to see the gist develop from pre-school age to adulthood? How do children perceive complex social cues like gaze and facial expression?