title: Making the invisible visible: Verbal, not visual, cues enhance visual detection
author: tianzc    time: 2015-8-5 10:47

PHILADELPHIA – Cognitive psychologists at the University of Pennsylvania and the University of California have shown that an image displayed too quickly for an observer to see can be detected if the observer first hears the name of the object.
Through a series of experiments published in the journal PLoS ONE, the researchers found that hearing the name of an object improved participants' ability to see it, even when the object was flashed onscreen under conditions and at speeds (50 milliseconds) that would otherwise render it invisible. Surprisingly, the effect appeared to be specific to language: a visual preview did not make the invisible target visible. Getting a good look at the object before the experiment did nothing to help participants see it when it was flashed.
The study demonstrates that language can change what we see and can enhance perceptual sensitivity. The finding that verbal cues influence even the most elementary stages of visual processing informs our understanding of how language affects perception.
Researchers led by psychologist Gary Lupyan, assistant professor in the Department of Psychology at Penn, had participants complete an object detection task in which they made an object-present or object-absent decision about briefly presented capital letters.
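In presence/absence tasks like this one, perceptual sensitivity is conventionally quantified with the signal-detection statistic d′, which separates genuine detection ability from a bias toward simply answering "present." The paper's exact analysis is not described in this release, so the Python sketch below is purely illustrative: the function name and the sample hit and false-alarm rates are assumptions, not values from the study.

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Illustrative numbers only: a cue that raises the hit rate while the
# false-alarm rate stays flat yields a higher d', i.e., better detection
# rather than a mere shift in response bias.
print(d_prime(0.80, 0.20))  # hypothetical cued condition   -> ~1.68
print(d_prime(0.60, 0.20))  # hypothetical uncued condition -> ~1.10
```

On this kind of measure, "verbal cues enhance sensitivity" means the cued condition shows a reliably higher d′, not just a greater willingness to report seeing something.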
Other experiments within the study further defined the relationship between auditory cues and the identification of visual images. For example, the researchers reasoned that if auditory cues aid detection by prompting participants to mentally picture the object in a particular place, the cuing effect might disappear when the target's position on the screen varied. Instead, verbal cues still clued participants in: no matter where on the screen the target appeared, the effect of the auditory cue was undiminished, an advantage over visual cues.
The researchers also found that the magnitude of the cuing effect correlated with each participant's own estimate of the vividness of their mental imagery. Using a common questionnaire, they found that those who rate their mental imagery as particularly vivid scored higher when provided with an auditory cue.
The team went on to determine that the auditory cue improved detection only when the cue was correct; that is, the target image and the verbal cue had to match. According to the researchers, hearing the object's label evokes a mental image of it, strengthening its visual representation and thus making it visible.
“This research speaks to the idea that perception is shaped moment-by-moment by language,” said Lupyan. “Although only English speakers were tested, the results suggest that because words in different languages pick out different things in the environment, learning different languages can shape perception in subtle, but pervasive ways.”
This study is part of a broader effort by Lupyan and other Penn psychologists to understand how high-level cognitive expectations, in this case verbal cues, can influence low-level sensory processing. For years, cognitive psychologists have known that directing participants' attention to a general location improves reaction times to target objects appearing in that location. More recently, experimental evidence has shown that semantic information can influence what one sees in surprising ways. For instance, hearing words associated with directions of motion, such as a falling "bomb," can interfere with an observer's ability to quickly recognize the next movement they see. Moreover, hearing a word that labels a target improves the speed and efficiency of visual search. For instance, when searching for the number 2 among 5s, participants find the target faster when they actually hear "find the two" immediately before the search, even when 2 has been the target all along.
The study was conducted by Lupyan of Penn's Department of Psychology and Michael Spivey of the University of California, Merced.
Research was conducted with funding from the National Science Foundation.