Visual speech primes open-set recognition of spoken words

Adam B. Buchwald, Stephen J. Winters, David Pisoni

Research output: Contribution to journal › Article

11 Citations (Scopus)

Abstract

Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception.

Original language: English
Pages (from-to): 580-610
Number of pages: 31
Journal: Language and Cognitive Processes
Volume: 24
Issue number: 4
DOIs: 10.1080/01690960802536357
State: Published - 2009

Keywords

  • Audiovisual priming
  • Lexical access
  • Visual speech
  • Word recognition

ASJC Scopus subject areas

  • Linguistics and Language
  • Experimental and Cognitive Psychology

Cite this

Visual speech primes open-set recognition of spoken words. / Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David.

In: Language and Cognitive Processes, Vol. 24, No. 4, 2009, p. 580-610.

Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David. / Visual speech primes open-set recognition of spoken words. In: Language and Cognitive Processes. 2009; Vol. 24, No. 4. pp. 580-610.
@article{5d8f3aaf2507455fa554b6c27c1c28ac,
title = "Visual speech primes open-set recognition of spoken words",
abstract = "Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception.",
keywords = "Audiovisual priming, Lexical access, Visual speech, Word recognition",
author = "Buchwald, {Adam B.} and Winters, {Stephen J.} and David Pisoni",
year = "2009",
doi = "10.1080/01690960802536357",
language = "English",
volume = "24",
pages = "580--610",
journal = "Language and Cognitive Processes",
issn = "0169-0965",
publisher = "Taylor and Francis",
number = "4",

}

TY - JOUR

T1 - Visual speech primes open-set recognition of spoken words

AU - Buchwald, Adam B.

AU - Winters, Stephen J.

AU - Pisoni, David

PY - 2009

Y1 - 2009

N2 - Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception.

AB - Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception.

KW - Audiovisual priming

KW - Lexical access

KW - Visual speech

KW - Word recognition

UR - http://www.scopus.com/inward/record.url?scp=70549097068&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=70549097068&partnerID=8YFLogxK

U2 - 10.1080/01690960802536357

DO - 10.1080/01690960802536357

M3 - Article

AN - SCOPUS:70549097068

VL - 24

SP - 580

EP - 610

JO - Language and Cognitive Processes

JF - Language and Cognitive Processes

SN - 0169-0965

IS - 4

ER -