Visual speech primes open-set recognition of spoken words

Adam B. Buchwald, Stephen J. Winters, David B. Pisoni

Research output: Contribution to journal › Article

11 Scopus citations

Abstract

Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins, 2004). In this paper, we used a cross-modality repetition priming paradigm with visual speech lexical primes and auditory lexical targets to explore the nature of this priming effect. First, we report that participants identified spoken words mixed with noise more accurately when the words were preceded by a visual speech prime of the same word compared with a control condition. Second, analyses of the responses indicated that both correct and incorrect responses were constrained by the visual speech information in the prime. These complementary results suggest that the visual speech primes have an effect on lexical access by increasing the likelihood that words with certain phonetic properties are selected. Third, we found that the cross-modality repetition priming effect was maintained even when visual and auditory signals came from different speakers, and thus different instances of the same lexical item. We discuss implications of these results for current theories of speech perception.

Original language: English (US)
Pages (from-to): 580-610
Number of pages: 31
Journal: Language and Cognitive Processes
Volume: 24
Issue number: 4
DOIs
State: Published - Dec 1 2009

Keywords

  • Audiovisual priming
  • Lexical access
  • Visual speech
  • Word recognition

ASJC Scopus subject areas

  • Linguistics and Language
  • Experimental and Cognitive Psychology