Cross-Modal Source Information and Spoken Word Recognition

Lorin Lachs, David B. Pisoni

Research output: Contribution to journal › Article


Abstract

In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.

Original language: English (US)
Pages (from-to): 378-396
Number of pages: 19
Journal: Journal of Experimental Psychology: Human Perception and Performance
Volume: 30
Issue number: 2
State: Published - Apr 2004

ASJC Scopus subject areas

  • Experimental and Cognitive Psychology
  • Arts and Humanities (miscellaneous)
  • Behavioral Neuroscience
