Crossmodal source identification in speech perception

Lorin Lachs, David B. Pisoni

Research output: Contribution to journal › Article › peer-review


Abstract

Four experiments examined the nature of multisensory speech information. In Experiment 1, participants were asked to match heard voices with dynamic visual-alone video clips of speakers' articulating faces. This cross-modal matching task was used to examine whether vocal source matching can be accomplished across sensory modalities. The results showed that observers could match speaking faces and voices, indicating that information about the speaker was available for cross-modal comparisons. In a series of follow-up experiments, several stimulus manipulations were used to determine some of the critical acoustic and optic patterns necessary for specifying cross-modal source information. The results showed that cross-modal source information was not available in static visual displays of faces and was not contingent on a prominent acoustic cue to vocal identity (f0). Furthermore, cross-modal matching was not possible when the acoustic signal was temporally reversed.

Original language: English (US)
Pages (from-to): 159-187
Number of pages: 29
Journal: Ecological Psychology
Volume: 16
Issue number: 3
DOIs
State: Published - 2004

ASJC Scopus subject areas

  • Social Psychology
  • Computer Science (all)
  • Ecology, Evolution, Behavior and Systematics
  • Experimental and Cognitive Psychology

