Crossmodal source identification in speech perception

Lorin Lachs, David Pisoni

Research output: Contribution to journal › Article

45 Citations (Scopus)

Abstract

Four experiments examined the nature of multisensory speech information. In Experiment 1, participants were asked to match heard voices with dynamic visual-alone video clips of speakers' articulating faces. This cross-modal matching task was used to examine whether vocal source matching can be accomplished across sensory modalities. The results showed that observers could match speaking faces and voices, indicating that information about the speaker was available for cross-modal comparisons. In a series of follow-up experiments, several stimulus manipulations were used to determine some of the critical acoustic and optic patterns necessary for specifying cross-modal source information. The results showed that cross-modal source information was not available in static visual displays of faces and was not contingent on a prominent acoustic cue to vocal identity (f0). Furthermore, cross-modal matching was not possible when the acoustic signal was temporally reversed.
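The temporal-reversal manipulation in the final experiment has a precise signal-processing reading: playing the waveform backwards preserves the long-term magnitude spectrum (so a static cue such as average f0 survives), while destroying the time-varying dynamics of articulation. Below is a minimal sketch of that operation, assuming a mono recording held as a NumPy array; the function name and the synthetic test tone are illustrative, not the authors' stimuli.

import numpy as np

def reverse_audio(signal: np.ndarray) -> np.ndarray:
    # Time-reverse the waveform. The long-term magnitude spectrum is
    # unchanged by reversal, but the temporal fine structure of
    # articulation is not.
    return signal[::-1].copy()

# Illustrative use on a synthetic 120-Hz tone standing in for an utterance.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
demo = np.sin(2 * np.pi * 120.0 * t)
reversed_demo = reverse_audio(demo)
assert np.allclose(reversed_demo[::-1], demo)  # reversing twice restores the signal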

Original language: English
Pages (from-to): 159-187
Number of pages: 29
Journal: Ecological Psychology
Volume: 16
Issue number: 3
State: Published - 2004

Fingerprint

Speech perception
Acoustics
Optics
Cues
Information sources
Display devices
Experiments
Speech

ASJC Scopus subject areas

  • Psychology (all)
  • Experimental and Cognitive Psychology

Cite this

Crossmodal source identification in speech perception. / Lachs, Lorin; Pisoni, David.

In: Ecological Psychology, Vol. 16, No. 3, 2004, p. 159-187.

Research output: Contribution to journal › Article

@article{a99f301a07f64334854bc20c10eccd03,
  title = "Crossmodal source identification in speech perception",
  abstract = "Four experiments examined the nature of multisensory speech information. In Experiment 1, participants were asked to match heard voices with dynamic visual-alone video clips of speakers' articulating faces. This cross-modal matching task was used to examine whether vocal source matching can be accomplished across sensory modalities. The results showed that observers could match speaking faces and voices, indicating that information about the speaker was available for cross-modal comparisons. In a series of follow-up experiments, several stimulus manipulations were used to determine some of the critical acoustic and optic patterns necessary for specifying cross-modal source information. The results showed that cross-modal source information was not available in static visual displays of faces and was not contingent on a prominent acoustic cue to vocal identity (f0). Furthermore, cross-modal matching was not possible when the acoustic signal was temporally reversed.",
  author = "Lorin Lachs and David Pisoni",
  year = "2004",
  language = "English",
  volume = "16",
  pages = "159--187",
  journal = "Ecological Psychology",
  issn = "1040-7413",
  publisher = "Routledge",
  number = "3",
}

TY - JOUR
T1 - Crossmodal source identification in speech perception
AU - Lachs, Lorin
AU - Pisoni, David
PY - 2004
Y1 - 2004
N2 - Four experiments examined the nature of multisensory speech information. In Experiment 1, participants were asked to match heard voices with dynamic visual-alone video clips of speakers' articulating faces. This cross-modal matching task was used to examine whether vocal source matching can be accomplished across sensory modalities. The results showed that observers could match speaking faces and voices, indicating that information about the speaker was available for cross-modal comparisons. In a series of follow-up experiments, several stimulus manipulations were used to determine some of the critical acoustic and optic patterns necessary for specifying cross-modal source information. The results showed that cross-modal source information was not available in static visual displays of faces and was not contingent on a prominent acoustic cue to vocal identity (f0). Furthermore, cross-modal matching was not possible when the acoustic signal was temporally reversed.
AB - Four experiments examined the nature of multisensory speech information. In Experiment 1, participants were asked to match heard voices with dynamic visual-alone video clips of speakers' articulating faces. This cross-modal matching task was used to examine whether vocal source matching can be accomplished across sensory modalities. The results showed that observers could match speaking faces and voices, indicating that information about the speaker was available for cross-modal comparisons. In a series of follow-up experiments, several stimulus manipulations were used to determine some of the critical acoustic and optic patterns necessary for specifying cross-modal source information. The results showed that cross-modal source information was not available in static visual displays of faces and was not contingent on a prominent acoustic cue to vocal identity (f0). Furthermore, cross-modal matching was not possible when the acoustic signal was temporally reversed.
UR - http://www.scopus.com/inward/record.url?scp=10844243735&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=10844243735&partnerID=8YFLogxK
M3 - Article
AN - SCOPUS:10844243735
VL - 16
SP - 159
EP - 187
JO - Ecological Psychology
JF - Ecological Psychology
SN - 1040-7413
IS - 3
ER -
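The RIS record above is line-oriented ("TAG - value"), so it can be read with the Python standard library alone. A minimal sketch, assuming one record per file and keeping repeated tags (AU, UR) as lists; the function and file names are illustrative.

def parse_ris(text: str) -> dict:
    """Parse a single RIS record into {tag: [values]}, keeping repeated tags."""
    record = {}
    for raw in text.splitlines():
        line = raw.strip()
        if " - " not in line:
            continue  # skips blank lines and the bare "ER -" terminator
        tag, _, value = line.partition(" - ")
        tag, value = tag.strip(), value.strip()
        if value:
            record.setdefault(tag, []).append(value)
    return record

# Illustrative use, assuming the record above is saved to a hypothetical file.
with open("lachs2004.ris") as fh:
    record = parse_ris(fh.read())
print(record["T1"][0])  # -> Crossmodal source identification in speech perception
print(record["AU"])     # -> ['Lachs, Lorin', 'Pisoni, David']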