Cross-Modal Source Information and Spoken Word Recognition

Lorin Lachs, David Pisoni

Research output: Contribution to journal › Article

19 Citations (Scopus)

Abstract

In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.

Original language: English
Pages (from-to): 378-396
Number of pages: 19
Journal: Journal of Experimental Psychology: Human Perception and Performance
Volume: 30
Issue number: 2
State: Published - Apr 2004

ASJC Scopus subject areas

  • Cognitive Neuroscience
  • Experimental and Cognitive Psychology

Cite this

Cross-Modal Source Information and Spoken Word Recognition. / Lachs, Lorin; Pisoni, David.

In: Journal of Experimental Psychology: Human Perception and Performance, Vol. 30, No. 2, 04.2004, p. 378-396.

Research output: Contribution to journal › Article

@article{c6d7312d14364a39b7b3e40f3f14a0ef,
title = "Cross-Modal Source Information and Spoken Word Recognition",
abstract = "In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.",
author = "Lorin Lachs and David Pisoni",
year = "2004",
month = apr,
language = "English",
volume = "30",
pages = "378--396",
journal = "Journal of Experimental Psychology: Human Perception and Performance",
issn = "0096-1523",
publisher = "American Psychological Association Inc.",
number = "2",
}

TY - JOUR

T1 - Cross-Modal Source Information and Spoken Word Recognition

AU - Lachs, Lorin

AU - Pisoni, David

PY - 2004/4

Y1 - 2004/4

N2 - In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.

AB - In a cross-modal matching task, participants were asked to match visual and auditory displays of speech based on the identity of the speaker. The present investigation used this task with acoustically transformed speech to examine the properties of sound that can convey cross-modal information. Word recognition performance was also measured under the same transformations. The authors found that cross-modal matching was only possible under transformations that preserved the relative spectral and temporal patterns of formant frequencies. In addition, cross-modal matching was only possible under the same conditions that yielded robust word recognition performance. The results are consistent with the hypothesis that acoustic and optical displays of speech simultaneously carry articulatory information about both the underlying linguistic message and indexical properties of the talker.

UR - http://www.scopus.com/inward/record.url?scp=1842589295&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=1842589295&partnerID=8YFLogxK

M3 - Article

VL - 30

SP - 378

EP - 396

JO - Journal of Experimental Psychology: Human Perception and Performance

JF - Journal of Experimental Psychology: Human Perception and Performance

SN - 0096-1523

IS - 2

ER -