Specification of cross-modal source information in isolated kinematic displays of speech

Lorin Lachs, David B. Pisoni

Research output: Contribution to journal › Article


Abstract

Information about the acoustic properties of a talker's voice is available in optical displays of speech, and vice versa, as evidenced by perceivers' ability to match faces and voices based on vocal identity. The present investigation used point-light displays (PLDs) of visual speech and sinewave replicas of auditory speech in a cross-modal matching task to assess perceivers' ability to match faces and voices under conditions in which only isolated kinematic information about vocal tract articulation was available. These stimuli were also used in a word recognition experiment under auditory-alone and audiovisual conditions. The results showed that isolated kinematic displays provide enough information to match the source of an utterance across sensory modalities. Furthermore, isolated kinematic displays can be integrated to yield better word recognition performance under audiovisual conditions than under auditory-alone conditions. The results are discussed in terms of their implications for describing the nature of speech information and current theories of speech perception and spoken word recognition.
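To make the "sinewave replica" stimuli concrete: a sinewave replica (Remez et al., 1981) reduces an utterance to a few time-varying sinusoids that follow the center frequencies of the formants, discarding the natural voice quality while preserving the kinematics of articulation. The sketch below illustrates the general synthesis idea in Python; the formant tracks are hypothetical placeholders, not values from this study, and real replicas use tracks estimated from the original recording (e.g., by LPC analysis).

```python
import numpy as np

# Minimal sketch of sinewave speech synthesis: sum a few sinusoids
# whose instantaneous frequencies follow (hypothetical) formant tracks.

fs = 16000                         # sample rate (Hz)
dur = 0.5                          # duration of the synthetic token (s)
t = np.arange(int(fs * dur)) / fs

# Illustrative formant tracks (Hz): F1 rising, F2 falling, F3 steady,
# roughly like a vowel transition. These are placeholder values.
f1 = np.linspace(300, 700, t.size)
f2 = np.linspace(2200, 1200, t.size)
f3 = np.full(t.size, 2800.0)

def tone(freq_track, amp):
    # Integrate instantaneous frequency to obtain phase, then synthesize.
    phase = 2 * np.pi * np.cumsum(freq_track) / fs
    return amp * np.sin(phase)

# Higher formants get lower amplitudes, as in natural speech spectra.
replica = tone(f1, 1.0) + tone(f2, 0.5) + tone(f3, 0.25)
replica /= np.max(np.abs(replica))  # normalize to avoid clipping
```

The result is intelligible as speech to many listeners despite containing no harmonic voice source, which is what makes such displays useful for isolating kinematic information from talker-specific acoustic detail.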

Original language: English (US)
Pages (from-to): 507-518
Number of pages: 12
Journal: Journal of the Acoustical Society of America
Volume: 116
Issue number: 1
DOIs
State: Published - Jul 1 2004

ASJC Scopus subject areas

  • Arts and Humanities (miscellaneous)
  • Acoustics and Ultrasonics

