Audio-visual speech perception without speech cues

Helena M. Saldana, David Pisoni, Jennifer M. Fellowes, Robert E. Remez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution



A series of experiments was conducted in which listeners were presented with audio-visual sentences in a transcription task. The visual component of the stimuli consisted of a male talker's face. The acoustic components consisted of: natural speech; envelope-shaped noise, which preserved the duration and amplitude of the original speech waveform; and various types of sinewave speech signals that followed the formant frequencies of a natural utterance. Further experiments demonstrated that the intelligibility of single tones increased differentially depending on which formant analog was presented. The gain in intelligibility from an added video display was greater for sinewave speech than for envelope-shaped noise.

Original language: English
Title of host publication: International Conference on Spoken Language Processing, ICSLP, Proceedings
Editors: Anon
Number of pages: 4
State: Published - 1996
Event: Proceedings of the 1996 International Conference on Spoken Language Processing, ICSLP. Part 1 (of 4) - Philadelphia, PA, USA
Duration: Oct 3, 1996 - Oct 6, 1996

ASJC Scopus subject areas

  • Computer Science (all)

