Talker and lexical effects on audiovisual word recognition by adults with cochlear implants

Adam R. Kaiser, Karen Iler Kirk, Lorin Lachs, David B. Pisoni

Research output: Contribution to journal › Article

74 Citations (Scopus)

Abstract

The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Rα, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
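The visual enhancement measure described above normalizes audiovisual gain against the room left for improvement above auditory-only performance. As a rough illustration only: the classic formulation of relative visual enhancement (in the style of Sumby & Pollack, 1954) is R = (AV − A) / (1 − A); the paper's exact Rα definition may differ, so treat the following sketch as an assumption, not the authors' method.

```python
# Hedged sketch of relative visual enhancement, R = (AV - A) / (1 - A).
# This is the standard Sumby & Pollack-style normalization, assumed here;
# the article's R_alpha may be defined differently.

def visual_enhancement(av: float, a: float) -> float:
    """Proportion of the possible improvement over auditory-only
    performance that is realized under audiovisual presentation.

    av, a: proportion-correct scores in [0, 1] for audiovisual and
    auditory-only formats; a must be strictly below 1 (ceiling).
    """
    if not (0.0 <= a < 1.0 and 0.0 <= av <= 1.0):
        raise ValueError("scores must be proportions, with a < 1")
    return (av - a) / (1.0 - a)

# Example: auditory-only 40% correct, audiovisual 70% correct:
# half of the available headroom above auditory-only is realized.
print(visual_enhancement(0.70, 0.40))  # 0.5
```

A measure of this form explains why the same absolute AV gain counts for more when auditory-only performance is already high: the denominator shrinks as A approaches ceiling.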

Original language: English (US)
Pages (from-to): 390-404
Number of pages: 15
Journal: Journal of Speech, Language, and Hearing Research
Volume: 46
Issue number: 2
DOI: 10.1044/1092-4388(2003/032)
State: Published - Apr 1 2003

Keywords

  • Audiovisual
  • Cochlear implants
  • Hearing impairment
  • Speech perception

ASJC Scopus subject areas

  • Rehabilitation
  • Health Professions(all)
  • Linguistics and Language

Cite this

Talker and lexical effects on audiovisual word recognition by adults with cochlear implants. / Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

In: Journal of Speech, Language, and Hearing Research, Vol. 46, No. 2, 01.04.2003, p. 390-404.


@article{8ae011cfea85434aad1188c343b1e5f0,
title = "Talker and lexical effects on audiovisual word recognition by adults with cochlear implants",
abstract = "The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Rα, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.",
keywords = "Audiovisual, Cochlear implants, Hearing impairment, Speech perception",
author = "Kaiser, {Adam R.} and Kirk, {Karen Iler} and Lorin Lachs and Pisoni, {David B.}",
year = "2003",
month = "4",
day = "1",
doi = "10.1044/1092-4388(2003/032)",
language = "English (US)",
volume = "46",
pages = "390--404",
journal = "Journal of Speech, Language, and Hearing Research",
issn = "1092-4388",
publisher = "American Speech-Language-Hearing Association (ASHA)",
number = "2",
}

TY - JOUR

T1 - Talker and lexical effects on audiovisual word recognition by adults with cochlear implants

AU - Kaiser, Adam R.

AU - Kirk, Karen Iler

AU - Lachs, Lorin

AU - Pisoni, David B.

PY - 2003/4/1

Y1 - 2003/4/1

N2 - The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Rα, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.

AB - The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Rα, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.

KW - Audiovisual

KW - Cochlear implants

KW - Hearing impairment

KW - Speech perception

UR - http://www.scopus.com/inward/record.url?scp=0037390684&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0037390684&partnerID=8YFLogxK

U2 - 10.1044/1092-4388(2003/032)

DO - 10.1044/1092-4388(2003/032)

M3 - Article

C2 - 14700380

AN - SCOPUS:0037390684

VL - 46

SP - 390

EP - 404

JO - Journal of Speech, Language, and Hearing Research

JF - Journal of Speech, Language, and Hearing Research

SN - 1092-4388

IS - 2

ER -