Objective: The purpose of this case study was to investigate multimodal perceptual coherence in speech perception in an exceptionally good postlingually deafened cochlear implant user. His ability to perceive sinewave replicas of spoken sentences, and the extent to which he integrated sensory information from multimodal sources, were compared with those of a group of normal-hearing adult listeners to determine the contribution of natural auditory quality to the use of electrocochlear stimulation.

Design: The patient, "Mr. S," transcribed sinewave replicas of natural sentences under audio-only (AO), visual-only (VO), and audio-visual (A+V) conditions. His performance was compared with data collected from 25 normal-hearing adults.

Results: Although the normal-hearing participants outperformed Mr. S on AO sentences (65% versus 53% syllables correct), Mr. S was superior on VO sentences (43% versus 18%). On A+V sentences, his performance was comparable with that of the normal-hearing group (90% versus 86%). An estimate of the amount of visual enhancement, R, obtained from seeing the talker's face showed that Mr. S derived a larger gain from the additional visual information than the normal-hearing controls did (78% versus 59%).

Conclusions: The findings from this case study of an exceptionally good cochlear implant user suggest that he perceives the sinewave sentences on the basis of coherent variation across multimodal sensory inputs, not on the basis of lipreading ability alone. Electrocochlear stimulation is evidently useful in multimodal contexts because it preserves dynamic speech-like variation, despite the absence of speech-like auditory qualities.
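The abstract does not state the formula behind the visual-enhancement estimate R, but the reported values are consistent with the standard relative-gain measure R = (A+V − AO) / (100 − AO), expressed as a percentage. The sketch below is an illustration under that assumption; the small discrepancies from the reported 78% and 59% presumably reflect rounding of the underlying scores.

```python
def visual_enhancement(av_score: float, ao_score: float) -> float:
    """Relative visual gain, in percent: how much of the room left for
    improvement above the audio-only score is recovered when the
    talker's face is also visible.

    Assumes the conventional relative-gain formula
        R = (AV - AO) / (100 - AO) * 100,
    which is not spelled out in the abstract itself.
    """
    return 100.0 * (av_score - ao_score) / (100.0 - ao_score)

# Mr. S: AO = 53%, A+V = 90%  ->  ~79 (reported as 78%)
print(round(visual_enhancement(90, 53)))

# Normal-hearing controls: AO = 65%, A+V = 86%  ->  60 (reported as 59%)
print(round(visual_enhancement(86, 65)))
```

Under this formula, a larger R for Mr. S means he converts proportionally more of the remaining audio-only error into correct responses when visual information is added, even though his A+V score (90%) is only slightly above the controls' (86%).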