
Session: Prosody 1

18 - Attina, V, Cathiard, M-A, Beautemps, D (Grenoble): "French Cued Speech production: giving a hand to speech"

Thursday 16 June, 11:00-11:30
(F08)


-  Attina, Virginie
-  Cathiard, Marie-Agnès
-  Beautemps, Denis

(Institut de la Communication Parlée, UMR CNRS 5009, Grenoble)

French Cued Speech production: giving a hand to speech

Human speech is multimodal by nature: it is now generally accepted that perception of the acoustic speech signal is largely enhanced by visible face and lip gestures. In addition, spontaneous non-verbal gestures are an integral part of language (McNeill, 1992). The coproduction of sound, facial movements, and manual gestures allows optimal communication and understanding. The mechanisms underlying the relationship between gestures and speech are at the center of many studies. Concerning timing, gesture onset has been found to precede speech emission.

This study aims to extend the sound-gesture timing relationship observed for spontaneous speech, as outlined above, to the additional hand gestures needed to improve oral speech reception for deaf people. This work focuses on French Cued Speech production. Cued Speech (CS; Cornett, 1967) is an effective system that uses hand cues placed near the speaker's face to disambiguate lip shapes. While uttering, the speaker codes each consonant-vowel (CV) syllable with a manual cue: the shape of the hand distinguishes among consonants, while its position around the face disambiguates vowels. To deepen our understanding of CS effectiveness (demonstrated both for speech perception and for language acquisition), it is important to know how CS is produced.
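The CS coding principle described above, one (hand shape, hand position) cue per CV syllable, can be sketched as a simple lookup. The shape numbers and position names below are invented for illustration and do not reproduce the actual French Cued Speech chart:

```python
# Illustrative sketch of the Cued Speech cueing principle:
# hand shape encodes the consonant, hand position encodes the vowel.
# These particular assignments are hypothetical, not the real CS chart.
CONSONANT_SHAPE = {"p": 1, "d": 2, "k": 3}
VOWEL_POSITION = {"a": "side", "i": "throat", "o": "chin"}

def cue_for_syllable(cv):
    """Return the (hand shape, hand position) cue for a CV syllable."""
    consonant, vowel = cv[0], cv[1]
    return CONSONANT_SHAPE[consonant], VOWEL_POSITION[vowel]

print(cue_for_syllable("pa"))  # -> (1, 'side')
```

Because shape and position vary independently, a small set of shapes and positions suffices to disambiguate the full CV syllable inventory when combined with lip reading.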

Four trained French CS speakers were video-recorded while uttering and coding CV syllabic sequences. Data were obtained by video tracking of hand and oral gestures in order to extract signal features related to lip contours and hand movements. Lip area, x and y hand positions, and the acoustic signal were manually labeled at onset and offset times. Results reveal a similar temporal pattern for the four subjects: the hand gesture onset precedes the syllable onset, and the hand reaches its cue position during the first part of the consonant, well before the vowel's labial target.
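The timing comparison above, measuring how far the hand gesture leads the acoustic syllable, amounts to differencing the manually labeled onset times. A minimal sketch, with invented onset values (in seconds) standing in for the study's labels:

```python
# Hypothetical sketch of the onset-timing analysis: given manually
# labeled onset times for the hand gesture and the acoustic syllable,
# compute the hand's lead (positive = hand started first).
# The example values are illustrative, not data from the study.

def hand_lead_times(hand_onsets, syllable_onsets):
    """Lead of each hand gesture over its syllable, in seconds."""
    return [s - h for h, s in zip(hand_onsets, syllable_onsets)]

hand = [0.10, 0.62, 1.15]       # labeled hand-movement onsets
syllable = [0.28, 0.80, 1.33]   # labeled acoustic syllable onsets

leads = hand_lead_times(hand, syllable)
print(leads)  # all positive: the hand anticipates each syllable
```

A consistently positive lead across subjects would correspond to the anticipatory pattern reported here: the hand is already at its cue position during the consonant, before the vowel's labial target.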