Speaking Bodies : 2 Case Studies of Speech and Gesture in Parent Interaction with their Deaf Cochlear-Implanted Children

Tara K. Massimino
University of Chicago
tkmassimino@hotmail.com
 

Sarah B. Van Deusen Phillips
University of Chicago
sbvandeu@uchicago.edu

Abstract

Here we present descriptive case studies that investigate how two hearing Castilian Spanish-speaking parents use speech and gesture with their respective deaf children who have received cochlear implants (CI), or electronic prosthetic ears1. The CI provides an intriguing context in which to observe child-directed communicative behavior because deaf CI children experience a significant transition from silence to sound, which may lead parents to change their communicative strategies over time in order to better help their children acquire speech. To investigate this possible change, we observed and analyzed parents' child-directed speech and gesture from the activation of their children's CIs until the children achieved one-word speech. The results of these analyses are compared to those for hearing parents of hearing children of the same ages. We illustrate that there are few differences in the overall complexity of the child-directed speech and gesture that hearing parents of hearing and of deaf CI children produce over the course of the deaf CI children's transition. There is, however, an intriguing change in the distribution of the complexity of the gestured utterances that parents use with their deaf CI children. Namely, compared to parents of hearing children, the parents of the deaf CI children increase the complexity of their child-directed gesture as the children begin to speak, though the overall length of gestured utterances remains roughly the same for the two groups of hearing parents. That is, their gestures are being redistributed into fewer, more complex utterances rather than more numerous, simpler ones. Taking the position that language is multimodal, we posit that the hearing parents of deaf CI children manipulate an integrated system of gesture and speech in order to provide their deaf children with increased opportunities to understand what is said to them, thereby increasing the probability that they will respond appropriately in speech.

Key words : Deaf Children, Cochlear Implant, Gesture, Parental Speech, Spain

1. Introduction

A wealth of language acquisition and socialization research illustrates that parents the world over provide their children with models, both linguistic and nonlinguistic, that enable them to enter into meaningful interaction and communication with other members of the social worlds into which they are born (Peck, 2000 ; Ochs, 1993 ; Schieffelin & Ochs, 1986). Although research in these areas has been largely predicated on parents and children fully sharing access to the same language (i.e., hearing-hearing or deaf-deaf parent-child dyads), not all parents and children share a language in common. As Goldin-Meadow & Saltzman (2000) have illustrated, for hearing parents of deaf children, this unequal access to a shared language may have consequences for how parents engage in child-directed gesture. Namely, when compared with hearing parents of hearing children, hearing parents of deaf children exhibited a trend of producing more speech-accompanying gestures when talking to their children. These investigators conclude that, by producing gesture at a greater frequency, the parents of the deaf children may be compensating, to some extent, for their children's inability to hear.

We wish to expand on this finding by postulating that parents take advantage of gesture and speech as an integrated system (McNeill, 1992) when structuring linguistic interaction with their children so that they can manipulate either behavior in order to increase the chances that their children successfully engage in communication when their access to the parents' language is compromised. To this end, we present two case studies focused on the child-directed speech and gesture that two unrelated Spanish hearing parents produce when interacting with their respective congenitally deaf children. However, unlike the children in Goldin-Meadow & Saltzman's study, these children have received cochlear implants (CI)2, or prosthetic electronic ears that will allow them to learn to hear and speak Castilian Spanish. Our goal is to determine whether these two parents change their child-directed speech and/or gesture as their respective child's impoverished access to a shared language improves over time. If they do, we would have evidence that parents take advantage of the flexibility of language as a multimodal system in order to accommodate the varying communicative needs of their children, thus facilitating the children's communication as much as possible in a constrained situation.

2. Background

The two parents featured here are both native monolingual speakers of Castilian Spanish who elected (with their spouses) to have their respective congenitally deaf children receive cochlear implants in the hope that they will learn to participate fully in the hearing world. In keeping with this desire, these parents also elected to enroll their children in oral education programs from the initial discovery of their deafness. These programs do not teach deaf children formal sign language, with the result that, at least initially, the children do not have equal access to a language shared with their parents or educators. Once the children received their cochlear implants, the oral programs were further augmented by extensive language habilitation, the goal being to facilitate their transition from deafness to electronically mediated hearing3.

Because oral language is very difficult for congenitally deaf children to successfully master before they fully learn to hear with their implants, the deaf CI children of the parents we focus on here initially depend largely on homesign systems4 that they develop for communicating with hearing people around them. Further, speech therapists encourage parents to use supportive signs to aid the children in learning to produce spoken words. Therefore, given the centrality of gesture for these children in communicating with their respective families, we analyze both the gesture and speech that the parents produce in their child-directed communication.

This is not to say that these parents are unique in integrating speech and gesture in their child-directed communicative behavior. The fact that people gesture when they speak is a particularly salient aspect of communication that is readily evident in everyday life. All one need do is look around at people interacting with each other to see that we constantly move our hands and bodies in meaningful ways as we speak. Because of this, gestures have been described and quantified in an expanding body of research concerning language, cognition, and meaning across a number of fields, including psychology and anthropology (Efron, 1972 ; Enfield, 2005 ; Goldin-Meadow, 2003 ; Kendon, 2004 ; McNeill, 1992 ; Tylor, 1964). Further evidence of the growing recognition of the importance of gesture in interaction comes from the fields of language development and socialization, where investigators have recently begun to explicitly study the role of nonverbal communication in shaping how children learn to engage in and structure language production and social interaction (Garrett & Baquedano-López, 2002 ; De León, 2000). Taken together, these fields of study illustrate that language is more than just speech : it is in reality multimodal in its incorporation of speech, gesture, body stance, facial expression, and many other communicative behaviors that carry social meaning.

Recognizing that communication is multimodal, it is important to understand how parents use speech and gesture with their children under normal circumstances. Striking evidence from the past thirty years suggests that, at least in Western cultures, parents modify their spoken language for young children by making their speech more repetitive, slower, and less grammatically complex (Snow, 1972 ; Newport et al., 1977, 1984), while subsequent research illustrates that parents also modify their gestures when interacting with their young children. For example, Bekken (1989) demonstrates that parents produce child-specific gestures when interacting with their young children. Parents therefore produce a child-directed register not only in speech but also in gesture, and researchers have hypothesized that these registers aid children in acquiring language.

But what is the relationship between parents' child-directed speech and gesture behavior and child language acquisition ? A recent study by Goodrich (2003) addressed this question and found that, contrary to her expectations, as children moved from one- to two-word speech, their mothers' child-directed communication became more complex, rather than the other way around. That is, mothers seemed to respond to the emergence of two-word speech in their children, rather than causing it. In terms of child-directed gesture, on the other hand, Goodrich found that mothers' gesture activity remained constant over the course of the change in the children, while the children's speech and gesture productions became more complex. In sum, parents change their child-directed speech, but not their gesture, in response to changes in a child's language production.

For deaf children, the story is a bit different. As noted in the introduction, Goldin-Meadow & Saltzman (2000) illustrate that hearing parents of deaf children showed a trend of producing higher rates of gesture in communicating with their children as compared to hearing parents of hearing children. Additionally, they found that the hearing parents of deaf children produce significantly more gestured requests for their children's attention than the hearing parents of hearing children did. It seems, then, that parents of deaf children use frequency of gesture as a means of compensating for their children's deafness when seeking to engage them in interaction. However, as work by Goldin-Meadow & Mylander (1998) illustrates, the gestures that hearing parents of deaf children produce in interaction with their children have little relation to the development of the children's homesign systems. Their gestures neither drive, nor respond to changes in their children's gestures. It appears, then, that although parents increase the frequency of their gestures in response to the child's deafness, they do not modify them in ways that are reflected in, or reflective of, the development of the children's homesign systems.

3. Predictions

Given the patterns of child-directed speech and gesture described above for hearing and deaf children, the case of parents communicating with cochlear-implanted children becomes an intriguing puzzle. In this situation, parents are first parents of deaf children, but later become parents of "hearing" children. Due to the significant change that implanted deaf children potentially experience in their ability to hear and speak, the question becomes whether or not their hearing parents' speech and/or gesture also undergo changes as the parents respond to changes in their children. Following Goodrich, if changes in hearing children's speech complexity drive reciprocal changes in hearing parents' child-directed speech, then we might expect to see the same in the hearing parents of deaf CI children as the children gain the ability to speak. Furthermore, given that Goodrich found no significant changes in child-directed gesture as hearing parents responded to their hearing children's increase in speech complexity, we would expect the same for the hearing parents of deaf CI children. However, following Goldin-Meadow & Saltzman's observation that hearing parents of deaf children gesture more frequently with their children than hearing parents of hearing children do, we would expect child-directed gesture to decrease in the deaf CI case as the children's speech ability increases, because the parents would have less need to compensate for their children's deafness.

4. Methods

4.1 Participants

The current study investigates the speech and gesture production of two hearing Castilian Spanish-speaking parents of two deaf children with cochlear implants (CI), Marisol and Juan5. Marisol is the mother of a deaf son who was two years old at the time he received his implant, which is also when we began collecting videotaped observations of this family in their home. She and her husband are both civil servants in the provincial government of Castilla-León in Valladolid, a city in north central Spain, and the family is comfortably upper middle class. Juan is the father of a deaf daughter who received her implant just before we began observing her, when she was just over three years of age. Juan is a member of the Spanish military and his wife is a housewife. The family lives in Badajoz, a town in the province of Extremadura on Spain's western border with Portugal, and is less well-off than Marisol's family, though still middle class. We chose to focus on Marisol and Juan because analyses currently underway by Goldin-Meadow and her colleagues illustrate that their children are strong communicators who interacted frequently with their parents during videotaped observations. In both cases, Marisol and Juan are the parents who principally engage their children in interaction during these recordings ; they are therefore the foci of our analyses.

The speech and gesture data that we coded for these two parents were selected from a larger body of data collected longitudinally over the course of 10 months in Spain in 2001-2002 by the second author. In addition, we coded speech and gesture data collected from 4 hearing parents (3 hearing mothers and 1 hearing father) of 4 hearing children during the same period. These data were collected cross-sectionally and serve as a control for the families of deaf children. In the analyses that follow, the sample of hearing parents of hearing children is referred to as the "hearing control".

The deaf children and their parents were video recorded in their homes once a month for 10 months, while the hearing families were recorded only once. Each session lasted one and a half to two hours and included a period in which children and parents engaged in spontaneous play with toys and books provided by the investigators. For the parents of deaf children, we selected three sessions to code that corresponded to important changes in their children's hearing and speaking abilities as they adapted to their implants. We then selected the parents of the hearing control group based on the ages of their children at the time their single sessions were recorded, to ensure that we had comparison data corresponding to the ages of the deaf CI children at each developmental point considered longitudinally. For each session selected for analysis, we coded the speech and gesture that the parents produced during 20-25 minutes of interaction with their respective children.

4.2. Procedures

Speech and gesture for Marisol and for Juan were coded at 3 points during their deaf CI children's progress with their cochlear implants. Stage 1, or Activation, refers to the initial activation of the children's implants, or shortly thereafter. At this point, the children begin to hear sound, but the volume of the input they receive is not sufficient for them to distinguish language. Stage 2, or the Preverbal stage, corresponds to a point four months after activation, but before children begin to produce speech. By this time, the children's implants have been augmented monthly so that they increasingly perceive auditory input, but they are not yet speaking. Stage 3, or the Verbal stage, which is three to four months after the data point for the Preverbal stage, refers to the point at which each child has begun to produce one-word speech. Table 1.1 (below) displays the names of the hearing parents of deaf CI children who are the focus of our study, the 3 developmental stages of their deaf CI children, and their children's ages at each developmental stage.


The four hearing parents of hearing children were selected according to their hearing children's ages, which were matched to the ages of each of the deaf CI children at their Activation and Verbal stages (note that the deaf children differ in age by a year at the activation of their implants). As the hearing children were well past the two-word stage when we collected their data, there was no reason to expect significant changes in the parents' gesture behavior between the first and third data points, and so we did not code a middle stage for this group.

The analyses that we present for both parent samples are based on coded transcriptions of all of the speech and gesture that the parents produced during 20-25 minutes of spontaneous play interaction with their children. Speech and gesture data were coded in FileMaker Pro, a relational database that allows for qualitative and quantitative coding, while numeric analyses of the coded data were conducted in Microsoft Excel. All of the results we present here are descriptive and none of them are statistically significant.

Utterances in both speech and gesture were coded for mean length of utterance and for complexity, measured in terms of clauses for speech and in terms of the number of gestures in a string for gesture. Note that a gestured utterance consists of a single gesture, or a string of gestures, bounded by the hands beginning in a relaxed or neutral position, moving into and through the gesture or string of gestures, and returning to a relaxed or neutral position.

4.3. Speech measures

Measures for assessing speech include mean length of (spoken) utterance (MLSU) and an assessment of the types of speech clauses found in the hearing parents' spoken communication. MLSU was calculated by dividing the total number of words by the total number of spoken utterances produced by each parent, with words counted according to the criteria for calculating MLSU in Spanish established by Wieselman Schulman (2004). Once we established the MLSU for the parents' speech production, we categorized their utterances in terms of three types of speech clauses : "non-clausal" (spoken utterances that do not contain a verb, e.g., word labels, prepositional phrases), "simple clauses" (utterances that contain only a single verb or action, e.g., "give it to me" ; "look"), and "complex clauses" (utterances that contain more than one verb or action, e.g., "let me do it" ; "I will help you roll play-doh"). Proportions of speech clause types were calculated by dividing the total number of utterances of a given clause type by the total number of spoken utterances. For both MLSU and the proportions of speech clause types, measures for the hearing parents of deaf CI children were calculated for each parent individually. Measures for the hearing control group were calculated first for each individual, and then by taking the mean of the individual results.
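
To make these calculations concrete, the sketch below shows how the two speech measures reduce to simple counts. It is a minimal Python illustration of the definitions above, not the authors' actual pipeline (coding was done in FileMaker Pro and the numeric analyses in Microsoft Excel), and the utterance data are invented for the example.

```python
from collections import Counter

# Each coded spoken utterance is reduced to (word count, number of verbs).
# These three utterances are invented examples, not data from the study.
utterances = [
    (2, 0),  # non-clausal, e.g. a word label with no verb
    (3, 1),  # simple clause, containing a single verb/action
    (6, 2),  # complex clause, containing two verbs/actions
]

def clause_type(n_verbs: int) -> str:
    """Categorize an utterance by verb count, per the scheme in section 4.3."""
    if n_verbs == 0:
        return "non-clausal"
    if n_verbs == 1:
        return "simple clause"
    return "complex clause"

# MLSU: total words divided by total number of spoken utterances.
mlsu = sum(words for words, _ in utterances) / len(utterances)

# Proportion of each clause type: count of that type over total utterances.
counts = Counter(clause_type(verbs) for _, verbs in utterances)
proportions = {t: n / len(utterances) for t, n in counts.items()}

print(f"MLSU = {mlsu:.2f}")  # 11 words / 3 utterances = 3.67
print(proportions)
```

For the hearing control group, these per-parent figures would then be averaged across the four individuals, as described above.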

4.4. Gesture measures

Measures for assessing the length and complexity of gesture utterances include mean length of gesture utterance (MLGU) and an assessment of gesture string complexity. MLGU was calculated by dividing the total number of gestures by the total number of gesture utterances. Gesture utterances were categorized as consisting of one sign, 2 signs, or >2 signs. For both MLGU and the proportions of gesture utterance types, measures for the hearing parents of deaf CI children were calculated for each parent individually. Measures for the hearing control group were calculated first for each individual, and then by taking the mean of the individual results.
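
A matching sketch for the gesture measures, again in illustrative Python with invented data, follows the same pattern : each coded gesture utterance is reduced to the number of gestures (signs) produced between the hands leaving and returning to a relaxed or neutral position.

```python
from collections import Counter

# Number of gestures in each coded gesture utterance (invented examples).
gesture_utterances = [1, 1, 2, 3, 1]

def string_type(n_signs: int) -> str:
    """Categorize a gesture utterance as one sign, 2 signs, or >2 signs."""
    if n_signs == 1:
        return "one sign"
    if n_signs == 2:
        return "2 signs"
    return ">2 signs"

# MLGU: total gestures divided by total number of gesture utterances.
mlgu = sum(gesture_utterances) / len(gesture_utterances)

# Proportion of each gesture utterance type.
counts = Counter(string_type(n) for n in gesture_utterances)
proportions = {t: n / len(gesture_utterances) for t, n in counts.items()}

print(f"MLGU = {mlgu:.2f}")  # 8 gestures / 5 utterances = 1.60
print(proportions)
```

As with the speech measures, the hearing control figures would be computed per parent and then averaged.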

5. Results I : Baseline Speech and Gesture Production

The main question under investigation in this study was whether or not hearing parents of deaf CI children change their child-directed speech and gesture as their children begin to develop their verbal capacities with their implants. Here we begin to answer this question by comparing the gesture and speech productions of the hearing parents of deaf CI children at the Activation stage with those of the hearing parents of age-matched hearing children. The analyses presented below establish a baseline from which to assess possible change in how the parents use speech and gesture with their children over time. The results indicate that, at our first Activation data point, there are no real differences in the length or complexity of the child-directed speech and gesture that hearing parents of hearing or deaf CI children produce.

5.1. Results for speech measures at Activation

With regard to speech, the results of the mean length of spoken utterance (MLSU) analysis, illustrated below, indicate that Marisol and Juan, the hearing parents of deaf CI children, though they tend to use slightly less complex speech than the hearing parents of hearing children when engaging their children in interaction, do not differ from the hearing control to any great extent. Figure 1.1 displays the MLSU for both hearing parents of deaf children (Marisol with pink bars and Juan with green) and the hearing control (blue bars) at the Activation stage.

Our second measure assesses speech complexity by categorizing spoken utterances into the speech clause types discussed above (non-clausal, simple clause, or complex clause). Figure 1.2 (below) illustrates the proportions of each speech clause type produced by the hearing parents of deaf CI children and the hearing control at the Activation stage.

Whereas the hearing control seems to favor single clause speech utterances (red bars) over both non-clausal (blue bars) and complex clause (yellow bars) utterances, Marisol (pink) and Juan (green) seem to favor non-clausal utterances over single clause utterances, the reverse of the pattern in the hearing control. Also note that neither hearing parent of a deaf CI child uses complex clauses, while the hearing control group does, although not at a very high rate. However, as in Figure 1.1, the differences between the hearing control and the hearing parents of deaf children are not great. With the exception of complex clauses, and the possible reversal of preference between non-clausal and single clause utterances, all of the parents look roughly similar in their use of non-clausal and single clause utterances at the Activation stage.

5.2. Results for gesture measures

With respect to gesture, the mean length of gesture utterance (MLGU) analysis illustrates another similarity between the child-directed communication of the hearing parents of deaf children and that of the hearing control group. Namely, the two groups look similar in the length of the gestured utterances they produce at the Activation data point. Recall that a gestured utterance begins when the hands lift from a neutral or relaxed position, continues through the gesture or gestures, and ends when the hands relax once more. Figure 1.3 (below) illustrates MLGU for the hearing parents of deaf children and the hearing control at the Activation stage. Note that, as in the speech measures above, there are no real differences between the two sets of parents in terms of the length of gestured utterances.

Figure 1.3


Our second gesture analysis addresses the complexity of the child-directed gestured utterances that parents produce by categorizing them according to one of the following gesture combination types : one sign, 2 signs, or >2 signs (N.B. in these figures, "sign" is equivalent to "gesture"). Figure 1.4 displays the gesture utterance combinations for the hearing parents of deaf children and the hearing control at the Activation stage.

Here we see that, overall, hearing parents, regardless of their children's hearing status, show a strong preference for one sign gesture utterances. Furthermore, there appears to be very little variation in the use of 2 sign or >2 sign gesture combinations between the two samples of hearing parents.

To summarize, then : at the Activation stage, hearing parents of deaf CI children and hearing parents of hearing children look similar in the length and complexity of the child-directed speech and gesture utterances that they produce.

6. Results II : Measures of speech and gesture production over time

Above, we established that there are no large differences in the length or complexity of the utterances that hearing parents of hearing and of deaf children produce in either speech or gesture at the activation of the deaf children's CIs. We now turn our attention to changes in the same measures over time. For these analyses, we look at the speech and gesture that the hearing parents of deaf CI children produced at the Activation, Preverbal, and Verbal developmental points, as defined in the methods section. The data for these parents are compared to the speech and gesture produced by the parents of hearing children. Recall that the cross-sectional data for the hearing control were selected by matching each hearing child's age to the ages of the deaf CI children at the developmental data points considered here.

6.1. Results for speech measures

In terms of speech, the MLSU and speech clause type analyses for each developmental point under consideration indicate that neither sample of hearing parents changes the length or complexity of their spoken utterances over time. Figure 2.1 (below) depicts MLSU for the hearing control parents and Juan at the Activation and Verbal stages (Juan in green, the age-matched hearing control in blue), while Marisol's data (pink bars) appear for all three developmental stages. Unfortunately, we did not have a sufficient amount of gesture and speech activity for Juan at the Preverbal stage to analyze a midpoint, so we were forced to drop that point from the analysis of his child-directed communication. However, we retain the data point for Marisol to give a fuller picture of how hearing parents of deaf CI children might be responding to changes in their children as the children learn to use their implants.

The data in Figure 2.1 show that neither hearing parents of deaf CI children nor the parents in the control group greatly change their MLSU over time.

The second analysis, of speech clause types, also suggests that neither group of hearing parents changes the complexity of their speech over time. Figure 2.2 illustrates the proportions of speech clause types produced by all of the hearing parents.

The patterns depicted in Figure 2.2 indicate that neither sample of hearing parents changes their use of non-clausal (blue bars) or single clause (red bars) utterances over time. Although Marisol and Juan begin to show evidence of using complex clauses (yellow bars) in speech, the numbers are small, and their use of complex clauses at the Verbal data point looks similar to that of the hearing parents of hearing children. Thus, it appears that, in general, the complexity of speech for all of the hearing parents remains more or less the same over time, regardless of their children's hearing status.

Interestingly, these results appear to differ from Goodrich's (2003) finding that hearing parents of hearing children respond to increased complexity in their children's speech by increasing the complexity of their child-directed speech. Although Marisol and Juan do begin to introduce complex utterances in speech over time, these changes are not statistically significant, and their speech production at the third data point does not look different from that of the hearing control. It is therefore difficult to determine whether they are indeed changing in response to their deaf CI children's emerging ability to speak ; further analyses, over a longer period and with more parents, are required to determine if this is the case.

6.2. Results for gesture measures

However, when we turn to look at what the parents are doing in gesture over developmental time, a surprising pattern emerges. Contrary to the results yielded by the speech complexity analyses, the gesture complexity analyses suggest that the complexity of Marisol's and Juan's gestured utterances changes over time, whereas the gesture complexity of the hearing control does not. The analysis of MLGU depicted below in Figure 2.3 shows evidence of an increase in the length of gesture utterances over time for Marisol and Juan and no similar increase for the hearing control.


Whereas the hearing control remains consistent in their MLGU over time, Juan and (especially) Marisol show evidence of increasing their number of gestures per gesture utterance as their deaf CI children begin to verbalize.

The changes in gesture complexity are more clearly represented in the analysis of gesture string combinations. Figure 2.4 depicts the gesture string combinations for all of the hearing parents.


As in the MLGU analysis, the members of the hearing control remain quite consistent in the proportions of gestured utterance types that they produce over time. In contrast, Marisol and Juan show clear trends of change in the proportions of one sign, 2 sign, and >2 sign gesture utterances that they produce. In particular, the hearing parents of deaf CI children decrease their use of one sign gesture utterances while increasing their use of both 2 sign and >2 sign utterances, meaning that, by the Verbal stage, they are perhaps more likely to string two or more gestures together in an utterance than their hearing control counterparts. It thus appears that both Marisol and Juan begin to concentrate their gestures in more complex utterances as their deaf CI children begin to verbalize. We therefore postulate that the parents of the deaf children might be responding to their children's emerging ability to hear and speak by recruiting gesture as an additional means of helping their children to understand what they are saying and to produce their own spoken contributions to the interaction.

7. Conclusions

The cases we present here provide a unique context in which to observe child-directed communicative behavior because deaf CI children experience a significant transition from silence to sound that is reflected in their emergent ability to speak. In designing our approach to understanding these parents' strategies for communicating with their children, we have aligned ourselves with a growing body of qualitative and quantitative approaches to interaction that recognize that gesture and speech work in tandem in a multimodal language system to structure interaction and communicate information between interlocutors. Working from this perspective, we illustrate how parents might take advantage of the flexibility in such a multimodal system in order to manipulate how information is encoded and presented in response to their perceptions of the communicative abilities of their children.

Although the data presented here come from two case studies and are purely descriptive, we see that there are few differences in the length and complexity of the spoken and gestured utterances that hearing parents of deaf CI children and hearing parents of hearing children produce when communicating with their respective children. However, when we look at how the proportions of gesture utterance types (one sign, 2 sign, or >2 sign) change over time for the two groups of parents, an intriguing pattern emerges. Whereas the parents of deaf and of hearing children generally produce gestured utterances of the same overall length in terms of MLGU, the distribution of gesture utterance types within that overall production differs for the two groups. Parents of hearing children consistently produce the same proportions of one sign, 2 sign, and >2 sign gesture utterances over time, but the parents of the deaf children begin to concentrate their gestures into longer strings (2 signs and >2 signs) as their children progress with their implants. We hypothesize that this change might be a response to the changes that parents perceive in their children's emerging ability to speak ; unlike what Goodrich found with hearing-hearing dyads, however, the changes in the deaf CI children drive an increase in the complexity of their parents' child-directed gesture rather than of their speech.

The trends of increase that we see in how Marisol and Juan package gestures in strings within their gestured communication also provide evidence of the remarkable flexibility of language as a multimodal communicative system. Recognizing that language is more than simply speech allows us to observe how parents might recruit a variety of strategies to ensure that they are successfully communicating with their children. Because speech and gesture are an integrated system for hearing people, it might not be difficult for parents to manipulate their gesture when they feel that their children do not fully understand what they are saying, as in the case of deaf children who are learning to hear with a cochlear implant. Therefore, the trends of increase in more complex gesture strings over time in Marisol's and Juan's gestured communication might be interpretable as an effort to increase the probability of successful communication with their deaf CI children. In short, these parents could be increasing the complexity of their gestures in an attempt to provide their children with every opportunity to understand what is being communicated to them in speech by providing support in gesture, thus increasing the probability that their children will respond appropriately in speech.

Sources Cited

Bekken, K. E. (1989). Is there "Motherese" in gesture ? Unpublished doctoral dissertation, University of Chicago.

De León, L. (2000). The emergent participant : Interactive patterns in the socialization of Tzotzil (Mayan) infants. Journal of Linguistic Anthropology, 8(2), 131-161.

Eddington, D. K., & Pierschalla, M. L. (1994). Cochlear implants : Restoring hearing to the deaf. On the Brain : The Harvard Mahoney Neuroscience Institute Letter, 3(4). http://www.med.harvard.edu/publications/On_The_Brain/Volume03/Number4/Cochlear.html. Last accessed 13 December 2006.

Efron, D. (1972). Gesture, Race and Culture. Paris : Mouton.

Enfield, N. J. (2005). The body as a cognitive artifact in kinship representations : Hand gesture diagrams by speakers of Lao. Current Anthropology, 46(1), 51-81.

Garrett, P. B., & Baquedano-López, P. (2002). Language socialization : Reproduction and continuity, transformation and change. Annual Review of Anthropology, 31, 339-361.

Goodrich, W. (2003). Are mothers responsible for the onset of two-word speech ? The role of variable input in early gesture and language development. Unpublished MA paper, University of Chicago.

Goldin-Meadow, S. (2003). The Resilience of Language : What Gesture Creation Can Tell Us About How All Children Learn Language. New York : Psychology Press.

Goldin-Meadow, S., & Mylander, C. (1998). Spontaneous sign systems created by deaf children in two cultures. Nature, 391, 279-281.

Goldin-Meadow, S., & Saltzman, J. (2000). The cultural bounds of maternal accommodation : How Chinese and American mothers communicate with deaf and hearing children. Psychological Science, 11, 311-331.

Kendon, A. (2004). Gesture : Visible Actions as Utterance. New York : Cambridge University Press.

Loizou, P. C. (1998). Mimicking the human ear : An overview of signal-processing strategies for converting sound into electrical signals in cochlear implants. IEEE Signal Processing Magazine, September, 101-130.

McNeill, D. (1992). Hand and Mind : What Gestures Reveal About Thought. Chicago : University of Chicago Press.

Newport, E., Gleitman, L., & Gleitman, H. (1977). Mother, I'd rather do it myself : Some effects and non-effects of maternal speech style. In C. E. Snow & C. A. Ferguson (Eds.), Talking to Children (pp. 109-149). New York : Cambridge University Press.

Newport, E., Gleitman, L., & Gleitman, H. (1984). The current status of the motherese hypothesis. Journal of Child Language, 11, 43-79.

Ochs, E. (1993). Constructing social identity : A language socialization perspective. Research on Language and Social Interaction, 26(3), 287-306.

Peck, J. J. (2000). The mutual process of semioticization : Linguistic acquisition and performance of social subjectivities. Australian Journal of Linguistics, 20(2), 179-209.

Schieffelin, B., & Ochs, E. (1986). Language socialization. Annual Review of Anthropology, 15, 163-191.

Snow, C. (1972). Mothers' speech to children learning language. Child Development, 43, 549-565.

Tylor, E. B. (1964). Researches into the Early History of Mankind and the Development of Civilization. Chicago : The University of Chicago Press.

Wieselman Schulman, B. (2004). A crosslinguistic investigation of the speech-gesture relationship in motion event descriptions. Unpublished doctoral dissertation, University of Chicago.

Notes

1 The data we analyze here were collected with funding from the National Institutes of Health, NIH Grant R01-DC00491, awarded to Susan Goldin-Meadow at the University of Chicago. We give special thanks to Susan Goldin-Meadow for the use of the data we have analyzed and for her valuable input in the development of the project and the results presented here.

2 The cochlear implant (CI) is essentially an electronic prosthetic ear that directly stimulates the auditory nerve, bypassing the nonfunctional parts of an otherwise healthy auditory system. The CI is surgically installed subcutaneously behind one ear, and an array of tiny electrodes is threaded through the snail-shaped cochlea in the inner ear. From there, the electrodes bypass the non-functioning hair cells that would normally transmit an electrical impulse from the ear to the auditory nerve but fail to do so in the most common form of congenital deafness. The array replaces the function of the hair cells by transmitting electrical impulses across the cochlear membrane to directly stimulate the auditory nerve, so that the brain receives auditory input, though impoverished as compared to natural hearing. The electrical signals that the implanted electrodes transmit are filtered and mediated by a small computerized processor that resembles a hearing aid ; it converts sound waves into electrical impulses that are carried by the array to the auditory nerve (cf. Loizou, 1998 ; see Eddington & Pierschalla, 1994 for a diagram of the cochlear implant).

3 This effort is coordinated between teachers, parents, speech therapists, psychologists, audiologists and surgeons.

4 Homesign systems are idiosyncratic gesture systems that orally educated deaf children create in order to communicate with the hearing people around them.

5 Participants have been assigned pseudonyms to protect their confidentiality.