
“The Celestial Openness”[1]

By: Mohammad Khari

[1] A phrase used by Romantic writers and poets to describe the child’s mind, quoted by Kuhl in her talk.

“Sound is the vocabulary of nature.”
Pierre Schaeffer
French composer

As a language teacher, you might ponder some fundamental questions every now and then, no matter how long you have been in this profession. Every few weeks, new research-based findings and theories come out, thanks to advances in technology and the engineering marvels of fMRI and EEG. These findings enable us to form a more comprehensive understanding of how we learn in general, how we learn our mother tongue, how one acquires a second language, how and when babies can distinguish different sounds, what the critical age for faster language acquisition is, and why we learn better in the company of others. It is not possible to discuss all of these questions in one article, but I will try to go over some amazing work done mostly by Patricia K. Kuhl, Ph.D., who laid the foundation for further studies and research, helping us understand when and how babies distinguish sounds, how sound patterns are formed, and how the early stages of a baby’s development affect their future language abilities.

In her amazing talk, Kuhl explains the “critical period”: why we experience a systematic decline in the ability to acquire a language after the age of 7, and why we fall off the language acquisition map after puberty (Source).

Kuhl points out that babies, unlike adults, can distinguish all the sounds of all languages, making them “citizens of the world.” To put that into perspective, it is good to remember that “the world’s languages contain approximately 600 consonants and 200 vowels” (Ladefoged, 2001, as cited in Kuhl, 2007)[2]. She also explains how babies go from being citizens of the world to being language-bound listeners before the age of 1, and then culture-bound listeners as adults. In one of her theories, she posits that “babies listen intently and take statistics while adults are governed by representation in memory formed earlier in development,” which is probably why learning new language material slows down after the “critical age,” as “our distributions stabilize.”

[2] “Each language uses a unique set of about 40 distinct elements, phonemes, which change the meaning of a word (e.g. from bat to pat). But phonemes are actually groups of non-identical sounds, phonetic units, which are functionally equivalent in the language” (Kuhl, 2007).

 

Mapping Through Statistics

Kuhl (2000) asserts, based on the data, that infants map the sounds of a language—learning the distributional and probabilistic phonetic patterns contained in caregiver speech—in the first year of life, before they can speak themselves.

Statistical differences in vowel sounds produced by caregivers speaking Japanese and English (Source)

This takes them from being able to distinguish every phoneme in any language to being “language specialists” in the phonemes of their own. Moreover, as a child’s language develops, acoustic differences that do not fit the mother-tongue sound map, as detected by the auditory perceptual processing mechanism,[3] strongly influence future perception. In other words, infant perception is altered—literally warped—by the sound map of the caregiver’s phonetic system, which infants internalize at about 10 months.[4] If the language has no distinct “la”/“ra” sounds, as in Japanese, the infant will lose the ability to hear the difference. No speaker of any language perceives acoustic reality; in each case, perception is altered in the service of language.

[3] The developmental process is not a selectionist one, as Chomsky suggests, in which innately specified options are selected on the basis of experience; rather, a perceptual learning process commences with exposure to language, during which infants detect patterns and exploit statistical properties, and their perception is shaped by that experience (Kuhl, 2000). In other words, it is a process based on growth and discovery (Kuhl, 2007).

[4] The Native Language Magnet theory proposes that infants’ mapping of ambient language warps the acoustic dimensions underlying speech, producing a complex network, or filter, through which language is perceived.
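Kuhl describes infants at this stage as “taking statistics” on the speech around them. As a rough computational analogy (my sketch, not a model Kuhl proposes), the Python snippet below fits Gaussian mixture models to invented third-formant (F3) values: a learner fed bimodal English-like “r”/“l” input recovers two sound categories, while the same learner fed unimodal Japanese-like input recovers only one.

```python
# A toy illustration of distributional phonetic learning.
# All acoustic values are invented for illustration.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated third-formant (F3) frequencies in Hz: English caregivers produce
# a bimodal distribution ("r" has a low F3, "l" a high one); Japanese
# caregivers produce a single intermediate cluster.
english_input = np.concatenate([
    rng.normal(1600, 120, 500),   # "r"-like tokens
    rng.normal(2700, 140, 500),   # "l"-like tokens
]).reshape(-1, 1)
japanese_input = rng.normal(2150, 220, 1000).reshape(-1, 1)

def infer_categories(samples, max_k=3):
    """Choose the number of sound categories that best explains the input,
    using BIC as a crude stand-in for the infant's statistical learning."""
    fits = [GaussianMixture(n_components=k, random_state=0).fit(samples)
            for k in range(1, max_k + 1)]
    return min(fits, key=lambda m: m.bic(samples)).n_components

print("Categories inferred from English input: ", infer_categories(english_input))   # 2
print("Categories inferred from Japanese input:", infer_categories(japanese_input))  # 1
```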

 

A technique used to test whether babies can detect changes in the sound they hear (Source)

The effect of exposure to the sounds of a language during the critical period of sound development (Source)

As confirmed by research findings published in Kuhl et al. (1992), infants from both the United States and Sweden showed a significantly stronger magnet effect for their native-language prototype than for the non-native one.

“Infants demonstrate a capacity to learn simply by being exposed to language during the first half-year of life, before the time that they have uttered meaningful words. By 6 months of age, linguistic experience has resulted in language-specific phonetic prototypes that assist infants in organizing speech sounds into categories. They are in place when infants begin to acquire word meanings toward the end of the first year. Phonetic prototypes would thus appear to be fundamental perceptual cognitive building blocks rather than by-products of language acquisition” (Kuhl et al., 1992, p. 3). This mapping ability seems like a wonderful mechanism for learning a language. What about learning a second language? Can we still benefit from this amazing ability after the critical age?[5]

[5] One of the reasons it is called “critical” is the fact that it is not only about time; it is also about the Neural Commitment resulting from experience, as “adult bilinguals who acquire both languages early in life activate overlapping regions of the brain when processing the two languages, whereas those who learn the second language later in life activate two distinct regions of the brain for the two languages” (Kuhl, 2000).
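To make the “magnet” metaphor concrete, here is a minimal numerical sketch. The warping function and all values are my own invention, not the actual equations of the Native Language Magnet model: two tokens near a native prototype end up perceptually closer together than an equally spaced pair far away from it.

```python
# A toy model of the perceptual magnet effect (illustrative values only).
import math

PROTOTYPE = 2700.0  # hypothetical acoustic value (Hz) of a native prototype

def perceived(x, pull=0.6, radius=400.0):
    """Warp an acoustic value toward the prototype. Tokens close to the
    prototype are pulled in strongly, compressing perceived differences
    near the category centre (the "magnet")."""
    weight = pull * math.exp(-((x - PROTOTYPE) ** 2) / (2 * radius ** 2))
    return x + weight * (PROTOTYPE - x)

# Two pairs of tokens, each 100 Hz apart in raw acoustic terms:
near_a, near_b = 2650.0, 2750.0   # close to the prototype
far_a, far_b = 1950.0, 2050.0     # far from the prototype

print(abs(perceived(near_b) - perceived(near_a)))  # ~40 Hz: compressed
print(abs(perceived(far_b) - perceived(far_a)))    # ~127 Hz: pushed apart
```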

Neural Commitment

The mapping process explained earlier is both a blessing and a curse, which studying the brain and Neural Commitment theory can explain. According to Native Language Neural Commitment (NLNC) theory, commitment to the patterns detected in the native language fosters future language use, but at a price: it interferes with the acquisition of the new patterns of any second language one aspires to learn. As Kuhl (2004) puts it:

Exposure to language commits the brain’s neural circuitry to the properties of native-language speech, and that neural commitment has bi-directional effects—it increases learning for more complex patterns (such as words) that are compatible with the learned phonetic structure, and decreases the acquisition of nonnative patterns that do not match the learned schema.

So, how can we learn a second language better? Studies suggest that “the right kind of listening experience is enough, one that exaggerates the dimensions of foreign language contrasts, as well as providing listeners with multiple instances of a sound spoken by many talkers. The odd thing is that these studies show that feedback and reinforcement are not necessary in this process. Interestingly, the features shown to assist second-language learners—exaggerated acoustic cues, multiple instances by many talkers, and mass listening experience—are the same features that motherese[6] provides infants” (Kuhl, 2000). Motherese is a sing-songy way of talking to babies that exaggerates certain phonemes (“You are so cuuute!”). Since we have learned to decode phonemes through exposure to caregiver speech, does that mean we can do the same if the speech is provided by machines or through videos? Kuhl thinks otherwise: “It takes a human being for babies to take their statistics since the social brain controls when the babies are taking their stats.”

[6] When we talk to infants and children, we use a special speech register with a unique acoustic signature, called “motherese” (or, more inclusively, “caregiverese”). Caretakers in most cultures use it when addressing infants and children. Compared to adult-directed speech, infant-directed speech is slower, has a higher average pitch, and contains exaggerated pitch contours (Kuhl, 2004).
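Read together, the three features quoted above (exaggerated contrasts, many talkers, massed exposure) are almost a recipe for building listening material. The generator below is purely hypothetical, with invented acoustic values, but it renders the recipe in the spirit of the studies Kuhl cites.

```python
# A hypothetical generator of L2 listening stimuli (invented values).
import numpy as np

rng = np.random.default_rng(1)

def make_stimuli(center_a=1600.0, center_b=2700.0, exaggerate=1.3,
                 n_talkers=20, tokens_per_talker=30):
    """Return (label, F3) pairs for an r/l-like contrast.
    `exaggerate` pushes the two category centres further apart, mimicking
    the stretched contrasts of infant-directed speech."""
    mid = (center_a + center_b) / 2
    a = mid + (center_a - mid) * exaggerate   # exaggerated "r" centre
    b = mid + (center_b - mid) * exaggerate   # exaggerated "l" centre
    stimuli = []
    for _ in range(n_talkers):                # multiple talkers...
        talker_shift = rng.normal(0, 80)      # ...each with an overall offset
        for _ in range(tokens_per_talker):    # ...and many tokens each
            stimuli.append(("r", a + talker_shift + rng.normal(0, 60)))
            stimuli.append(("l", b + talker_shift + rng.normal(0, 60)))
    return stimuli

print(len(make_stimuli()), "training tokens")  # 1200 tokens from 20 talkers
```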

Social Gating

Why is it, then, that a human being’s presence is crucial for babies to learn? “Infants learn phonetic information from a second language with live-person presentations, but not television or audio-only recordings” (Conboy et al., 2015). Although the idea that babies do not learn from television was refined in Kuhl’s later works, it is still fascinating to know that “social interaction is essential for natural speech learning” (Kuhl, 2007).[7]

[7] “Studies show that young children learn new phonemes and words from humans significantly better than from machines. The study, conducted with 9-month-old infants, utilized a manipulation (a touch-screen video) which allowed infants to control presentations of foreign-language video clips, and tested the hypothesis that infant learning from a screen would be enhanced in the presence of a peer, as opposed to learning alone. Brain measures of phonetic learning and detailed analyses of interaction during learning confirm the hypothesis that social partners enhance learning, even from screens” (Lytle et al., 2018).

 

Kuhl found that live speakers had an effect on babies’ learning, but videos of those same speakers did not. (Source)

There is something in human interaction that cannot be created or duplicated by machines and video. “We argue that social contexts provide important information that is either non-existent or greatly reduced in nonsocial situations, such as the passive video viewing or the auditory-only presentations that fail to produce phonetic learning” (Kuhl et al., 2003, as cited in Conboy et al., 2015). Theory and data alike suggest that the essence is a sense of interaction and communication. It seems that “language learning relies on children’s appreciation of others’ communicative intentions, their sensitivity to joint visual attention, and their desire to imitate” (Baldwin, 1995; Brooks & Meltzoff, 2005; Bruner, 1983; Tomasello, 2003a, 2003b; Tomasello & Farrar, 1986, as cited in Kuhl, 2007).

This is where Kuhl offers the “social gating” hypothesis to explain why phonetic learning occurs when infants listen to complex, natural language during live interaction with tutors, but not from passive listening to TV input. According to this view, language learning is strongly heightened in social settings: When an infant hears speech while interacting with an adult, the infant focuses on linguistic input because the adult’s social-communicative intentions make the input salient and because humans increase social arousal (Kuhl, 2011, as cited in Conboy et al., 2015).

So Kuhl’s finding that babies could not learn language differences through videos raises a huge question about the way we teach language to older learners. Are the videos and CDs we use to model the language less effective than having a real person do so? Maybe this is something we should look into.

It’s All in the Brain, but Not Only Humans’!

How can the effect of “Social Gating” on future language ability—sound production—be explained by brain sciences? Kuhl and her colleagues suggested that “learning stems from the linkage between sensory and motor experience: sensory experience with a specific language establishes auditory patterns stored in memory that are unique to that language; these representations guide infants’ successive motor approximations until a match is achieved” (Kuhl & Meltzoff, 1996, as cited in Kuhl, 2007). So, it is all about listening and practicing until you hit that perfect note.
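As a loose computational analogy (mine, not Kuhl and Meltzoff’s model), that “listen, then babble until it matches” loop resembles hill climbing toward a stored auditory template. All values below are invented.

```python
# Toy sensorimotor matching: keep babbled variants that sound closer to
# the stored auditory template (all numbers are invented).
import random

random.seed(0)
TEMPLATE = [2700.0, 1200.0]   # stored auditory target (hypothetical formants, Hz)

def distance(attempt, target):
    return sum((a - t) ** 2 for a, t in zip(attempt, target)) ** 0.5

attempt = [1500.0, 2000.0]    # the first babble is far from the target
for _ in range(2000):
    # Perturb the current motor program a little ("babbling variation")...
    candidate = [v + random.uniform(-40, 40) for v in attempt]
    # ...and keep the variant only if it *sounds* closer to the template.
    if distance(candidate, TEMPLATE) < distance(attempt, TEMPLATE):
        attempt = candidate

print([round(v) for v in attempt])  # ends up near TEMPLATE
```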

Amazing as it may seem, this process is not limited to humans, as “the process resembles that hypothesized in young birds in which a period of sensory learning of the song elements is followed by a period of motor ‘practice’ during which infant birds reproduce elements until they achieve a song that approximates the stored auditory template” (Doupe & Kuhl, 1999, as cited in Kuhl, 2007). This has also been hypothesized in species other than birds: “Work on ‘mirror neurons’ in nonhuman primates indicates a neural link between the self and other; seeing an action and producing it oneself are neurally equivalent in adult monkeys, and this ability plays a role in imitation and social understanding” (Meltzoff & Decety, 2003; Rizzolatti, 2005, as cited in Kuhl, 2007).

There is still, without a doubt, a lot more to explore about how we learn a language, or more specifically, the sounds of a language. As teachers, we are extremely optimistic about what the future holds. In the meantime, based on what we learn from the extraordinary work done by scientists like Kuhl, we can be more sensitive to and tolerant of our students’ pronunciation mistakes. It is not entirely their fault, after all: you cannot simply produce something that you cannot hear. But does it matter? If everyone else speaking your language can only hear the same phonetic system, does it matter that you cannot produce the native-speaker sounds that they cannot hear either? Or are we tied to an unrealistic and unnecessary bias of native-speakerism?

References

  • Conboy, B. T., Brooks, R., Meltzoff, A. N., & Kuhl, P. K. (2015). Social interaction in infants’ learning of second-language phonetics: An exploration of brain-behavior relations. Developmental Neuropsychology, 40(4), 216–229. https://doi.org/10.1080/87565641.2015.1014487

  • Kuhl, P. K. (2000). A new view of language acquisition. Proceedings of the National Academy of Sciences of the United States of America, 97(22), 11850–11857. https://doi.org/10.1073/pnas.97.22.11850

  • Kuhl, P. K. (2004). Early language acquisition: Cracking the speech code. Nature Reviews Neuroscience, 5, 831–843. https://doi.org/10.1038/nrn1533

  • Kuhl, P. K. (2007). Is speech learning “gated” by the social brain? Developmental Science, 10(1), 110–120. https://doi.org/10.1111/j.1467-7687.2007.00572.x

  • Kuhl, P. K., Williams, K. A., Lacerda, F., Stevens, K. N., & Lindblom, B. (1992). Linguistic experience alters phonetic perception in infants by 6 months of age. Science, 255(5044), 606–608. https://doi.org/10.1126/science.1736364

  • Lytle, S. R., Garcia-Sierra, A., & Kuhl, P. K. (2018). Two are better than one: Infant language learning from video improves in the presence of peers. Proceedings of the National Academy of Sciences of the United States of America, 115(40), 9859–9866. https://doi.org/10.1073/pnas.1611621115

Mohammad Khari is an English lecturer at Ozyegin University, Istanbul. He holds a BA in English Literature, an MA in Philosophy of Art, and a CELTA. Mohammad has been reading about and researching the integration of neuroscience into pedagogy.
