How to Visualise Weak Points in Listening in English

By: Marisa Ueda

Introduction: The first problem

Listening is considered a particularly difficult skill (Wilson, 2008, p. 12). Unlike reading, it offers listeners very little control: the speed of delivery, the environment in which the listening takes place, and much else lies beyond their influence. Small wonder, then, that listening in English is generally regarded as difficult.

This article first discusses two fundamental problems with listening in English, particularly in Japan, and then shows how learners’ weak points in EFL listening can be visualised.

The first fundamental problem with listening in English in Japan is that few English teachers have been educated or trained in how to teach it. This is because the Ministry of Education, Culture, Sports, Science and Technology has never provided a concrete curriculum for English as a foreign language (EFL) listening pedagogy. Consequently, most teachers of English in Japan are left to struggle on their own to establish an EFL listening method.

Most therefore stick to the “traditional” instruction method: they play an English audio file and provide the answers. Then, they “instruct” the students to listen to the audio as many times as possible until they fully understand it. Although handouts, such as an audio transcript, a translation into Japanese, a word list, and possibly some grammar tips, may be provided, the students are basically left on their own to figure out where their comprehension broke down, why, and how to overcome their problems (Ueda, 2015, p. 1).

Thus, students usually come to comprehend the content by reading the handouts rather than through listening itself. This has been the “traditional” EFL teaching method for listening in Japan at least since the 1970s, for over 50 years. Ironically, the technology for recording and playing audio has developed and improved dramatically over these five decades, but the techniques for using it in EFL listening instruction in Japan have not.

The second problem and the evidence: The weakest point of Japanese learners of English in EFL listening

Recognising words in fluent speech is the basis of spoken language comprehension, and the development of automaticity in word recognition is considered a critical aspect of both first language and second language (L2) acquisition (Segalowitz et al., 2008). In terms of auditory processing for L2 and EFL listeners, the fundamental goal of phonological processing is word recognition (Rost, 2014, p. 131).

As for the second fundamental problem, or the weakest point of Japanese learners of English in EFL listening, Ikemura (2003) indicates that the auditory recognition of words is one of the major problems at the speech perception level. He reports, from his study of Japanese EFL learners, that the percentage of words recognised whilst reading is 79%, yet the percentage recognised by ear drops to 26%. This evidence clearly illustrates that the most common problem of Japanese EFL learners when listening is that they can recognise words when reading but not when listening.

 

Fig. 1 The percentage of word recognition in EFL listening by Japanese learners of English

When we look at this in relation to Anderson’s (2010) cognitive psychology theory, we can see that the weakest point of Japanese EFL learners in listening lies at the level of perception. Anderson explains that language learning involves certain steps and proposes a cognitive framework of language comprehension based on perception, parsing, and utilisation, as shown in Figure 2. Although these three phases are interrelated, recursive, and possibly concurrent, they differ from one another. In listening, the lowest cognitive level is perception, at which the listener decodes acoustic input, extracting phonemes from a continuous stream of speech.

The second stage is parsing, in which words are transformed into a mental representation of their combined meaning. This occurs when a listener segments an utterance according to syntactic structures or meaning cues. According to Anderson (p. 366), people use the syntactic cues of word order and inflection to interpret a sentence.

The third and final stage is utilisation. In this stage, a listener sometimes needs to make different types of inferences to complete an interpretation of an utterance, especially since the actual meaning of an utterance is not always the same as what is stated. That is, to understand what a speaker means rather than merely what the speaker says, a listener sometimes needs to make inferences and connections to render the sentence meaningful. For example, when I was a postgraduate student in England, an old professor once asked me, just as I entered the classroom, “Were you born in a barn?” I replied, “No, I was born at a hospital in Japan.” After the class, my British classmate kindly told me that the professor had not been asking whether I was born in a barn. The sentence implies that if a person was born in a barn, where there is usually no door, then they are unaware of the custom of closing a door after entering a room. Thus, the actual, ironic meaning of the sentence is: “Shut the door!” As this example shows, successful comprehension requires a finishing touch, utilisation, after the perception and parsing stages.

Fig. 2 Hierarchical Model from Anderson’s (2010) Cognitive Psychology Theory

Next, I will suggest a methodological plan for how to visualise the weak points in EFL listening, in order to solve the problems described above.

Plan: How to visualise weak points in listening in English

Fig. 3 An MRI image of the author’s brain

Just as an MRI makes visible what cannot be seen from the outside in the medical field, it could be practical and helpful to do the same in the field of listening. Can we help learners see the “inside” of listening in that way?

One method to make this possible is dictation: learners listen to a passage and write down what they have heard. Oakeshott-Taylor (1977) argues that dictation tests assess performance at all stages of the speech perception process. The following is an example of a dictation assignment sheet designed by the author to locate and identify learners’ weak points in listening in English. Only content words, which are critical to the comprehension of meaning, are deleted.

Fig. 4 A sample dictation sheet for beginners

Before designing a dictation sheet, it is useful to know the learners’ proficiency level in EFL listening. For beginners, a wide gap between the words to be dictated is recommended (as in Figure 4), whilst a dictation sheet like the one below might be more appropriate for advanced learners.

Fig. 5 A sample dictation sheet for advanced learners
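
For teachers who prepare such sheets from digital transcripts, the choice of blanks can also be roughed out with a short script. The following is only a minimal sketch in Python of the idea just described, not part of the classroom materials themselves; the fixed function-word list and the gap values are illustrative assumptions, and in practice the teacher’s own judgement decides which content words to delete.

import re

# Words treated here as function words and therefore never blanked out.
# This fixed list is only an illustrative assumption; a teacher would
# choose the content words to delete by hand.
FUNCTION_WORDS = {
    "a", "an", "the", "is", "are", "was", "were", "of", "in", "on", "at",
    "to", "and", "or", "but", "that", "they", "it", "he", "she", "we",
    "you", "my", "his", "her",
}

def make_dictation_sheet(transcript: str, gap: int = 4) -> str:
    """Blank out roughly every `gap`-th word, skipping function words.

    A wide gap (e.g. gap=4) gives the sparse sheets suggested for
    beginners; a narrow gap (e.g. gap=2) gives the denser sheets
    for advanced learners.
    """
    words_since_blank = 0
    out = []
    for token in transcript.split():
        word = re.sub(r"[^\w']", "", token).lower()
        words_since_blank += 1
        if word and word not in FUNCTION_WORDS and words_since_blank >= gap:
            # Replace the whole token (trailing punctuation included, in
            # this simple version) with a blank for the student to fill in.
            out.append("_" * 10)
            words_since_blank = 0
        else:
            out.append(token)
    return " ".join(out)

if __name__ == "__main__":
    sample = "My father is an orthodontist."   # the sentence used under Cause 1
    print(make_dictation_sheet(sample, gap=3))  # sparse: My father is an __________
    print(make_dictation_sheet(sample, gap=2))  # denser: My __________ is an __________

Such a script only drafts the sheet; the teacher still reviews it so that the blanks fall on the content words that matter for comprehension.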

Prior to dictation, it is suggested that you tell students:

  • not to stop the audio at every word to be dictated
  • and to take notes while listening.

As for the number of times students should listen, I normally suggest up to five, based on research results from Hori (2007, p. 129). Although her study concerns shadowing in listening, she reports that accuracy increases most between the first and the fifth shadowing.

Fig. 6 A sample audio transcript from a textbook (Kisslinger, 2017) with instructions

After the dictation, the audio transcript is provided. Figure 6 is a sample audio transcript with instructions for the dictation shown in Figure 4.

Fig. 7 A sample dictation sheet for intermediate learners based on an audio transcript from a textbook (Blass & Vargo, 2008)

First, students are instructed to compare their own dictation with the audio transcript to check any words that they could not catch. They are then instructed to circle in red any words which they could neither catch by listening nor recognise even when reading.

On the other hand, words which students could not catch by listening but could have recognised if they had been reading are highlighted. Since these dictations are not intended to be tests of spelling, any mistakes in spelling should be ignored (Buck, 2001, p. 75); still, it is beneficial to make students aware of their spelling mistakes.
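
Where dictations are collected electronically, the first pass of this comparison can also be automated. The following is a minimal sketch in Python, not the published procedure: it pairs each blanked target word with the student’s answer, tolerates near-misses in spelling in line with Buck’s (2001, p. 75) advice, and flags the rest as mismatches so that the teacher can then decide whether each flagged word should be highlighted or circled in red. The tolerance threshold and the example word lists are assumptions for illustration only.

from difflib import SequenceMatcher

def classify_answers(targets, answers, spelling_tolerance=0.8):
    """Pair each blanked target word with the student's answer and label it.

    'correct'   - exact match (ignoring case)
    'spelling'  - close enough that only the spelling looks wrong, which
                  is ignored here (these dictations are not spelling tests)
    'mismatch'  - missing or substantially different; the teacher and the
                  student then decide whether the word should be
                  highlighted (recognisable in reading) or circled in red
                  (not recognised at all)
    """
    results = []
    for target, answer in zip(targets, answers):
        t, a = target.lower(), answer.lower()
        if t == a:
            label = "correct"
        elif SequenceMatcher(None, t, a).ratio() >= spelling_tolerance:
            label = "spelling"
        else:
            label = "mismatch"
        results.append((target, answer, label))
    return results

if __name__ == "__main__":
    # The two answers discussed under Causes 3 and 4 below (Figure 9).
    targets = ["coloured", "predators"]
    answers = ["color", "temperatures"]
    for target, answer, label in classify_answers(targets, answers):
        print(f"{target:<12} {answer:<14} {label}")

With the Figure 9 answers, both words come back as mismatches; deciding why they were missed, and whether to highlight or circle them, remains, as in the manual procedure, a matter for the teacher and the student.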

Fig. 8 An actual audio transcript from a textbook (Blass & Vargo, 2008) after comparison

Once this procedure is completed to the end of the audio transcript, it is possible to diagnose why students could not hear these words. Next, I will explain not only these causes but also how to overcome them.

Cause 1: Vocabulary

The prime cause of words which students can neither catch by listening nor recognise even when reading is insufficient knowledge of vocabulary. Many Japanese learners of English claim that the speaker’s speed is a major issue in EFL listening, and yet many do not realise that words which they cannot recognise in reading are not perceptible in listening either.

When explaining this fact to my class, I often read out the following sentence three times, and then another three times at a very slow speed: My father is an orthodontist. No matter how many times I read it out, and no matter how slowly I say it, no one can comprehend this one simple sentence, since most Japanese first- and second-year university students do not know the word orthodontist. Their closest guess is “dentist.” Through this experiment, students come to realise that it is impossible to catch words when listening unless they can recognise them when reading; it is a question of neither speed nor the number of times they listen if they do not know the words in reading. Sufficient knowledge of vocabulary is a must for improving listening in English. I could simply tell them that vocabulary is important in EFL listening, but they only really understand its importance when I do this experiment. As the remedy for Cause 1, students are instructed to check the definitions, parts of speech (nouns, verbs, etc.), and pronunciation of the words after comparing their own dictation with the audio transcript.

Cause 2: A gap between two information processing abilities

This section describes the second cause of words not being caught: a gap between two information processing abilities, auditory and visual. When words are recognised when read but not when heard, the student’s visual information processing ability is superior to the auditory one, and there is a gap between the two. Whether the information is delivered in sounds or as written words, the meaning, of course, remains the same. Therefore, this gap needs to be closed to overcome the problem.

As a concrete procedure for doing this, students are instructed to listen to the audio file at least three times whilst staring at the words in the audio transcript that they could not catch. Then, as a final touch, they are guided to listen to the audio file another three times without looking at the transcript, so that they become able to recognise these words by listening alone.

Causes 3 and 4: Grammar and logic

It is also crucial to teach students that listening cannot be done simply by catching sounds; many other strategies, such as using grammar and logic, must be employed. Before assigning a dictation, all teachers would benefit from knowing that students tend to focus only on filling in the words or sounds missing from the blanks, ignoring the rest of the transcript. The example below (Figure 9) is an actual excerpt from a dictation sheet filled in by one of my students. It reveals that the student merely focused on catching the sounds of the missing words.

First, as evidence that this student did not employ any grammatical knowledge, “They are brightly color” is not grammatically correct: the verb should be coloured, in the passive voice, as there is an “are” earlier in the sentence. If the student had reviewed the whole sentence after the dictation, they could have realised that “They are brightly color” is grammatically incorrect and might have been able to correct it to “coloured,” the right answer.

Similarly, the word “temperatures” shows that no logic was applied. If the student had checked after the dictation, they could have realised that “warn temperatures” does not make sense and, assuming they know the word, might have been able to change it to “predators,” the correct answer.

In this way, students’ own mistakes can be used to teach them that listening is based not only on sounds but also on grammar and logic. For advanced learners, it is also beneficial to know that the use of inference and background knowledge is another critical strategy for improving EFL listening. To activate their own metacognition in listening in English, it is useful to teach them to read through all their sentences after finishing the dictation; otherwise, students cannot see the wood for the trees.

Benefits

These diagnostic instructions for EFL listening, which are based on both theory and evidence, might be beneficial for teachers. Their key points are that they make it possible:

  1. to visualise and locate where students’ weak points in EFL listening are;
  2. to diagnose the causes;
  3. to teach students how to overcome their problems based on an understanding of the causes;
  4. and to increase our self-confidence in EFL listening instruction.

At the same time, these techniques might also be beneficial to students since they become aware of:

  1. the exact locations of their weak points in EFL listening;
  2. the causes;
  3. how to overcome their weak points based on the causes;
  4. and how to improve their listening comprehension ability efficiently in a limited time.

Personally speaking, the greatest benefit of these diagnostic instruction techniques for EFL listening is that they motivate students more, since they know precisely where their weak points in EFL listening are, their causes, and what to do about them. This is not possible with the “traditional” method of teaching EFL listening.

References

  • Anderson, J. (2010). Cognitive psychology and its implications (7th ed.). Freeman.

  • Blass, L., & Vargo, M. (2008). Pathways: Reading, writing, and critical thinking (2nd ed.). National Geographic Learning.

  • Buck, G. (2001). Assessing listening. Cambridge University Press.

  • Hori, T. (2007). Exploring shadowing as a method of English pronunciation training [Unpublished doctoral dissertation]. Kwansei Gakuin University, Japan.

  • Ikemura, D. (2003, May). 音声聞き取り困難の克服をめざすリスニング指導:キーワードと文脈の有効利用を考える [Listening instruction for acoustic problems: A consideration of effective usage of keywords and contexts] [Paper presentation]. Kansai Chapter, The Japan Association for Language Education & Technology, Osaka, Japan.

  • Kisslinger, E. (2017). Connect to the topic in Unit 2, Global English. In M. Rost (Ed.), Contemporary topics (4th ed.). Pearson Education.

  • Oakeshott-Taylor, J. (1977). Information redundancy and listening comprehension. In R. Dirven (Ed.), Hörverständnis im Fremdsprachenunterricht [Listening comprehension in foreign language teaching]. Scriptor.

  • Rost, M. (2014). Teaching and researching listening. Saurabh.

  • Segalowitz, N., Trofimovich, P., Gatbonton, E., & Sokolovskaya, A. (2008). Feeling affect in a second language: The role of word recognition automaticity. Mental Lexicon, 3, 47–71.

  • Ueda, M. (2015). Towards effective teaching methods in EFL listening for intermediate. Keisuisha.

  • Wilson, J. (2008). How to teach listening. Pearson Education.

Marisa Ueda (Ph.D.) is a professor of English at Ritsumeikan University. She was educated mainly in England. Her research interests include diagnostic pedagogical methods in EFL listening based on theory and evidence. Her research project has been funded by a Japan Society for the Promotion of Science Kakenhi grant since 2017.
