Clutching precious books in her arms, a woman reaches the edge of the cliff and stops. As the howling gale whips back her hair and dress, the woman gazes resolutely at the dark, stormy landscape ahead. Her thoughts circle around like birds of prey ready to strike. Should she step off the precipice? Is what awaits her below worth the plunge into the unknown? Can she still hold on to her values in that potentially uncomfortable future?
Eastman Johnson’s painting captures what many teachers feel as we’re buffeted by tech advances that impact education in profound ways. Generative AI is just one tech innovation in a long line of them. It seems like every time I stop to catch my breath, some new generative AI tool is being touted as the next big thing in education. When a frigid blast of tech wind hits my face, I feel like turning around to head back to my snug, warm cottage. But we can’t turn our backs on change. AI is here to stay, and at the very least, we need to learn more about it to decide to what extent–if at all–to integrate it into our language classes. And if we choose to cast ourselves off the AI cliff, we should remember that we’re still clutching our knowledge and skills, which will help us venture into an uncertain future.
Pre-plunge considerations
But let’s not get ahead of ourselves here. Before integrating AI in your classes, ask yourself some fundamental questions:
- Support for teachers: Does your institution provide the support teachers need to integrate AI into teaching and learning? Verify that your school offers technical support, like Internet access, devices for students and teachers to work on, etc. But support also includes professional development opportunities for teachers. Are teachers given the necessary time to acquaint themselves with AI tools and continually update their AI skills? Are professional development sessions on AI being offered onsite? Alternatively, are teachers allowed to participate in external professional development sessions on AI?
- AI literacy: Do you have basic AI literacy? Basic AI literacy includes being aware of the potential advantages and disadvantages of using AI for educational purposes. The resource collection Generative AI in World Languages on the University of Virginia’s Teaching Hub is an excellent starting point for language teachers. In particular, Kate Grovergrys’ Padlet Fostering AI Literacy for Career Readiness in the Language Classroom provides resources that help teachers navigate different ways AI can support language teaching and learning, ethical considerations, academic integrity, and much more.
- Equity: Do your students have equal access to the AI tool(s) they will be using? Check that your school gives students access to AI tools you’d like students to use. If you are considering assigning homework requiring AI, ensure that your students are able to access the Internet at home and have devices to work on there.
- Language learning goals: What do you want learners to be able to do in the target language by the time they finish your course? Equally important, what do your students want to be able to do? Keep these overarching goals in mind when deciding whether to integrate AI into a particular course. If having students use AI does not support your goals or your students’ goals, then it’s probably best not to ask students to use AI in your language learning activities.
- Learners: Is working with AI appropriate for your group of learners? How mature are your students? AI may not be appropriate or even allowed for younger students. Anyone working with children should check the school’s policy before integrating AI activities into their classrooms. In addition, consider your students’ language level. AI may be less helpful or even unhelpful for learners at lower levels.
- Future skills: Will your students likely use AI in the future when interacting with other people in the target language? This might be a difficult question to answer as technology is rapidly developing, and we don’t know what the future will bring. If you think students might be using AI as support when communicating with others, it’s worth considering teaching them helpful ways they can use this technology. Again, it will depend on your answer to the previous question. It might not make sense to integrate AI into your classes when your learners are young children or just beginning to learn the language.
- AI output: Will the AI’s output support your students’ learning? This requires you to experiment with the AI tool(s) that your students will work with. Consider what task(s) you want students to do, and then try out the task(s) as though you were a language learner and then critically analyze the AI’s output. If you were the learner, would the AI’s output be helpful for you? Is the output accurate in linguistic aspects (grammar, word choice, punctuation, etc.)? To what extent does the output exhibit bias? Is the output culturally relevant and accurate?
Ready to take the AI plunge?
If you can answer “yes” to the above questions, then it might make sense to step off the AI cliff and integrate AI into your language classes. Notice I say “might.” There is no shame in deciding against using AI in your language course.
But let’s say you’ve decided to try out an AI language learning task with your students. They can use AI to support their learning in many different ways, for example, to work on grammar or vocabulary, practice their speaking skills, or develop their writing skills. Unfortunately, there are still some ocean rocks you’ll want to avoid falling on when you leap off the AI cliff. Picture the following classroom scene.
You’re in class working with university students who are using AI in a language learning task. Your students have already written a text in the target language, and as the next step, you want them to prompt the AI model to give them feedback on their writing. You’ve given them the task, made sure they understand what they’re to do, and then told them to go ahead and prompt the AI.
The students’ eyes are glued to their device monitors. At first, they silently and eagerly alternate between typing and reading. But after a couple of minutes, the class atmosphere shifts slightly but noticeably. Your students hunch listlessly over their devices, their eyes start to glaze over, and you fear they’ll nod off any moment now. You throw open a window, hoping fresh air will reinvigorate everyone, and wonder what’s gone wrong.
Your students have drifted into AI apathy mode. The task that initially intrigued them and sparked their curiosity lost its appeal rather quickly. Why?
Well, if we don’t design AI language learning tasks carefully, a few problems can crop up.
First, the AI tends to take over and become an overly helpful, overconfident tutor. When students type in their text and prompt the AI to give feedback on their writing, the AI generates extensive feedback on different aspects of the students’ writing. Sometimes the AI’s feedback is far longer than the student’s text. As teachers know all too well, too much feedback discourages students and can easily cause them to throw in the towel. The kind of feedback a learner needs depends on what the learner can already do and what the learner needs to know in order to develop further in the target language.
Here, we might easily fall into the trap of thinking that we could use AI to solve this problem. Some educators believe AI can give each learner exactly the personalized feedback they need and thus support each learner individually (link). Certainly, AI can give learners instant individual feedback, but the feedback generated by AI is still not as good as feedback given by a human. AI neither thinks nor experiences human writing and speaking in the same way a human does. Furthermore, in the recent past, money spent on developing personalized learning technology has not provided the hoped-for results (link). Maha Bali points out an even more worrisome aspect when she asks, “With technologies that are marketed as personalized learning, what kind of loss of human agency occurs when the machine decides the learning path, rather than the teacher or even the learner?” (2025). So, I caution teachers against trusting AI to give learners exactly the feedback they need or want. Instead, teachers need to examine AI feedback critically and help learners do so as well.
Second, in the classroom scenario above, each student is interacting with the AI individually–a purely human-to-AI interaction. One of our key needs as a species is interacting with one another, and language is an important way we do so. As neuroscientist Laura Gwilliams notes:
Why do we have language? Or if we weren’t able to use language, what would be lost? . . . social interaction. A lot of the time when we talk to one another, it’s not actually to exchange these crucially important information. Sometimes it is, but a lot of the time it’s just because that’s how you connect with other people, and that isn’t something that is kind of baked into these large language models, at least not yet. (2025, emphasis added)
Even though generative AI generates human-like responses, it is not a human, and interacting with it palls rather quickly.
Third, the feedback AI gives on a student’s language will inevitably be influenced by the data used to train it. Roughly 90% of the training data for AI systems is in English, specifically mainstream American English (Louro, 2025). This is because the US has played a huge role in developing the Internet and other digital technologies, and many dominant tech companies are located in the US. So, the linguistic norms baked into the products of these tech companies tend to mirror mainstream American English. As a result, other varieties of English are “ignored, misinterpreted or outright ‘corrected'” by AI models (Louro, 2025). This is the case for both written interaction with tools like ChatGPT and spoken interaction with automatic speech recognition systems like Amazon’s Alexa. Automatic speech recognition systems have a tendency to give up or make wrong guesses when responding to people who have different accents than what these systems were trained on (Agudo, 2025).
However, there are many varieties of English, not only in the United States but across the world. If a student doesn’t want to learn mainstream American English, the AI’s output may annoy and frustrate the student. When we look at other languages, this problem may even be exacerbated (Robinet, 2025). If AI models in other languages are trained on less data than English-language models, they will likely be more prone to problems of giving incorrect language information, only focusing on the dominant language variety, and generating bias in their output.
Landing on our feet . . . but wobbling
From a language learner perspective, the problems boil down to a lack of human-to-human interaction and problematic feedback from AI. Language teachers can address these problems by giving students opportunities to work together as they experiment with AI as well as helping students learn to evaluate the AI’s output critically. Here is a basic strategy that I have found to work fairly well with university students learning English at the B2 and C1 level.1
- Students produce a short text without using AI. They can write their text on their own or collaboratively in pairs or small groups.
- Students exchange their texts and give each other feedback. They read their classmates’ work from a human reader perspective and look for aspects they like about the text and aspects they find confusing. They also give suggestions for improving any confusing points.
- Students input their own texts into our university’s AI. They prompt the AI to focus on only one linguistic aspect at a time:
- Grammar: Only correct the grammar in this text. Do not make any further changes.
- Word Choice: Only make word choice suggestions in this text. Do not make any further changes.
- Punctuation: Only correct the punctuation in this text. Do not make any further changes.
- Students critically examine their peers’ comments and the AI’s suggestions. First, they individually reflect on the differences between receiving feedback from a peer and receiving feedback from the AI. Then, they get together with a partner and talk about the experience and what they find are advantages and disadvantages of human feedback vs. AI feedback.
- As a full class, we talk about the experience. After listening to my students’ thoughts, I bring in my own perspective. I remind them that they are the authors of their own texts. And as the authors, they decide what feedback–from classmates and/or AI–they want to take. In the next step, as homework, students reexamine their original texts and the feedback they received and revise their texts. It’s my hope that they retain ownership of their writing as they do so.
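The one-aspect-at-a-time prompting in the strategy above can also be sketched programmatically. The following is a minimal, illustrative sketch, not a description of any particular institution’s AI tool: the `build_feedback_prompts` function and its names are hypothetical, and the prompts simply reproduce the constrained instructions listed above. Students (or a teacher preparing materials) would send each prompt to the AI separately, so each round of feedback stays focused and digestible.

```python
# Each instruction constrains the AI to a single linguistic aspect,
# so students aren't overwhelmed by a wall of corrections at once.
ASPECTS = {
    "grammar": "Only correct the grammar in this text. Do not make any further changes.",
    "word choice": "Only make word choice suggestions in this text. Do not make any further changes.",
    "punctuation": "Only correct the punctuation in this text. Do not make any further changes.",
}

def build_feedback_prompts(student_text: str) -> list[tuple[str, str]]:
    """Return one (aspect, prompt) pair per linguistic aspect,
    each combining the constrained instruction with the student's text."""
    return [
        (aspect, f"{instruction}\n\nText:\n{student_text}")
        for aspect, instruction in ASPECTS.items()
    ]

# Example: generate the three prompts for one student text.
prompts = build_feedback_prompts("Yesterday I have gone to the market.")
for aspect, prompt in prompts:
    print(f"--- {aspect} ---")
    print(prompt)
```

Each prompt would then be submitted to the AI in a separate turn, and students compare the resulting feedback, aspect by aspect, with their peers’ comments.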
1 For an in-depth explanation of a similar strategy some college writing instructors in California are using, I highly recommend watching Anna Mills and Julie Gamberg’s webinar: link.
What this strategy doesn’t do is solve the problem of certain language varieties dominating AI models, which means the plunge off the AI cliff ends in a wobbly landing on soft sand. However, the strategy does embrace human-to-human interaction, and happily, my students and I don’t generally succumb to AI apathy. It also nudges students to think critically about AI output and realize that the AI’s language suggestions are not always better than what they can produce on their own.
After listening to a recent podcast episode on centering student exemplars, I’ve also begun to look for examples of “beautiful language” in my students’ writing (link). I point out to the class what specifically is beautiful in the examples and hang up their work on the classroom wall. In this way, I hope students will see their own writing as beautiful and value the unique ways they express their ideas in the target language.
In a storm-wracked AI landscape, any landing you walk away from is a good landing. So, I guess I can live with a wobbly landing if my students dance on the beach afterwards.
References
Agudo, R. R. (2025, January 27). ‘Sorry, I didn’t get that’: AI misunderstands some people’s words more than others. The Conversation. https://theconversation.com/sorry-i-didnt-get-that-ai-misunderstands-some-peoples-words-more-than-others-239281
Bali, M. (2025, January 8). Five questions to ask before adopting a new technology. LSE Higher Education Blog. https://blogs.lse.ac.uk/highereducation/2025/01/08/five-questions-to-ask-before-adopting-a-new-technology/
Gonzalez, J. (host). (2025, August 31). The power of centering student exemplars. In Cult of Pedagogy. https://www.cultofpedagogy.com/student-exemplars/
Louro, C. R. (2025, May 5). AI systems are built on English–but not the kind most of the world speaks. The Conversation. https://theconversation.com/ai-systems-are-built-on-english-but-not-the-kind-most-of-the-world-speaks-249710
Mills, A. & Gamberg, J. (2025, July 24). AI Feedback in a Human-Centered Writing Process: The Peer and AI Review + Reflection (PAIRR) Approach [webinar]. Equity Unbound MYFest25. https://myfest.equityunbound.org/events/ai-feedback-in-a-human-centered-writing-process-the-peer-and-ai-review-reflection-pairr-approach/
Robinet, F. (2025, March 23). Mind your language: The battle for linguistic diversity in AI. United Nations. https://news.un.org/en/story/2025/03/1161406
Ralph, M., & Woodruff, L. (hosts). (2024, November 12). The artificial intelligence (AI) episode. In Two Pint PLC–Personal and Professional Education Podcast. https://twopintplc.com/podcast-episode/093-the-artificial-intelligence-ai-episode/
University of Virginia Teaching Hub. (2025, May). Generative AI in World Languages. https://teaching.virginia.edu/collections/generative-ai-in-world-languages
Weiler, N. (host). (2025, April 17). What ChatGPT understands: Large language models and the neuroscience of meaning. In From Our Neurons to Yours. Wu Tsai Neurosciences Institute Stanford University. https://neuroscience.stanford.edu/news/what-chatgpt-understands-large-language-models-and-neuroscience-meaning
Young, J. (host). (2025, September 2). What if college teaching was redesigned with AI in mind? In Learning Curve. https://learningcurve.fm/episodes/what-if-college-teaching-was-redesigned-with-ai-in-mind
Heather Kretschmer has been teaching English for over 20 years, primarily in Germany. She earned degrees in German (BA & MA) and TESL (MA) from Bowling Green State University in Ohio. Currently she teaches Intermediate English and Business English at the Georg-August-Universität Göttingen, Germany.
