Similarities and Differences between Predictive Language Processing in Human Brains and Machine Learning Systems such as GPT-3

By: Glenn Magee

What is predictive language processing?

Predictive language processing is how our brains anticipate the meaning of language we hear or read. The theory is that our brains constantly build mental models that help us predict things, such as the next word in a sentence, based on our past experiences and knowledge. That is important because we do this in real time, without having to stop and consciously consider the meaning of every word or sentence we hear. These brain processes are automatic and interact with other brain functions to quickly and efficiently grasp the underlying purposes of words, sentences, and texts. Many Artificial Intelligence (AI) systems also use predictive language processing to predict what word or words will come next in a sentence or text. At its core, AI software produces output based on input, so the crux of any AI system is the input we give it. In other words, the quality of the data or information fed into an artificial intelligence system determines the quality of its output.

How does Predictive Language Processing work?

Predictive language processing is about making inferences and predictions about new experiences or knowledge based on past information. For a deep dive into the topic, read through our Think Tank issue on Predictive Processing from October 2022. To illustrate with a simple example, take this sentence:

“I’m going to the _____ to buy some milk.”

In this sentence, most English speakers would predict a word such as “store” or “shops.” In Japan, the expected next word might be “konbini” (Japanese for convenience store). This simple example illustrates a fundamental concept in the theory of predictive language processing: what we predict is shaped by prior experience in a particular environment. American speakers of English would be more likely to insert “store” into the sentence above, while British speakers would most likely choose “shops.” In the 1980s, this kind of predictive processing was introduced on phones to help users write messages more quickly.

Predictive text (often lumped together with auto-correct) is the phone feature that, as we type a sentence, predicts the next word or words we will use and offers suggestions. Predictive text has become far more accurate since the 1980s, as modern AI learns from our choices and offers us more appropriate predictions based on the inputs we have made. And that can be problematic for language learners if we teach our phones to “mispell”[1] certain words. Some systems can detect such common errors and grammatical inconsistencies and help learners correct them; others may instead reinforce the misspelled words. It depends on the language model and the data the system was trained on.
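The learning loop described above can be sketched as a tiny frequency model over word pairs: the more often one word follows another in what we type, the more strongly it is suggested next time. This is a toy illustration, not how any real phone keyboard is implemented, and it also shows how a taught misspelling would get reinforced, since the model simply counts whatever we feed it.

```python
from collections import Counter, defaultdict

class PredictiveText:
    """Toy next-word suggester: counts which word follows which."""

    def __init__(self):
        self.following = defaultdict(Counter)

    def learn(self, sentence):
        """Update counts from one typed sentence."""
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.following[prev][nxt] += 1

    def suggest(self, word, n=3):
        """Most frequent words seen after `word`, best first."""
        return [w for w, _ in self.following[word.lower()].most_common(n)]

pt = PredictiveText()
pt.learn("I am going to the store to buy some milk")
pt.learn("I am going to the store to buy some bread")
pt.learn("I am going to the shops")
print(pt.suggest("the"))  # ['store', 'shops'] — 'store' was typed more often
```

A user who repeatedly types a misspelling would train this model the same way, which is exactly the reinforcement problem mentioned above.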

Many language models are in use, and the more capable ones become more complex (and more expensive). The benefit of more advanced systems is that they can relate words across a greater span of a sentence. This is called a long-range dependency. For example, the subject “The woman” and the complement “a nun” are far away from each other in the following sentence:

“The woman sitting next to me on the bus last night as I traveled home from work was a nun.”

ChatGPT, built on the GPT-3 model, is an example of a system that analyzes and predicts long-range dependencies. It does this by utilizing a self-attention mechanism, which weighs the current word against all the other words in a sentence to predict the most likely word that will come next. In other words, this is computer-generated context based on the data the system has been trained on. For humans, this is context based on experience.
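The core move of self-attention can be sketched in a few lines of NumPy. This is a bare-bones illustration, not GPT-3’s actual implementation: real transformers add learned query/key/value projections, multiple attention heads, and positional information. The point it shows is that every word is compared with every other word, so distance in the sentence is no obstacle.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a sequence of word vectors.

    Each row of X is one word's embedding. Every word attends to every
    other word, so distant words (long-range dependencies) can influence
    the result directly.
    """
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)                   # similarity of every word pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax: each row sums to 1
    return weights @ X                              # each word becomes a weighted
                                                    # mix of the whole sentence

# 5 "words", each a 4-dimensional embedding (random stand-ins)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))
out = self_attention(X)
print(out.shape)  # (5, 4): every word now carries sentence-wide context
```

In the “woman … was a nun” sentence, this is how the model can connect the subject to a complement a dozen words away without stepping through every word in between.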

[1] as in this word

How does predictive language processing differ between humans and machines?

Understanding the similarities and differences between human and artificial predictive language processing will allow us to better understand their strengths and limitations, and how they can be used together to enhance language information processing. Two differences that strike me as interesting are how context is understood from content, and how that connects to our emotions.

"Context is King"
Glenn Magee
TT Author

The first area of difference between us and AI is context. Context is king for both humans and machines because it is what allows the intended meaning of a message to be understood from its content. For humans, that context is our life experience and interaction with the world, which is constantly being updated. For machines, context is limited to the data they were trained on, and that has presented some problems, such as racist and sexist robots. Machines predict the next word in a sentence by trying to infer context from the content they are fed. These machine-learning systems struggle with semantic or contextual understanding, which is how humans make sense of the meaning and context of language utterances. For example, when read (or said) in a sarcastic tone, the following sentence is something that a computer system would have difficulty understanding.

“I became a teacher to be rich and famous.”

When I asked ChatGPT, the system replied that the meaning of the sentence was that the teacher’s motivation was to attain wealth and fame. However, after I asked it some questions about sarcasm and then asked about the meaning of the sentence again, the system replied that it thought the sentence implied these were not the real reasons the person became a teacher. In a sense, GPT-3 adjusts its answer to the input questions. Interesting! This does not mean that contextual understanding is unproblematic for humans. We can all recall at least one example, if not more, of “getting hold of the wrong end of the stick,” a common expression for misunderstanding something or making an incorrect assumption. It reminds us how important it is to pay attention to details and understand the context to avoid mistakes.

The second area of difference relates to the first and deals with emotion. Extensively trained AI can spot sarcasm when common words or phrases are used, based on our inputs and interactions with the system. AI still struggles to map emotions to content but is getting very good at sentiment analysis, the process of determining whether a piece of text is positive, negative, or neutral. For example, consider a student review of one of my classes that reads:

“The teaching was excellent, the content easy to understand, and I enjoyed myself in class.”

An AI system trained for sentiment analysis could analyze the words and phrases in this student’s review and determine that the overall sentiment is positive, based on words like “excellent” and “I enjoyed myself.” This could be very useful for teachers collecting survey data to report on students’ feelings about classes. However, it will not negate the need for our own review. For example, if a student wrote, “The teacher’s classes were very ‘interesting’ each week,” it is likely to cause problems for an AI system, which does not know the particular student in the way a human teacher does. The AI would most likely interpret the sentence as an indicator of a captivating and engaging class rather than a sarcastic comment.
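The simplest form of sentiment analysis can be sketched with a hand-made word list: count positive and negative cue words and compare. The lexicons below are invented for illustration; real systems use large learned models, but the sketch makes the sarcasm blind spot above concrete, since scare quotes around a positive word do not change its lexical score.

```python
# Toy lexicons — illustrative assumptions, not a real sentiment resource.
POSITIVE = {"excellent", "enjoyed", "easy", "great", "interesting"}
NEGATIVE = {"boring", "difficult", "confusing", "bad", "hard"}

def sentiment(text):
    """Classify text by counting positive vs. negative cue words."""
    words = {w.strip('.,!?"\u201c\u201d').lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

review = ("The teaching was excellent, the content easy to understand, "
          "and I enjoyed myself in class.")
print(sentiment(review))  # positive

# Blind to sarcasm: the quotes around "interesting" don't change the score.
print(sentiment('The teacher\'s classes were very "interesting" each week.'))  # positive
```

Both reviews come out “positive,” which is exactly why a human still needs to read the second one.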

What can we take away from this?

Predictive language processing is a crucial component of human communication, and it’s been partially replicated in artificial intelligence. Despite AI’s ability to predict words and context, these systems’ grasp of emotion and context is limited, so they’re less reliable than they might seem at handling things such as irony, sarcasm, or humor. Humans and machines have different strengths and limitations; we should use them together to enhance our language skills. It’s important to understand the difference between human and artificial predictive language processing when using chatbots or other AI systems, to ensure the accuracy of the information we’re getting. Will AI such as ChatGPT do more harm than good? The debate has just begun.

Glenn Magee
is a lecturer at Aichi Prefectural University. He is also studying for a Doctor of Education in TESOL (Ed.D.) through Anaheim University. In addition to researching positive learning environments, he loves gardening and growing flowers and vegetables.
