“The single biggest problem in communication is the illusion that it has taken place.” – attributed to George Bernard Shaw
“Artificial intelligence is no substitute for natural stupidity.” – Unknown
Every few months, a study makes headlines with claims about how technology is reshaping our brains. The latest came from the MIT Media Lab, titled “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task” (Kosmyna et al., 2025). Within hours of its release, LinkedIn and news outlets buzzed with takes like “ChatGPT makes you dumber.”
But here’s the catch: the authors never said that.
So how did a dense, 200-page preprint on EEG data and essay writing turn into a viral story about “brain rot”? And what does that tell us about science communication in the age of AI and clickbait?
Prestige Bias: When the Name Matters More Than the Content
Let’s start with the obvious: MIT is one of the most prestigious research institutions in the world. When research carries its logo, people assume it must be groundbreaking. Sometimes that assumption is justified; sometimes it isn’t. Prestigious labs produce excellent work, but they also produce preliminary, low-quality, or speculative studies like anyone else (Ioannidis, 2005). As Carl Sagan famously warned, “Extraordinary claims require extraordinary evidence.”
The problem is that prestige amplifies hype. A small study with 54 students suddenly became a definitive statement about “how AI changes your brain” simply because it came stamped with the MIT name.
The Shallow Reading Problem
Most of the people sharing the study never read it. Many didn’t even get past the abstract. Some relied on AI-generated summaries, which missed critical nuances. Some, like Gabriel Yanagihara, claimed the authors even embedded misleading lines deep in the paper that “trapped” lazy readers into misreporting the findings.
Educator Steffi Kieffer pointed out the core issue: the much-quoted finding that students “struggled without AI” was based on just nine participants who completed a fourth session. Yet this tiny sample powered headlines worldwide. As she put it, we’ve created “the perfect circle of superficial knowledge”:
AI summarizes → we share → nobody questions → repeat
For a deeper critique, listen to the podcast episode You Deserve Better Brain Research, in which Cat Hicks and Ashley Juavinett dissect the study’s methods and framing. They highlight confusing EEG analysis, ad hoc measures, and a weak literature review, and argue that the study does not actually answer the bold questions it raises.
The culture of shallow reading that surrounded the paper set the stage for the authors’ own communication challenges.
The Authors’ Dilemma
To be fair, Nataliya Kosmyna and colleagues actively tried to correct misinterpretations. On LinkedIn and their project website, they repeatedly emphasized that the study did not show LLMs make people stupid, and that their findings were preliminary. They clearly listed limitations: small, local sample; essay-writing only; no peer review yet. The team also promoted the preprint through short interviews and explainer videos (CNN, Time, CBS). While these were meant to clarify, they arguably magnified the hype by repeating ambiguous terms and reinforcing the most dramatic interpretations.
But here’s the uncomfortable question: If a study is so easily misinterpreted, isn’t that partly a failure of communication? A 200-page preprint is nearly impossible for most readers to parse, and terms like cognitive debt, while catchy, were never clearly defined. To make matters worse, the framing oscillated between warning of a “possible decrease in learning skills” and describing “nuanced connectivity differences,” sending readers conflicting signals and leaving the door wide open for sensationalist headlines.
The methodology itself raised questions. Key details were vague or missing, such as the exact prompts the students in the LLM group actually typed (as opposed to the single prompt the authors provide on page 46) and how the task instructions were framed to participants. Without these, interpretation becomes tricky. For example, did participants think they should copy-paste outputs or use them creatively? The absence of clarity opens the door to misinterpretation and weakens the validity of comparisons across groups. In short, poor communication choices hurt both the audience (who walked away misinformed) and the authors (whose work was sensationalized and criticized).
The lead author repeatedly justified releasing the study early, before peer review, because of the “intense speed” of developments in AI. Yet this points to a deeper tension: are we publishing for understanding, or for visibility? “Publish or perish” pressures, combined with the allure of media attention, risk fueling hype while shifting responsibility onto “the speed of the field.”
In one interview, Kosmyna mentioned that while many educators wrote to them, no AI companies did, and urged the industry to “be on the correct side of history.” But what does that even mean, when the study itself was rushed out and amplified its own negative hype? Can companies be blamed for not endorsing a paper that was released in a preprint with unresolved flaws?
The authors have also pointed to the long peer-review process as a reason for going public early. It is true that peer review can take months, even years. But that very slowness generally contributes to rigor. Careful review helps filter out unclear methods and prevent misleading claims from spreading unchecked.
This lack of accountability is visible across interviews and posts. Rather than acknowledging their own role in choosing ambiguous terms, selective framing, and early publicity, the authors often portrayed the misinformation as entirely the audience’s fault. But communication is part of science, and responsibility cannot be outsourced.
Another striking point: most preprints never receive this level of media coverage. The unusual buzz here likely stemmed from the MIT brand, dramatic framing, and the provocative title. This shows how institutional prestige and catchy wording can catapult preliminary findings into viral headlines.
Why Poor Communication Hurts Us All
Miscommunication, especially in science, cannot be considered a minor slip in today’s media ecosystem, because it creates ripple effects that extend far beyond the original study. Clickbait headlines steadily erode public trust, as audiences grow weary of a cycle where every paper is framed as either “revolutionary” or “terrifying.” Misunderstood findings also become fuel for future AI training data, reinforcing a feedback loop of distorted knowledge. At the same time, educators and students are left oscillating between fear and skepticism, unsure of what to believe. A telling example is how many posts equated “reduced EEG connectivity” with “brain damage.” In reality, EEG connectivity is a statistical correlation, not evidence of neurons dying, but once such a misinterpretation spreads, it becomes nearly impossible to put the genie back in the bottle.
A quick reminder: EEG measures surface-level electrical activity across thousands of neurons at once. It is useful for detecting general brain states and patterns of synchronization, but it cannot tell us whether one region is “damaging” another. What the study analyzed was functional connectivity: basically, whether activity in one area correlated with another (using tools like Granger causality). Correlation is not causation. A drop in connectivity does not mean fewer neurons, nor “weaker” brains; it simply reflects a different engagement pattern.
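To make the “connectivity is just correlation” point concrete, here is a minimal sketch (assuming NumPy; the two channels are simulated noise, not real EEG data). It estimates functional connectivity the simplest way possible, as the Pearson correlation between two signals, which is one of several association measures used in such analyses:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two "EEG channels": channel B partly tracks channel A.
n = 1000
a = rng.standard_normal(n)
b = 0.6 * a + 0.8 * rng.standard_normal(n)

# A basic functional-connectivity estimate: Pearson correlation
# between the two signals. It quantifies statistical association
# only -- a lower value would mean a different engagement pattern,
# not neurons dying or one region "damaging" another.
connectivity = np.corrcoef(a, b)[0, 1]
print(round(connectivity, 2))
```

Nothing in such a number distinguishes “weaker brain” from “different strategy”: rerunning the simulation with a smaller coupling coefficient lowers the correlation without anything being damaged.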
What Good Science Communication Looks Like
So, how can we do better? Research on science communication gives us clear guidelines (Burns et al., 2003; Fischhoff, 2013):
- Clarity: Use precise, everyday language. Don’t coin catchy but undefined terms like “cognitive debt.”
- Transparency: State limitations up front, not buried on page 141 of a PDF.
- Accessibility: Provide layered outputs: short summaries, videos, plain-language FAQs, and technical appendices.
- Contextualization: Situate findings in existing knowledge. In this case, this is not new knowledge. Decades of cognitive psychology research (Bjork & Bjork, 1992; McDaniel et al., 1988; Slamecka & Graf, 1978) have shown the “generation effect”: people remember material better when they actively produce it rather than passively read it.
- Responsibility: Avoid framing that invites hype, and anticipate how journalists or AI summarizers might distort the message.
What Is Good Science?
At its core, good science is about precise methods and honesty in communication. It requires humility in acknowledging what a study can and cannot say, reflexivity in anticipating how findings might be misunderstood, and integrity in resisting the temptation to overstate significance for the sake of impact. When misunderstood science leads to more bad science, because flawed interpretations feed into future AI models, communication itself becomes part of the scientific method. Good science is humble: it tells us not only what we know, but also what we don’t (in the spirit of Feynman’s humility).
Closing Reflection
The real lesson from “Your Brain on ChatGPT” is about how we, as a community of educators and communicators, handle science in the public eye. Do we want our students and colleagues to come away with fear, or with understanding? In an era where AI both produces and consumes our knowledge, responsible science communication is a necessity. Otherwise, the true “cognitive debt” we accrue will not be in our brains, but in our collective trust in science.
References
Bjork, R. A., & Bjork, E. L. (1992). A new theory of disuse and an old theory of stimulus fluctuation. In A. Healy, S. Kosslyn, & R. Shiffrin (Eds.), From learning processes to cognitive processes: Essays in honor of William K. Estes (Vol. 2, pp. 35–67). Erlbaum.
Burns, T. W., O’Connor, D. J., & Stocklmayer, S. M. (2003). Science communication: A contemporary definition. Public Understanding of Science, 12(2), 183–202. https://doi.org/10.1177/09636625030122004
Fischhoff, B. (2013). The sciences of science communication. Proceedings of the National Academy of Sciences, 110(Suppl. 3), 14033–14039. https://doi.org/10.1073/pnas.1213273110
Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task [Preprint]. arXiv. https://arxiv.org/abs/2506.08872
McDaniel, M. A., Waddill, P. J., & Einstein, G. O. (1988). A contextual account of the generation effect: A three-factor theory. Journal of Memory and Language, 27(5), 521–536. https://doi.org/10.1016/0749-596X(88)90023-X
Slamecka, N. J., & Graf, P. (1978). The generation effect: Delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory, 4(6), 592–604. https://doi.org/10.1037/0278-7393.4.6.592
Afon (Mohammad) Khari recently completed a Master’s in Brain and Cognitive Sciences at the University of Amsterdam. He holds a BA in English Literature, an MA in Philosophy of Art, and a CELTA. Afon reads and researches on the integration of neuroscience into pedagogy.
