Your mind works differently when you use generative AI for a task than when you use your brain alone. You're less likely to remember what you did. That's the seemingly obvious conclusion of an MIT study that examined how people think while writing an essay, one of the first scientific studies of how gen AI affects our brains.
The study is a preprint that has not been peer-reviewed. It is small (54 participants), but it points to the need for further research into how tools like OpenAI's ChatGPT affect the way our brains work. OpenAI did not immediately respond to a request for comment. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The findings show a significant difference in what happens in your brain and with your memory when you complete a task using an AI tool rather than when you do it with just your brain. But don’t read too much into those differences — this is just a glimpse at brain activity in the moment, not long-term evidence of changes in how your brain operates all the time, researchers said.
"We want to try to give some first steps in this direction and also encourage others to ask the question," Nataliya Kosmyna, an MIT researcher and the study's lead author, told me.
The rapid growth of AI tools such as chatbots is changing the way we work, find information and write. It has all happened so quickly that it's easy to forget ChatGPT was first popularized at the end of 2022, just a few short years ago. We're only beginning to see research on how AI is affecting our lives.
Here's what the MIT study revealed about what happens in our brains, and what future studies could tell us.

This is your ChatGPT brain
The MIT researchers divided their 54 participants into three groups and asked each to write essays over a period of several weeks. One group had access to ChatGPT, another was permitted to use a search engine (Google) and the third had to rely on their own brains. The researchers analyzed the texts the participants produced, interviewed the subjects immediately after they wrote the essays and recorded their brain activity using electroencephalography (EEG). The brain-only group produced essays that were more distinct, while the LLM group's essays were more similar to one another. The interviews conducted after the essays were written turned up the more interesting findings: People who relied on their own brains were better able to remember and quote their writing than those who used search engines or LLMs.
Perhaps it isn't surprising that people who relied more on LLMs, and may have copied and pasted from the chatbot's responses, were less able to quote what they had "written." Kosmyna said those interviews happened right after the writing took place, which made the lack of recall notable. "You wrote it, didn't you?" she said. "Aren't you supposed to know what it was?"
The EEG results also showed significant differences among the three groups. The brain-only group showed more neural connectivity, meaning interaction between different regions of the brain, than the search engine group, and the LLM group showed the least. Again, this isn't a terribly surprising conclusion: Using tools means using less of your brain on a particular task. But Kosmyna said the research helped pin down exactly what those differences were. "The idea was to look closer to understand that it's different, but how is it different?" she said.
Relying on LLMs produced "weaker memory traces, reduced self-monitoring and fragmented authorship," the study's authors wrote. That can be a problem in a classroom environment: "If users rely heavily on AI tools, they may achieve superficial fluency but fail to internalize the knowledge or feel a sense of ownership over it."
The researchers invited participants back for a fourth essay session, in which each was assigned to a different group. The findings from that much smaller group of participants (just 18) showed that those who started in the brain-only group still showed more activity even when using an LLM, while those who started in the LLM group showed less connectivity without the LLM than the initial brain-only group had shown.
It’s not ‘brainrot.’
Many of the headlines that followed the study's publication claimed ChatGPT use was "rotting" our brains or causing significant long-term problems. That's not quite what the researchers found, Kosmyna said. The study looked at the brain activity that happened while participants were working, their internal wiring in the moment, along with their memory of what they had done at the time.
Understanding the long-term effects would require longer-term research and different methods. Kosmyna said future studies could examine other gen AI use cases, such as coding, or employ technology that examines other parts of the brain, such as functional magnetic resonance imaging, or fMRI.
While the use of LLMs hasn't been studied at length, it's also likely their effect on our brains isn't as significant as you might think, said Genevieve Stein-O'Brien, an assistant professor of neuroscience at Johns Hopkins University, who wasn't involved in the MIT research. She studies how biology and genetics help build and develop the brain, which happens early in life. Those critical periods tend to close during childhood or adolescence, she said.
"All of this happens way before you ever interact with ChatGPT or anything like that," Stein-O'Brien told me. The situation may be different for children, who are increasingly likely to come into contact with AI technology, although she noted that studying children raises ethical concerns for scientists who want to research human behavior.
(Image caption: You can have a chatbot help you write an essay, but will you remember what you write? Thai Liang Lim/Getty Images)

Why care about essay writing anyway?
The idea of studying the effect of AI use on essay writing might sound pointless to some. After all, wasn’t the point of writing an essay in school to get a grade? Why not outsource that work to a machine that can do it, if not better, then more easily?
The MIT study gets to the point of the task: Writing an essay is about developing your thinking, about understanding the world around you.
“We start out with what we know when we begin writing, but in the act of writing, we end up framing the next questions and thinking about new ideas or new content to explore,” said Robert Cummings, a professor of writing and rhetoric at the University of Mississippi.
Cummings has done similar research on how computer technologies affect the way we write. One study used sentence-completion technology, commonly known as autocomplete. He asked 119 writers to compose an essay; about half used computers with Google Smart Compose installed, while the other half did not. Did the tool make them faster writers, or did writing take longer because they had so many choices? The result: They wrote about the same number of words in the same period of time. "They weren't writing in different sentence lengths, with different levels of complexity of ideas," he told me. ChatGPT is a different animal. With sentence-completion technology, you still have control of the words and still need to make choices. In the MIT research, some participants simply copied and pasted what ChatGPT said. Some may not have even read the work they submitted as their own.
"My personal opinion is that when students are using generative AI to replace their writing, they're kind of surrendering, they're not actively engaged in their project any longer," Cummings said. The MIT researchers noticed something similar in that fourth session: The group that had written three essays without tools showed higher levels of engagement when finally given them. "Such an approach may promote both immediate tool efficacy and lasting cognitive autonomy," they wrote.
Cummings said he has started teaching his composition classes without devices. Students write by hand in class, usually on topics that are more personal and therefore harder to feed into an LLM, and he said he no longer feels like he's grading papers written by AI. His students get the chance to develop their own voices before they seek help from a tool. "I'm not going back," he said.

