
AI language models show activity patterns similar to human aphasia

New research from the University of Tokyo reveals striking similarities between how large language models (LLMs) process information and the brain activity patterns of people with certain types of aphasia. The study, published on May 14 in Advanced Science, could lead to improved AI systems and better diagnostic tools for language disorders.

The researchers used a sophisticated technique called “energy landscape analysis” to compare the internal processing patterns of AI systems such as ALBERT, GPT-2 and Llama with brain scans from patients with various forms of aphasia, language disorders usually caused by stroke or brain damage.

When fluent speech lacks accuracy

Many AI chatbot users have noticed how these systems produce confident, fluent responses that sound authoritative but contain factual errors or fabricated information, a phenomenon informally called “hallucination.”

What drew the researchers’ attention is how closely this behavior resembles a specific human language disorder: receptive aphasia.

“You can’t miss how some AI systems can sound articulate while still making significant errors,” said Professor Takamitsu Watanabe of the International Research Center for Neurointelligence at the University of Tokyo. “But what struck my team and me was the similarity between this behavior and that of people with Wernicke’s aphasia, who speak fluently but don’t always make much sense.”

This observation led Watanabe and his colleagues to investigate whether the two might share similarities at the level of internal mechanisms.

Mapping internal dynamics

The researchers applied energy landscape analysis, a technique originally developed by physicists and later adapted for neuroscience, to visualize how information flows through AI systems and the human brain.
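The article does not spell out the implementation, but in the neuroscience literature energy landscape analysis is commonly built on a pairwise maximum-entropy (Ising) model fitted to binarized activity. The sketch below illustrates that core idea under those assumptions; the names (`binarize`, `energy`, `local_minima`) are illustrative, and the model-fitting step that estimates `h` and `J` from data is omitted.

```python
# Minimal sketch of energy landscape analysis, assuming the common
# pairwise maximum-entropy (Ising) formulation used in neuroscience.
# Names are illustrative, not taken from the paper.
import numpy as np
from itertools import product

def binarize(signals):
    # Threshold each unit's time series at its own mean: +1 above, -1 below.
    return np.where(signals > signals.mean(axis=0), 1, -1)

def energy(state, h, J):
    # Ising energy E(s) = -sum_i h_i*s_i - sum_{i<j} J_ij*s_i*s_j
    # (J symmetric with zero diagonal; the 0.5 undoes double counting).
    return -h @ state - 0.5 * state @ J @ state

def local_minima(h, J, n):
    # A state is a local minimum of the landscape if flipping any single
    # unit raises its energy; these minima are the basins that activity
    # patterns settle into.
    minima = []
    for bits in product([-1.0, 1.0], repeat=n):
        s = np.array(bits)
        e = energy(s, h, J)
        if all(energy(np.where(np.arange(n) == i, -s, s), h, J) > e
               for i in range(n)):
            minima.append((bits, e))
    return minima
```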

Their analysis showed that large language models exhibit patterns strikingly similar to those seen in patients with receptive aphasia, particularly Wernicke’s aphasia. Both display what the researchers call a “bimodal distribution” in how information transitions between different internal states.

To explain this complex concept, Watanabe offers an analogy: “Imagine the energy landscape as a surface with a ball on it. Where the surface curves steeply, the ball rolls down and comes to rest, but where it is shallow, the ball rolls around chaotically,” he said. “In aphasia, the ball represents the person’s brain state. In LLMs, it represents the model’s continuing signal pattern, shaped by its instructions and internal dataset.”
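Continuing the hypothetical Ising sketch above (it reuses `energy` and `np`), the ball analogy corresponds to greedy descent on the landscape: from any starting state, flip whichever unit lowers the energy most until no flip helps.

```python
def roll_downhill(state, h, J):
    # The "ball" (current activity state) rolls until it rests in a local
    # minimum; on a shallow landscape many flips barely change the energy,
    # so added noise would let the state wander chaotically between basins.
    s = np.array(state, dtype=float)
    n = len(s)
    while True:
        flips = [np.where(np.arange(n) == i, -s, s) for i in range(n)]
        energies = [energy(f, h, J) for f in flips]
        best = int(np.argmin(energies))
        if energies[best] >= energy(s, h, J):
            return s  # no single flip lowers the energy: the ball is at rest
        s = flips[best]
```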

Major findings from the study

  • Both LLMs and people with receptive aphasia show a “bimodal distribution” in the transition frequency and the residence time of their network activity
  • Four different LLMs (ALBERT, GPT-2, Llama-3.1 and a Japanese-language variant) all show patterns resembling those of receptive aphasia
  • An abnormally elevated “Gini coefficient” in receptive aphasia is linked to poorer comprehension (see the sketch after this list)
  • Different types of aphasia can be classified according to their internal network dynamics
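The article names these metrics without defining them, but under their standard definitions (and given a hypothetical sequence of state labels, e.g. the basins from the sketches above), dwell times and transition counts follow from run-length encoding, and the Gini coefficient measures how unevenly the activity’s time is concentrated in a few states.

```python
def dwell_and_transitions(labels):
    # Run-length encode a sequence of visited states: each run's length is
    # a residence (dwell) time, and each run boundary is one transition.
    dwells, run = [], 1
    for prev, cur in zip(labels, labels[1:]):
        if cur == prev:
            run += 1
        else:
            dwells.append(run)
            run = 1
    dwells.append(run)
    return np.array(dwells), len(dwells) - 1

def gini(x):
    # Standard Gini coefficient: 0 means time is spread evenly across
    # states; values near 1 mean activity is stuck in a few dominant ones.
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Example: gini(dwell_and_transitions([0, 0, 0, 1, 0, 0, 2])[0])
```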

Impact on AI development and medical diagnosis

This study opens fascinating possibilities for both AI development and medical diagnosis. For neuroscience, it suggests new ways to classify language disorders based on internal brain activity rather than external symptoms alone.

For AI developers, these findings offer valuable insight into why language models sometimes produce inaccurate information so confidently. Understanding these parallels may lead to more reliable AI systems.

How will this study improve our future interaction with AI? Can it help us better understand and possibly treat human language disorders?

The researchers caution against drawing overly direct comparisons between AI systems and human brain disorders. “We’re not saying chatbots have brain damage,” Watanabe said. “But they may be locked into a rigid internal pattern that limits how flexibly they can draw on stored knowledge, just as in receptive aphasia. Whether future models can overcome this limitation remains to be seen, but understanding these internal parallels may be the first step toward smarter, more trustworthy AI.”

Research methods and limitations

The study analyzed resting-state fMRI data from stroke patients with four different types of aphasia, comparing them with stroke patients without aphasia and with healthy controls.

For the AI systems, the team examined the internal network activity of several publicly available large language models as they responded to input prompts.
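The article does not describe the recording pipeline, but one plausible way to capture comparable internal activity from a public model such as GPT-2 is the Hugging Face transformers library’s `output_hidden_states` option. The library calls below are real; treating the resulting hidden states as the “network activity” analyzed in the study is our assumption, not the paper’s stated method.

```python
import torch
from transformers import GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

# Any prompt works; this sentence is just an example input.
inputs = tokenizer("The patient spoke fluently but made little sense.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One activation tensor per layer (plus the embedding layer), each shaped
# (batch, tokens, features); binarizing these unit by unit would feed the
# energy-landscape sketch above.
hidden = torch.stack(outputs.hidden_states)
print(hidden.shape)  # (n_layers + 1, 1, seq_len, 768) for base GPT-2
```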

The researchers acknowledge the study’s limitations, including the relatively small sample sizes for some aphasia types, and the fact that they could only test smaller language models rather than the largest state-of-the-art systems (such as GPT-4).

Despite these limitations, the study offers a new approach to understanding both human language disorders and artificial intelligence systems. As we rely more on AI for information and assistance, understanding the similarities and differences between human and artificial cognition becomes increasingly important.

The study was supported by grants from the Japan Society for the Promotion of Science, the University of Tokyo and several other Japanese research institutions.
