AI News

Chinese scientists find first evidence that AI could think like a human | South China Morning Post

Jun 15, 2025

In a groundbreaking discovery, Chinese scientists have presented the first evidence suggesting that artificial intelligence, specifically large language models (LLMs), can spontaneously develop a human-like system for understanding and categorizing natural objects. This finding offers crucial insights into the cognitive capabilities of AI models and contributes to the ongoing debate about whether these systems can truly emulate human thought processes.

The research team's work, published in the peer-reviewed journal Nature Machine Intelligence, indicates that LLMs, trained on extensive linguistic and multimodal data, can potentially create object representations that share fundamental similarities with human conceptual knowledge, a key aspect of human cognition.

The study's significance lies in its focus on object representation, a fundamental element of human cognition. The researchers investigated whether LLMs could independently develop a system for understanding and sorting natural objects in a manner analogous to humans. The positive findings suggest that AI models can move beyond mere data processing and begin to exhibit cognitive functions that reflect human thinking.

This research opens new avenues for exploring the potential of AI to mimic and understand human cognitive processes, which could have far-reaching implications for fields like artificial intelligence, cognitive science, and human-computer interaction. The research team's work highlights the potential of LLMs to evolve beyond their current capabilities.

LLMs are trained on massive datasets of text, and in the case of multimodal large language models (MLLMs), on visual and audio data as well. The study asked whether this training alone could give rise to human-like object representations, and its results indicate that it can: the models spontaneously formed a system for comprehending and sorting natural objects, a process considered a pillar of human cognition.

This discovery offers new evidence in the debate over the cognitive capacity of AI models, suggesting that artificial systems that reflect key aspects of human thinking may be possible.