
New research: AI does not 'reason' like humans

Artificial intelligence (AI) programs have achieved impressive results, but new research suggests that claims AI can reason like humans are overstated.

Báo Tuổi Trẻ, 07/09/2025


Artificial intelligence programs have achieved many breakthroughs in recent years - Photo: REUTERS

We cannot observe the entire process by which large language models (LLMs) turn input data into output.

To make these systems easier to describe, scientists have borrowed everyday terms like “reasoning” for how the programs work. They also say that the programs can “think,” “reason,” and “understand” the way humans do.

Exaggerating the capabilities of AI

Over the past two years, many AI executives have used exaggerated language to hype up simple technical achievements, according to ZDNET on September 6th.

In September 2024, OpenAI announced that the o1 reasoning model "uses a chain of inference when solving problems, similar to the way humans think for a long time when faced with difficult questions."

However, AI scientists disagree. They argue that AI does not possess human-like intelligence.

A study posted on the arXiv preprint server by a group of authors at Arizona State University (USA) tested the reasoning ability of AI with a simple experiment.

The results showed that "inference by chain of thought is a fragile illusion": not a real logical mechanism, but a sophisticated form of pattern matching.

Chain-of-thought (CoT) prompting lets an AI model not only produce a final answer but also lay out each step of its reasoning, as in models such as OpenAI's o1 or DeepSeek R1.
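To make the idea concrete: a CoT prompt asks the model to write out intermediate steps before its final answer instead of the answer alone. The Python sketch below is illustrative only; the question and prompt wording are assumptions, not taken from the study or the models above.

```python
# Illustrative sketch of chain-of-thought (CoT) prompting.
# The question and prompt wording are assumptions for illustration only.

question = "A shop sells pens at 3 for $2. How much do 12 pens cost?"

# Direct prompting: the model is asked for the answer alone.
direct_prompt = f"Q: {question}\nA:"

# CoT prompting: the model is nudged to emit intermediate steps first.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# A CoT-style completion then reads as a visible step-by-step trace:
sample_cot_output = (
    "12 pens is 12 / 3 = 4 groups of 3 pens.\n"
    "Each group costs $2, so the total is 4 * 2 = $8.\n"
    "Final answer: $8."
)

print(cot_prompt)
print(sample_cot_output)
```

The study's point is that such a trace can look like reasoning while being produced by pattern matching over similar training examples.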


Illustration of the GPT-2 language model by OpenAI - Photo: ECHOCRAFTAI

Testing what AI actually does

The research team stated that large-scale analyses show LLMs tend to rely on semantics and surface cues rather than on logical reasoning processes.

"LLM constructs superficial logic chains based on learned input associations, often failing on tasks that deviate from conventional reasoning methods or familiar patterns," the team explains.

To test the hypothesis that LLMs merely match patterns rather than actually make inferences, the team trained GPT-2, an open-source model released by OpenAI in 2019.

The model was first trained on very simple tasks over the 26 English letters, such as cyclically shifting a word's letters, for example turning "APPLE" into "EAPPL". The team then changed the task and asked GPT-2 to handle it.
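As a rough sketch of that setup (the exact task definitions in the paper may differ, and the second task here is an illustrative assumption), the training and held-out test examples could be generated like this:

```python
import string

ALPHABET = string.ascii_uppercase  # the 26 English letters

def cyclic_shift(word: str, k: int = 1) -> str:
    """Rotate the word's characters right by k positions,
    e.g. cyclic_shift("APPLE", 1) -> "EAPPL"."""
    k %= len(word)
    return word[-k:] + word[:-k]

def shift_letters(word: str, k: int = 1) -> str:
    """Replace each letter with the letter k positions later in the
    alphabet (a ROT-style cipher), e.g. "APPLE" -> "BQQMF"."""
    return "".join(ALPHABET[(ALPHABET.index(c) + k) % 26] for c in word)

# Training distribution: the model only ever sees one transformation
# (here, the cyclic shift).
train_example = ("APPLE", cyclic_shift("APPLE"))   # ("APPLE", "EAPPL")

# Test distribution: the task is changed to one the training data never
# covered; this is where CoT accuracy is reported to collapse.
ood_example = ("APPLE", shift_letters("APPLE"))    # ("APPLE", "BQQMF")

print(train_example, ood_example)
```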

The results showed that GPT-2 could not use CoT to accurately solve tasks that were absent from its training data.

Instead, the model tried to apply the closest tasks it had learned, so its "reasoning" could sound logical while its results were often wrong.
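One way to picture this failure mode is with a toy analogy (not the paper's method): a pure pattern matcher answers an unfamiliar instruction by reusing the closest transformation it has memorized, producing output that looks systematic but is wrong.

```python
# Toy analogy of pattern matching without reasoning (not the paper's method).

memorized_tasks = {
    "rotate right": lambda w: w[-1] + w[:-1],
    "uppercase": lambda w: w.upper(),
}

def fuzzy_answer(instruction: str, word: str) -> str:
    # Pick the memorized task whose name shares the most words with the
    # instruction, regardless of whether it is the requested transformation.
    best = max(
        memorized_tasks,
        key=lambda task: len(set(task.split()) & set(instruction.split())),
    )
    return memorized_tasks[best](word)

# "rotate left" was never learned, so the matcher confidently applies
# the nearest memorized task ("rotate right") and gets it wrong.
print(fuzzy_answer("rotate left", "APPLE"))  # "EAPPL", not the correct "PPLEA"
```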

The group concluded that users should not over-rely on or blindly trust LLMs' answers, as these models can produce "nonsense that sounds very convincing".

They also stressed the need to understand the true nature of AI, avoid hype, and stop promoting the idea that AI can reason like humans.

ANH THU

Source: https://tuoitre.vn/nghien-cuu-moi-ai-khong-suy-luan-nhu-con-nguoi-20250907152120294.htm

