
ChatGPT, DeepSeek distort scientific data

AI models such as ChatGPT and DeepSeek have been found to be prone to distorting scientific content when summarizing it, especially in the medical field.

Báo Khoa học và Đời sống, 08/07/2025

A group of researchers in Germany recently warned of the risk of bias when AI chatbots are used to condense research. After analyzing 4,900 human-written scientific abstracts, the team compared how several AI models processed the same information. The results showed that most chatbots overgeneralized, even when prompted to summarize accurately.


AI is prone to bias when summarizing scientific research.

In tests, AI models made five times as many errors as human experts when given no specific guidance. Even when explicitly asked for accuracy, their error rate was still twice that of a standard summary. “Generalizations sometimes seem harmless, but they actually change the nature of the original research,” said one of the researchers. “It’s a systematic bias.”

Notably, newer versions of these chatbots have not fixed the problem; they have made it worse. Because AI-generated summaries read smoothly and engagingly, they can easily appear credible even when the underlying content has been distorted. In one instance, DeepSeek changed the phrase “safe and can be successfully performed” to “safe and effective treatment”, a misreading of the original study’s conclusions.

In another example, the Llama model recommended a diabetes medication for young people without specifying dosage, frequency, or side effects. If the reader is a doctor or healthcare professional who does not check the original research, summaries like this can pose a direct risk to patients.

Experts say the phenomenon is rooted in how AI models are trained. Many of today's chatbots learn from secondary sources, such as popular science articles, that have already simplified the original research. When AI then summarizes this already shortened content, the risk of distortion compounds.

Experts on AI in the mental health field say that technical safeguards need to be built in early, both in the development and in the use of AI.


Users need to be wary as chatbots can easily distort content.

As users increasingly rely on AI chatbots to learn about science, small errors of interpretation can quickly accumulate and spread, fostering widespread misperceptions. At a time when trust in science is already declining, this risk is all the more worrying and deserves serious attention.

The integration of AI into research and the dissemination of knowledge is an irreversible trend. However, experts insist that technology cannot replace the human role in understanding and verifying scientific content. When chatbots are used in high-stakes fields such as medicine, accuracy should be the top priority, not fluent language or response speed.


Source: https://khoahocdoisong.vn/chatgpt-deepseek-bop-meo-du-lieu-khoa-hoc-post1552971.html

