
ChatGPT's Answers Are as Good as None: Misdiagnosing Illness, Simulating Empathy

Nelson Aguilar of the technology news site CNET explains which questions users should not trust ChatGPT to answer.

VTC News | 10/07/2025

Below is an article by reporter Nelson Aguilar, translated from the technology news site CNET, on tasks users should not rely on ChatGPT to handle.

1. Diagnosing health problems

I used to enter my symptoms into ChatGPT out of curiosity, but the answers I got were sometimes a nightmare. Scrolling through the list of possible diagnoses, the results ranged from dehydration to the flu to… cancer. Once, when I had a lump in my breast, I entered that information into ChatGPT. The result: the AI thought I might have cancer.

In fact, the doctor diagnosed me with a lipoma, a quite common benign tumor (occurring in about 1 in 1,000 people).

I don’t deny that ChatGPT can be useful in the health space: AI can help you draft questions for your next appointment, translate confusing medical terminology, or build a timeline of your symptoms so you arrive better prepared. That can make doctor visits less stressful. But AI can’t run blood tests, can’t perform physical exams, and doesn’t carry professional liability insurance. Understand its limitations.

2. Mental health care

ChatGPT can suggest some calming techniques, but it clearly can't call you when you're in a real emotional crisis. I know people who use ChatGPT as "alternative therapy." CNET reporter Corin Cesaric found it somewhat helpful in dealing with grief, as long as you keep the tool's limitations in mind.

But as someone currently seeing a therapist, I can confirm: ChatGPT is a pale imitation, and at times downright dangerous. ChatGPT has no life experience, can’t read your body language or tone of voice, and possesses no real empathy. ChatGPT only simulates empathy.

A professional therapist, by contrast, operates under laws and codes of ethics designed to protect you. ChatGPT does not. Its advice can be off the mark, miss red flags, or unintentionally reinforce biases in its training data. Deep, sensitive, complex issues should be left to humans.

Users should not completely trust ChatGPT's answers to health issues and emergency situations.

3. Making decisions in emergency situations

If the carbon monoxide (CO) alarm sounds, don't open ChatGPT to ask whether you're in danger. Leave the house first and ask questions later. An AI model can't smell gas, can't detect smoke, and won't call the fire department for you.

In an emergency, every second you spend typing is a second you're not evacuating or calling 911 (the national emergency number in the US). ChatGPT only works from what you type, and in an emergency that information is often too little, too late. Use ChatGPT to explain what happened afterward, not as a first responder.

4. Personal financial or tax planning

ChatGPT can explain what an ETF is, but it doesn’t know how much debt you have, what tax bracket you’re in, what your filing status is, or what your retirement goals are. And the data it relies on may be outdated, especially after recent tax law updates or interest rate adjustments.

When real money, deadlines, and potential IRS penalties are on the line, call a professional, not an AI. Also note that anything you enter into an AI may be stored and used to train the model.

ChatGPT has over 5 billion visits per month. (Source: explodingtopics)

5. Handling confidential or regulated data

As a tech journalist, I receive a lot of press releases covered by non-disclosure clauses, but it has never occurred to me to drop those documents into ChatGPT for a summary or interpretation. If I did, the content would leave my control and end up on a third-party server, outside the legal bounds of the non-disclosure agreement.

The same risk applies to customer contracts, medical records, and the like, as well as tax returns, birth certificates, driver’s licenses, and passports. Once you enter sensitive information into ChatGPT, you can’t be sure where it’s stored, who can see it, or whether it will be used to train future models. ChatGPT is also not immune to hackers or other cybersecurity threats.

6. Doing something illegal

This one needs no explanation.

7. Cheating in school

I won’t pretend to be a saint: I once used my iPod Touch to sneak a peek at some tricky math formulas during an AP Calculus exam in high school. But in terms of scale, AI has taken cheating to a whole new level.

Anti-plagiarism software like Turnitin is getting better at detecting AI-generated text, and suspension, expulsion, or revocation of a degree are all possible outcomes. ChatGPT should be a learning assistant, not a machine that does your homework. If you let AI do all the work, you deprive yourself of the opportunity to learn.

8. Following breaking news and information

Since OpenAI launched ChatGPT Search in late 2024 (opened to all users in February 2025), the chatbot has been able to search the web for news, stock prices, exchange rates, sports scores, and more, with source links for you to verify.

However, it does not update continuously in real time; each refresh requires a new prompt. So when you need information instantly, turn first to live coverage, official news sites, press releases, or push notifications from apps.

9. Predicting scores

I have seen ChatGPT fabricate information: false player statistics, misreported injuries, even wrong win-loss records. ChatGPT cannot predict tomorrow’s scoreboard.

10. Drafting a will or legal contract

ChatGPT is great for explaining basic concepts. But if you ask it to write legal documents, you're taking a big gamble, because the law varies from place to place.

Let ChatGPT help you create a list of questions for your lawyer, then let a real lawyer draft a legally binding document.

11. Creating art

This is just a personal opinion: I don’t believe AI should be used to create real art. I’m not against AI; I still use ChatGPT to brainstorm ideas or come up with titles. But it’s a tool to assist, not a replacement. Use ChatGPT if you want, but don’t create art with AI and claim it as your own. That’s… annoying.

Ngoc Nguyen (Source: CNET)


Source: https://vtcnews.vn/chatgpt-tra-loi-co-cung-nhu-khong-khi-chan-doan-sai-benh-gia-lap-su-dong-cam-ar953726.html

