AI works best when given problems that have clear processes and fixed formulas. But when dealing with situations that require subjective judgment or involve personal preferences, AI can make human-like mistakes - Photo: AI
A new study published in the journal Manufacturing & Service Operations Management by scientists at five universities in Canada and Australia evaluated how ChatGPT (specifically OpenAI's GPT-3.5 and GPT-4 models) behaved across 18 well-known cognitive biases from human psychology, including confirmation bias, the endowment effect, the sunk cost fallacy, and the certainty effect, among others.
The results showed that in nearly half of the tested scenarios, ChatGPT displayed the same irrational or emotion-driven behavior as humans, despite the system's reputation for consistent logical reasoning.
Is AI really as "impartial" as expected?
Dr. Yang Chen, associate professor of operations management at Ivey Business School (Canada) and lead author of the study, commented: "AI works best when assigned to solve problems with clear processes and fixed formulas. But when dealing with situations that require subjective judgment or involve personal preferences, AI can make mistakes like humans."
The research team presented ChatGPT with hypothetical scenarios familiar from psychology research, adding practical business contexts such as inventory management or negotiating with suppliers.
Notably, the AI models still exhibited cognitive biases even when the questions were reframed from abstract settings into realistic business situations.
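To make that setup concrete, the sketch below shows one way such a bias probe could be run against the two models through OpenAI's chat completions API. The sunk-cost prompts, the business reframing, and the ask helper are illustrative assumptions for this article, not the study's actual materials.

```python
# A minimal sketch (not the study's code) of probing ChatGPT for a cognitive bias,
# asking the same dilemma in an abstract framing and a business framing.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical sunk-cost prompts: the wording is invented for illustration.
FRAMINGS = {
    "abstract": (
        "You paid $200 for a non-refundable ticket to event A and $100 for "
        "event B, which you would enjoy more. Both are tonight. Which do you attend?"
    ),
    "business": (
        "Your firm has already spent $200,000 developing product A. A new analysis "
        "shows product B would be more profitable going forward. As operations "
        "manager, do you keep funding A or switch to B?"
    ),
}

def ask(model: str, prompt: str) -> str:
    """Send one bias-probe prompt and return the model's reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce randomness so repeated runs are comparable
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for model in ("gpt-3.5-turbo", "gpt-4"):
        for label, prompt in FRAMINGS.items():
            print(f"--- {model} / {label} framing ---")
            print(ask(model, prompt))
```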
GPT-4 is smarter, but not perfect
GPT-4, an improved version of GPT-3.5, excels at logic and probability problems. However, in simulations involving subjective judgment, such as deciding whether to take a risky option to increase profits, GPT-4 showed a tendency to behave even more emotionally than humans.
In particular, the study noted that GPT-4 consistently gave biased responses in confirmation bias tests and was more susceptible than GPT-3.5 to the "hot-hand fallacy", the tendency to expect a streak of random outcomes to continue.
In contrast, the AI managed to avoid some biases that humans commonly fall into, such as base-rate neglect and the sunk cost fallacy.
The reason why ChatGPT exhibits human-like bias comes from the training data itself, which is full of our own biased behaviors and thinking - Photo: AI
The Origin of AI Bias: From Human Data
According to the team, the reason ChatGPT exhibits human-like biases comes from the training data itself, which is full of our own biased behaviors and thinking. The fact that the AI is fine-tuned on human feedback further reinforces this bias, as the model is "rewarded" for responses that seem reasonable rather than for responses that are strictly accurate.
“If you want accurate and unbiased results, use AI for tasks that you already trust a computer to do well,” Dr. Chen recommends. “But if you’re dealing with strategic or emotional issues, humans still need to monitor and intervene, even if it’s just by rephrasing the question.”
"AI should be treated as an employee with important decision-making power, which means it needs to be monitored and ethically adhered to. Otherwise, we are inadvertently automating faulty thinking instead of improving it," said co-author Meena Andiappan, associate professor of human resources and management at McMaster University (Canada).
Source: https://tuoitre.vn/ai-cung-mac-sai-lam-phi-ly-va-thien-vi-nhu-con-nguoi-20250505103652783.htm