A new study published in the journal Proceedings of the National Academy of Sciences (PNAS) shows that while generative AI tools like ChatGPT can help workers complete tasks faster and more efficiently, the help comes at a social cost: those workers are more likely to be viewed by others as lazy, incompetent, or replaceable.
The study, conducted by a team of social scientists, draws on attribution theory – the idea that people tend to attribute others' actions to intrinsic motivation or ability rather than to circumstances. Asking for help, for example, is sometimes read as a sign of personal weakness. In the same way, using AI can be seen as a lack of ability or effort rather than as a legitimate efficiency tool.

In four experiments involving more than 4,400 participants, the team found a consistent pattern. In the first experiment, participants imagined completing a task either with AI or with traditional tools, then predicted how colleagues and managers would judge them. Those who imagined using AI worried about being seen as lazy, replaceable, and less diligent, and said they would be reluctant to disclose their AI use.
A second experiment confirmed that this bias actually exists. Participants rated one of three hypothetical employees: one who used AI, one who asked a coworker for help, and one who completed the work unaided. The AI user was seen as lazier, less competent, less diligent, and less independent than the other two. Notably, this perception held regardless of the depicted employee's gender, age, or industry.
A third experiment tested whether these biases influence actual decisions. Participants acted as recruiters, choosing between candidates who did or did not use AI in their jobs. Recruiters unfamiliar with AI tended to pick candidates who did not use it; conversely, recruiters who frequently used AI themselves were more comfortable with the technology and tended to pick candidates who did. This suggests that personal experience with AI strongly shapes evaluators' attitudes.
In a final experiment, the team examined whether the specific job context alters this bias. When candidates applied for digital tasks, such as composing emails, AI users were not perceived as lazy and were even rated more highly. But when candidates applied for manual tasks, such as taking notes by hand, AI users were still rated lower. In short, the penalty was strongest when AI was used for tasks that did not call for it.
Interestingly, participants who admitted to using AI regularly were less likely to judge others negatively, suggesting that familiarity and hands-on experience can reduce the stigma. Lead author Jessica Reif, a doctoral student at Duke University's Fuqua School of Business, said she began the project after hearing from employees who worried about being judged for using AI even though their work performance had not declined.
The study has methodological strengths, including carefully constructed control conditions. The authors also acknowledge its limitations: the scenarios were hypothetical rather than observations of real behavior, and raters were not personally acquainted with the people they judged, both of which may play out differently in an actual workplace.
These findings underscore that while AI is becoming a useful productivity tool, workers who use it also need to weigh its effect on their image and reputation. The results are a reminder that for AI to truly serve as an effective assistant, workplaces must change how they view it and avoid hastily labeling AI users as incompetent or lazy. Workers, for their part, should not only take advantage of the technology but also stay transparent about using it, while continuing to develop the skills and personal signature that keep their contribution from being "dissolved" into the image created by AI.
(According to PsyPost)

Source: https://vietnamnet.vn/su-dung-chatgpt-trong-cong-vic-bi-danh-gia-luoi-bieng-kem-nang-luc-2422059.html