A new study published in the scientific journal Proceedings of the National Academy of Sciences (PNAS) shows that while generative AI tools like ChatGPT can help workers complete tasks faster and more efficiently, the assistance comes at a social cost: workers who use them are more likely to be viewed by others as lazy, incompetent, or less diligent.
The study, conducted by a team of social scientists, draws on attribution theory – the idea that people tend to explain others' actions in terms of intrinsic motivations or abilities rather than circumstances. For example, asking for help is sometimes read as a sign of personal weakness. Similarly, using AI can be interpreted as a lack of ability or effort rather than as a legitimate way to work more efficiently.

In four experiments involving more than 4,400 participants, the team found some striking results. In the first experiment, participants imagined completing a task using either AI or traditional tools and then predicted how they would be viewed by coworkers and managers. Those who imagined using AI expected to be seen as lazy, replaceable, and less diligent, and said they would be reluctant to disclose their use of the technology.
A second experiment showed that these fears are well founded. Participants rated hypothetical employees: one who used AI, one who asked a coworker for help, and one who completed the work unaided. The AI users were judged lazier, less competent, less diligent, and less independent than the other two groups. Remarkably, this perception held regardless of the gender, age, or industry of the employee described.
A third experiment tested whether these biases influenced actual decisions. One group of participants acted as recruiters, choosing between candidates who did or did not use AI in their jobs. Recruiters who were unfamiliar with AI tended to select candidates who did not use it, while those who used AI frequently were more comfortable with the technology and tended to select candidates who did. This suggests that personal experience with AI strongly shapes raters' attitudes.
In a final experiment, the team examined whether the nature of the job changed this bias. When candidates applied for jobs involving digital tasks, such as composing emails, AI users were not perceived as lazy and were even rated more highly. But when candidates applied for jobs involving manual tasks, such as taking notes by hand, AI users were still rated lower. The bias was strongest when AI was used for tasks where it offered no obvious benefit.
Interestingly, participants who reported using AI regularly were less likely to judge others who used it negatively, suggesting that familiarity and hands-on experience can reduce stigma. Lead author Jessica Reif, a doctoral student at Duke University’s Fuqua School of Business, said she began the project after hearing from employees who expressed concerns about being judged for using AI, even though their work performance had not declined.
The study has methodological strengths, such as carefully constructed control conditions. However, the authors also acknowledge limitations: the scenarios were hypothetical rather than observed behavior, and the raters did not personally know the people they were evaluating, which may differ from real workplace settings.
These findings emphasize that while AI is becoming a useful tool for boosting productivity, using it also carries consequences for personal image and reputation. The results are a reminder that for AI to become a truly effective assistant, workplaces themselves need to change their perceptions and avoid hastily labeling AI users as incompetent or lazy. Workers, in turn, should be transparent about their use of the technology while continuing to develop distinctive skills and a personal stamp so that their contribution does not "dissolve" into the output AI produces.
(According to PsyPost)

Source: https://vietnamnet.vn/su-dung-chatgpt-trong-cong-viec-bi-danh-gia-luoi-bieng-kem-nang-luc-2422059.html