According to BGR, despite its rapid popularity and praise for its performance, DeepSeek R1, a new AI model from China, is facing a wave of strong criticism from the cybersecurity community.
DeepSeek R1 generates more malicious content than other AIs.
PHOTO: SCREENSHOT FROM WIRED
DeepSeek R1 tends to generate more malicious content.
Enkrypt AI's latest research shows that R1 has a significantly higher potential for generating malicious content compared to competitors like ChatGPT.
Specifically, R1 was found to produce biased output, malware, toxic language, extremist content, and even information related to dangerous chemical and biological weapons. Worryingly, such content could be generated without any 'jailbreak' techniques.
Some key points from the report:
- R1 is three times more biased than Anthropic's Claude-3 Opus.
- R1 is four times more vulnerable to generating malware than OpenAI's o1.
- R1 is 11 times more likely to generate harmful content than o1.
These risks raise serious questions about the safety of DeepSeek R1, especially given its widespread use. Users should exercise caution when interacting with R1 and consider safer alternatives.
In addition, DeepSeek is embroiled in several other controversies, such as:
- Censorship of content sensitive to the Chinese government.
- Sending user data to China, raising privacy concerns.
- Storing user data insecurely.
Although DeepSeek may release updates to improve R1's security, a major challenge remains: users can install the model locally, where it will not be updated automatically. The future of DeepSeek R1 remains a big question mark, especially since security and privacy concerns have not been fully addressed.
Source: https://thanhnien.vn/deepseek-r1-tao-noi-dung-doc-hai-gap-nhieu-lan-cac-ai-khac-185250201211507372.htm