The new method turns chatbots like ChatGPT into carriers of encrypted messages by seamlessly embedding encoded content into text that appears human-written. The team says the technique is particularly useful in situations where conventional encryption is easily detected or blocked.
Like invisible digital ink, the real message is revealed only when the recipient has the right password or decryption key. The technology is expected to address common weaknesses in current encrypted communication systems, which are often vulnerable to hackers or compromised through backdoors.
However, the team also acknowledged that the technology could be misused for malicious purposes. The research results were published on April 11 on the arXiv preprint database, and have not yet undergone peer review.
This new technique could help journalists and citizens avoid oppressive surveillance systems. Photo: Yuichio Chino
“This is a very interesting study, but like any technology, the ethical aspects of its use — or misuse — need to be considered to determine the appropriate scope of application,” Mayank Raikwar, study co-author and a researcher in networks and distributed systems at the University of Oslo in Norway, told Live Science in an email.
The team built a system called EmbedderLLM, which uses an algorithm to insert secret messages into specific locations in AI-generated text — like hiding treasure along a road. The resulting text looks completely natural, as if written by a human, and the hidden message is invisible to current detection tools. The recipient uses another algorithm, which acts as a treasure map, to locate the hidden characters and decrypt the message.
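The paper's actual algorithm is not reproduced here, but the "treasure map" idea can be sketched in miniature: a shared key seeds a deterministic random generator on both sides, and each payload bit is encoded by choosing between two equally plausible words, with the key deciding which word stands for which bit. The word-pair lists below are purely hypothetical stand-ins for the alternative tokens a language model might propose at each position.

```python
import hashlib
import random

def keyed_rng(key: bytes) -> random.Random:
    # Derive a deterministic RNG from the shared key; this plays the role
    # of the "treasure map" (a simplification, not the paper's scheme).
    return random.Random(hashlib.sha256(key).digest())

def embed(bits, key, candidates):
    # Hide one bit per slot by picking one of two plausible words.
    rng = keyed_rng(key)
    words = []
    for bit, (a, b) in zip(bits, candidates):
        if rng.random() < 0.5:      # key decides which word means 0 or 1
            a, b = b, a
        words.append(a if bit == 0 else b)
    return " ".join(words)

def extract(text, key, candidates):
    # Replay the same keyed choices to recover the bits.
    rng = keyed_rng(key)
    bits = []
    for word, (a, b) in zip(text.split(), candidates):
        if rng.random() < 0.5:
            a, b = b, a
        bits.append(0 if word == a else 1)
    return bits
```

Without the key, either word in each pair is an ordinary, fluent choice, so the text carries no obvious statistical fingerprint; with the key, the recipient replays the same coin flips and reads the bits back out.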
Messages created with EmbedderLLM can be sent over any messaging platform, from in-game chat to WhatsApp and similar apps.
“Using large language models (LLMs) for cryptographic purposes is technically feasible, but the effectiveness depends on the cryptography used,” Yumin Xia, chief technology officer at blockchain company Galxe, told Live Science in an email. “Although it depends on the specifics, the idea is generally feasible based on existing cryptography.”
However, one of the technique's biggest weaknesses is the initial exchange of the secret keys needed to encrypt and decrypt subsequent messages. The system can operate with either symmetric cryptography (sender and recipient share a secret key) or public-key cryptography (only the recipient holds the private key).
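The article does not say how that first key exchange would be done; a classic generic solution is Diffie–Hellman key agreement, sketched below with toy parameters (a real deployment would use large standardized groups or, for quantum resistance, a post-quantum key-encapsulation mechanism).

```python
import hashlib

# Toy Diffie-Hellman key agreement: both sides end up with the same
# symmetric key without ever sending it directly.
p, g = 23, 5                  # public prime and generator (toy-sized)
a, b = 6, 15                  # Alice's and Bob's private values

A = pow(g, a, p)              # Alice publishes A = g^a mod p
B = pow(g, b, p)              # Bob publishes B = g^b mod p

shared_alice = pow(B, a, p)   # Alice computes B^a mod p
shared_bob = pow(A, b, p)     # Bob computes A^b mod p
assert shared_alice == shared_bob

# Both derive the same symmetric key from the shared secret.
key = hashlib.sha256(str(shared_alice).encode()).digest()
```

An eavesdropper sees only `p`, `g`, `A`, and `B`; recovering the shared secret from those values is the hard problem that protects the exchange.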
According to the team, once the initial key exchange is done, EmbedderLLM operates on cryptography designed to withstand attack even by future quantum computers, helping the encryption remain secure in the long term.
The researchers believe the technology could be useful for journalists and citizens living under censorship. “We need to find important applications for this framework,” Raikwar said. “For repressed citizens, this technology offers a safer way to communicate without being detected.”
He also said the technology could help journalists and activists exchange information discreetly in places where there is heavy press surveillance.
However, despite impressive progress, experts say practical implementation of AI encryption techniques is still a long way off.
“Although some countries have introduced some restrictions, whether this framework can survive in the long term will depend on the need and the extent of its application in practice,” Xia said. “For now, this study is an interesting test case for a hypothetical scenario.”
Source: https://doanhnghiepvn.vn/cong-nghe/cac-nha-khoa-hoc-su-dung-ai-de-ma-hoa-thong-diep-bi-mat-ma-he-thong-an-ninh-mang-khong-the-nhin-thay/20250516111710551