Even tech experts can fall into traps.
According to Australian cybersecurity experts, hackers are using ChatGPT to craft phishing emails. The most dangerous aspect is that, because these emails are written with ChatGPT or similar machine learning models, they are nearly indistinguishable from legitimate correspondence, so even tech experts struggle to detect them. The real danger is that hackers use these emails to trick users into opening seemingly harmless documents or clicking links to websites, thereby spreading malware to their devices.

Earlier, in early March 2023, the British cybersecurity company Darktrace warned that the AI-powered chat tool ChatGPT may have raised the sophistication of email scams, allowing hackers to conduct targeted attacks with more complex deceptive tactics and a greater probability of success.
Also on this issue, Sergey Shykevich, a researcher at Check Point who studies ChatGPT, said his team began researching the possibility of AI assisting criminals in December 2021. By exploiting artificial intelligence's large language models, criminals can easily create phishing emails and malware.
Justin Fier, director of cyber intelligence & analytics at the cybersecurity company Darktrace, asserts that ChatGPT can help malicious actors create dozens of phishing emails precisely targeted at specific recipients, easily and from only simple prompts.
Given this danger, the only way to prevent it, according to experts, is for users to be more cautious when opening emails, especially those containing links or executable attachments.
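To make that advice concrete, here is a minimal, purely illustrative Python sketch of the kind of red-flag check the experts describe. It is not a tool any of the firms above ship: the function name phishing_red_flags, the extension list, and the sample message are all assumptions made for this example, and a real mail filter would need far more signals.

```python
# Illustrative sketch only: flag the two red flags named above,
# executable attachments and embedded links, in a raw email.
import re
from email import message_from_string
from email.message import Message

# Assumed extension list for the example; real filters use richer rules.
EXECUTABLE_EXTENSIONS = {".exe", ".js", ".scr", ".bat", ".vbs", ".msi"}
LINK_PATTERN = re.compile(r"https?://\S+")

def phishing_red_flags(raw_email: str) -> list[str]:
    """Collect human-readable warnings for one raw RFC 822 email."""
    msg: Message = message_from_string(raw_email)
    warnings = []
    for part in msg.walk():
        # Flag attachments whose file extension marks them as executable.
        filename = part.get_filename()
        if filename and any(filename.lower().endswith(ext)
                            for ext in EXECUTABLE_EXTENSIONS):
            warnings.append(f"executable attachment: {filename}")
        # Flag every link embedded in a plain-text body part.
        if part.get_content_type() == "text/plain":
            body = (part.get_payload(decode=True) or b"").decode(errors="replace")
            for url in LINK_PATTERN.findall(body):
                warnings.append(f"embedded link: {url}")
    return warnings

if __name__ == "__main__":
    sample = (
        "From: helpdesk@example.com\n"
        "Subject: Urgent: verify your account\n"
        "Content-Type: text/plain\n"
        "\n"
        "Please confirm your details at http://example.com/verify today."
    )
    for warning in phishing_red_flags(sample):
        print("WARNING:", warning)
```

Such a check cannot, of course, catch the well-written AI-generated text itself; it only surfaces the links and attachments that users are urged to treat with suspicion.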
ChatGPT aiding and abetting criminals - a likely scenario.
"In the 20 years of shaping the development of the internet space, we could never have imagined that a consumer internet application like ChatGPT would grow at such a phenomenal rate. For comparison, TikTok took 9 months to reach 100 million monthly users, and Instagram took about 2.5 years," a study once asserted, witnessing ChatGPT's rocket-fast rise .
But in a relatively short time, reality showed that ChatGPT was not only "amazing and will change our world," but, as Russell Reeder, CEO of Netrix Global, put it, "this technology will soon be uncontrollable," and "there needs to be a force dedicated to controlling this rapidly emerging trend."
The reason for the "need to control" ChatGPT is actually quite simple. In early February 2023, cybersecurity company Blackberry published a survey of 1,500 IT professionals, in which 74% said they were concerned about ChatGPT aiding criminals, and 71% believed that OpenAI's application had been used for hacking and fraud.
Shishir Singh, BlackBerry's Chief Technology Officer, even issued a statement asserting: "There is clear evidence that malicious actors are testing this tool. We hope to soon find more concrete examples of how hackers are using ChatGPT for nefarious purposes."

In response to the growing concerns about ChatGPT, many countries and organizations have taken action. In early February 2023, the European Union's Commissioner for Industry, Thierry Breton, stated that the EU would impose new regulations related to AI to address concerns about ChatGPT risks and ensure that users in Europe can trust AI technology.
According to Mr. Breton, the risks posed by ChatGPT and other AI systems have underscored the urgency of new regulation. And one thing, he stressed, absolutely must be done: "People need to be warned that they are dealing with a chatbot, not a human being."
ChatGPT, and AI in general, is an irreversible global trend. The most important task, therefore, is to maximize the benefits and minimize the risks of these artificial intelligence products. Clearly, until fundamental solutions to ChatGPT's dangers are found, the core issue remains the attitude, awareness, and actions of users; among these, caution and vigilance are paramount.
According to experts, individual users should treat information from ChatGPT with caution, probe it with varied questions, and verify the chatbot's answers before sharing or acting on them, to avoid unnecessary legal consequences. Users also need to be vigilant about protecting their personal information, limiting the work-related details and private data they provide to ChatGPT-like applications.
Ha Trang