
More and more AI-created or AI-edited images and videos are being posted on social networks to attract views - Photo: DUC THIEN
In the past, creating a fake photo meant hiring a Photoshop expert; with today's AI, anyone with basic skills can produce a set of fake photos in under five minutes and spread them everywhere.
When AI creates images, videos, and songs so real it's hard to believe
Ms. Hoan My (HCMC), a communications officer who browses Facebook every day, said that more and more content produced with AI tools is being posted by users.
"Many photos posted by friends or group members look so real that I couldn't tell AI was involved unless they said so. More and more videos, from animations and imagined scenes to short films, or clips about events actually happening in real life, are made with AI tools without my being able to tell," Ms. My shared.
Beyond images and videos, YouTube now hosts many songs performed or even composed by "AI singers". Many users are excited by the new experience and rush to share it with friends on social networks.
"I can't tell it's an AI singer unless it's announced. AI content is increasingly widespread in Vietnam's cyberspace. Ordinary users like me will probably soon be unable to tell which content comes from real people and which comes from, or has been altered by, AI," said Khanh Ha (HCMC).
Recently, many social networking platforms in Vietnam have featured videos and livestreams of people with disabilities selling products, drawing sympathy and shares from the online community.
However, the online community later exposed many of these cases as using AI to make ordinary people appear disabled. The trick exploits viewers' compassion to sell goods or commit fraud.
Many technology experts note that social networks have recently seen a growing number of accounts posting sales videos featuring people who appear disabled, sick, or disadvantaged in order to appeal for sympathy. In reality, much of this content is created or edited with AI tools, turning ordinary people into seemingly disabled ones for profit or fraud.
Speaking to Tuoi Tre, Mr. Milko Radotic, Vice President for the Asia-Pacific region at iProov, the world's leading biometric authentication technology provider, warned: "Our research shows that 99.9% of participants cannot recognize deepfakes. Even today's advanced detection tools face many difficulties."
Strengthening digital users' resilience
Mr. Dang Huu Son, Deputy Director of the AIOV Institute for Applied Research and Human Resource Development and Founder and CEO of LovinBot AI, said that since Google launched the Nano Banana and Veo 3 AI models, ByteDance launched Seedream 4, and earlier OpenAI launched GPT Image, the AI market has been extremely bustling but has also brought many consequences.
"Technology has never been as accessible as it is today. These are all extremely powerful AI models that can 'copy faces' with roughly 90%, even 99%, fidelity to the original photo," Mr. Son commented.
In the past, creating a fake photo meant hiring a Photoshop expert. With today's AI, in under five minutes and with only basic skills, a set of fake photos or a spliced fake video dressed up with news-style effects can spread at dizzying speed.
"Fake news repeated many times, at high density and easy to access, will convince a large share of the people exposed to it daily that it is true. Solutions are therefore needed to limit these problems," Mr. Son noted.
According to Mr. Trinh Nguyen Thien Phuoc, Technology Director of Gianty Vietnam, distinguishing AI-generated content requires three layers: (1) user verification skills (along with a set of golden rules); (2) technical tools such as Content Credentials (C2PA), watermark-based identification (SynthID, AudioSeal), and deepfake detectors; (3) a coordination process among related parties such as platforms, network operators, and banks.
In particular, the golden rules must be applied very quickly, within 30 to 60 seconds: do not click on unfamiliar links, and go to the official app or website to verify; stop immediately when you encounter keywords that pressure you to act (OTP, money transfer, urgency).
At the same time, users should verify with two independent sources before believing anything; check the date, time, and context (old news is often disguised as new); call back on an official number they have looked up themselves rather than one given to them; and, for images and videos, run a reverse search for related results.
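Among the technical tools Mr. Phuoc lists, Content Credentials (C2PA) work by embedding signed provenance metadata in the media file itself. As a minimal illustration, assuming only the publicly documented fact that C2PA manifests are stored in JUMBF boxes labeled "c2pa", a rough byte-scan heuristic in Python might look like this (the function name and approach are ours, not the article's, and a real check must parse and cryptographically validate the manifest with proper C2PA tooling):

```python
# Hypothetical helper, not from the article: a rough heuristic for spotting
# embedded C2PA Content Credentials. C2PA manifests are carried in JUMBF
# boxes labeled "c2pa"; scanning a file's raw bytes for those labels only
# hints at their presence. It cannot prove authenticity: real verification
# must parse the manifest and validate its cryptographic signatures.

def has_content_credentials(path: str) -> bool:
    """Return True if the file's raw bytes contain JUMBF/C2PA labels."""
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data
```

Note that the absence of such markers proves nothing either: most media on social networks is re-encoded on upload, which often strips this metadata entirely.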
Many experts believe the most fundamental answer to the wave of AI-generated fake content is raising public awareness. Rather than focusing only on technical measures, training programs, competitions, and simulations of real-life situations are needed to help people change how they consume information.
For young people, especially students, combining online contests and media campaigns with celebrities or KOLs can create natural appeal, letting them learn through experience while mastering the skills to identify fake news and fake videos. The elderly, especially in rural areas, need more familiar and intuitive formats such as illustrated leaflets, instructional videos, or screenings at cultural houses and community activities.
"An approach tailored to each group will help build a solid, long-term foundation of awareness, instead of stopping at short-term activities. For example, some organizations have developed Chrome browser extensions that help users verify news while browsing the web or social networks," Mr. Dang Huu Son shared.
Using AI to detect AI fakes
Recently, phone maker Honor integrated an AI Deepfake Detection feature into its newly launched phone line. The feature can analyze images and voices within three seconds and warn of fakes directly in apps such as Messenger, Zalo, Viber, and Google Meet, acting as a "digital shield" that protects users from high-tech scams.
Earlier, in September 2025, MoMo announced a partnership with iProov to deploy Dynamic Liveness technology, which verifies that users are real, match their claimed identity, and are present at the time of a transaction, preventing fraud that uses deepfakes or fake videos and images.
In high-risk sectors such as finance, this level of security is crucial for preventing fraud, blocking unauthorized access, and maintaining user trust in the platform.
Source: https://tuoitre.vn/tu-hinh-anh-video-den-ca-khuc-ai-can-trong-con-loc-noi-dung-ai-tren-mang-xa-hoi-20251109232702736.htm





