
Using AI to manipulate sensitive images for blackmail purposes.

From a staged photo posted on social media, malicious actors can use artificial intelligence (AI) to create suggestive or even nude images, or superimpose realistic faces onto sensitive videos to defame or extort the victim.

Báo Tuổi Trẻ - 17/03/2025


Malicious actors can use AI to manipulate sensitive images and videos to defame or extort others - Photo: AI-generated illustration.

Previously, superimposing one person's face onto another's body meant labor-intensive work in Photoshop or other specialized photo-editing software – and the results were usually easy to spot with the naked eye.

Now, AI tools let anyone do the same "in a few moments," with results that are hard to detect with the naked eye.

Face-swapping in sex videos for blackmail.

Due to work connections, Mr. H. (the director of a business in Ho Chi Minh City) frequently interacts on social media.

On one occasion, a young woman befriended him to inquire about his work. After some time chatting and sharing stories about work and personal life, they became quite close.

There were also many text messages between the two, filled with affectionate words, along with pictures sent back and forth, and video calls so they could see each other's faces.

One day, Mr. H. received a call from someone claiming to be the girl's husband. After a "warning" session, the "husband" demanded that Mr. H. transfer compensation money or risk having screenshots of their chat logs and sex videos sent to his relatives and business partners.

The "husband" then showed Mr. H. a sex video of him with the woman, along with a recording of their video call and screenshots of their intimate text messages...

"I've never met that girl, so I'm sure the sex video is edited," Mr. H. asserted. However, the girl's face in the sex video and the girl in his video call were identical, along with screenshots of romantic messages, forcing Mr. H. to repeatedly transfer money to the "husband" to protect his reputation.

According to Tuổi Trẻ's investigation, Mr. H. was the victim of an organized scam that followed a meticulously orchestrated plan. He was fooled by deepfake technology twice: first in the video call, then in the sex video.

Notably, in the sex video, Mr. H.'s face was superimposed so skillfully that viewers found it hard to believe the video was fake. An AI expert confirmed to Tuổi Trẻ that the perpetrators used a new AI tool capable of superimposing faces onto videos almost seamlessly... making them look incredibly realistic.

There are countless AI face-swapping tools.

There are now many AI-powered tools with features such as creating any human figure with a desired face, or superimposing a desired face onto another person's body.

Among these tools are those with features that serve malicious purposes, such as transforming ordinary photos into nude images by "stripping" the person in the photo, or replacing a person's face in videos with a desired face...

Most of these AI tools offer a trial period; a paid version provides full functionality and very fast transformation times.

For example, with the feature that swaps faces onto other people's bodies, users simply upload two corresponding photos and wait a few seconds for the result.

Or, with the feature that creates nude images from ordinary photos, the AI tools finish in just tens of seconds – fast enough to shock viewers.

Even replacing a face in a sex video takes the AI tools less than a minute for clips that are only a few seconds long.

Speaking to Tuổi Trẻ, Mr. Nguyen Kim Tho, head of the research and development team at the cybersecurity company VNetwork, explained that the term "deepfake" refers to fake videos and images created with AI to deceive viewers into believing they are real.

Deepfake technology uses deep learning models such as neural networks to analyze a person's facial and voice data and create fake content that looks exactly like that person.

"Thanks to powerful algorithms, malicious actors can superimpose victims' faces onto sensitive videos (e.g., sex videos, pornography) or edit images to create fake nude photos."

"Deepfake software and applications are now very common on the Internet, even including mobile apps, open-source software, or free online services – making it easy for anyone to access tools to create fake videos and photos, so people need to be vigilant," Mr. Tho said.

Stay away from "sex chat".

Many scams begin with befriending people online and then luring victims into "sex chat" or sending sensitive photos. Therefore, users should absolutely not send private images or videos to people they only know online, regardless of promises or threats.

Always remember that any content you share (even via private messages) can be recorded (screenshots, videos) and then edited and used against you.

If you are on a video call, be wary of strangers who suggest sensitive content – it could be a fake screen, or they could be recording you.

Cybersecurity experts advise users to absolutely avoid trusting anyone based solely on online interactions and to verify the identity and intentions of the other party before sharing anything sensitive.

In addition, if you receive a threatening message or call about the release of "nude" photos for blackmail, you don't need to be afraid or rush to transfer money.

According to experts, paying money doesn't guarantee the perpetrators will delete the video; they may continue to demand more or still upload the content online.

Instead of complying with the criminals' demands, users should gather evidence (messages, phone numbers, contact accounts, threatening content, etc.) and immediately report it to the nearest police station or report the incident through the Ministry of Public Security's VNeID application for timely assistance.

How can deepfakes be identified?

According to Mr. Nguyen Kim Tho, distinguishing between real and deepfake images and videos is becoming increasingly difficult, but there are still some identifying signs and tools to help.

Users can check manually, since deepfake content sometimes contains visual and audio anomalies.

For example, a composite image might reveal noise and color discrepancies between the superimposed face and body. A fake video might have mismatched audio and video (lip movements not matching speech) or facial expressions that appear stiff and unnatural.

Previously, some deepfake videos showed characters not blinking, or the lighting and shadows on their faces didn't match the background – these were signs that the video had been edited.

Although technology is improving (for example, new deepfakes have added realistic blinking movements), discerning viewers can still spot some illogical details in videos and photos.
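One of those details – blink frequency – can even be checked programmatically. Below is a hedged sketch using MediaPipe Face Mesh; the landmark indices and the 0.2 eye-aspect-ratio threshold follow common tutorials and are assumptions that would need calibration in practice, not a standard.

```python
# A hedged sketch of the blink check described above: track the eye aspect
# ratio (EAR) per frame with MediaPipe Face Mesh and count dips, which
# approximate blinks. Landmark indices and the 0.2 threshold follow common
# tutorials and are assumptions; "clip.mp4" is a placeholder path.
# (pip install mediapipe opencv-python)
import math
import cv2
import mediapipe as mp

EYE = [33, 160, 158, 133, 153, 144]  # p1..p6 around one eye (common choice)

def ear(p):
    """Eye aspect ratio: (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    d = lambda a, b: math.dist((a.x, a.y), (b.x, b.y))
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2 * d(p[0], p[3]))

blinks, closed = 0, False
cap = cv2.VideoCapture("clip.mp4")
with mp.solutions.face_mesh.FaceMesh(max_num_faces=1) as mesh:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if not result.multi_face_landmarks:
            continue
        lm = result.multi_face_landmarks[0].landmark
        now_closed = ear([lm[i] for i in EYE]) < 0.2
        if now_closed and not closed:
            blinks += 1              # count each open->closed transition
        closed = now_closed
cap.release()
print(f"blinks counted: {blinks} (people typically blink ~15-20 times/min)")
```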

Researchers are now developing various algorithms to automatically detect traces of deepfakes.

AI-generated content often leaves a unique "digital fingerprint" on each pixel that the machine can recognize.
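As an illustration of what such a "fingerprint" can look like, one heuristic from the research literature compares low- and high-frequency energy in an image's Fourier spectrum, since generative up-sampling often leaves periodic high-frequency artifacts. The sketch below is illustrative only, not a reliable detector; a real system would learn its threshold from labeled data.

```python
# An illustrative (not reliable!) frequency-domain check: many generative
# models leave periodic up-sampling artifacts in an image's high
# frequencies. We compare high- vs low-frequency energy in the Fourier
# spectrum. "photo.jpg" is a placeholder path.
# (pip install numpy opencv-python)
import cv2
import numpy as np

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)

# 2-D FFT magnitude, shifted so low frequencies sit at the center.
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))

# Azimuthal average: mean power at each integer radius from the center.
h, w = spectrum.shape
y, x = np.indices(spectrum.shape)
r = np.hypot(y - h // 2, x - w // 2).astype(int)
counts = np.maximum(np.bincount(r.ravel()), 1)
profile = np.bincount(r.ravel(), weights=spectrum.ravel()) / counts

# Crude score: high-frequency energy relative to low-frequency energy.
low = profile[: len(profile) // 4].mean()
high = profile[3 * len(profile) // 4:].mean()
print(f"high/low frequency energy ratio: {high / low:.6f}")
```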

For example, Intel has introduced FakeCatcher, billed as the first real-time deepfake detector, capable of analyzing video and determining whether the people in it are real or AI-generated.

Additionally, some websites allow users to upload videos and photos to check their authenticity (for example, Deepware, Sensity AI...). These systems are constantly updated to keep up with new deepfake techniques.

In particular, users can check the source and context when encountering a sensitive video or image, as many deepfake videos are spread through fake accounts or bots on social media.

"If sensitive content about a person originates from an unofficial source or anonymous account, its authenticity should be questioned. It might be worth trying to contact the person featured in the content directly to verify if they actually did it."

"Furthermore, some projects are promoting the embedding of authentication information (digital signatures) into images and videos right from creation, helping to distinguish original content from edited content. In the future, users may be able to rely on these authentication markers to identify trustworthy content," Mr. Tho said.
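The provenance idea Mr. Tho describes can be sketched in a few lines: sign the image bytes when the file is created, distribute the signature with the file, and let anyone verify it later. Real provenance standards such as C2PA embed far richer, tamper-evident metadata; the snippet below, using the Python cryptography package with a placeholder file path, only shows the core sign/verify step.

```python
# A minimal sketch of the sign-at-creation idea: sign the image bytes,
# ship the signature alongside the file, verify later. Real provenance
# standards (e.g. C2PA) embed far richer, tamper-evident metadata.
# "photo.jpg" is a placeholder path. (pip install cryptography)
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()  # in practice: the camera/app's key
public_key = signing_key.public_key()

with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

signature = signing_key.sign(image_bytes)   # distributed with the image

# Any later edit to the file invalidates the signature.
try:
    public_key.verify(signature, image_bytes)
    print("Signature valid: bytes unchanged since signing.")
except InvalidSignature:
    print("Signature invalid: the image was modified after signing.")
```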

5 tips for self-protection

1. Limit sharing sensitive and private photos and videos (especially nude photos, family photos, and photos of children).

2. Set your social media accounts to private (only share with trusted people).

3. Avoid providing too much personal information (full name, phone number, address, etc.) on public social media.

4. Set strong passwords and enable two-factor authentication for your accounts to prevent hacking (a short sketch of how the one-time codes work follows this list).

5. Regularly search for your name and image on Google (or use reverse image search tools like Google Images or TinEye) to see whether any of your photos have been posted illegally, and promptly request their removal.
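For tip 4, the sketch below shows how the time-based one-time passwords (TOTP) behind most authenticator apps work, using the pyotp library; the secret shown is generated on the fly purely for illustration.

```python
# How the one-time codes in authenticator apps (tip 4) work: a shared
# secret plus the current time yields a short code that rotates every
# 30 seconds. The secret below is generated on the fly purely for
# illustration. (pip install pyotp)
import pyotp

secret = pyotp.random_base32()   # shared once at enrollment, e.g. via QR code
totp = pyotp.TOTP(secret)

code = totp.now()                # what the authenticator app displays
print("current one-time code:", code)

# The server recomputes the code from the same secret and compares.
print("verifies:", totp.verify(code))  # True within the current time window
```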


Source: https://tuoitre.vn/dung-ai-ghep-hinh-anh-nhay-cam-de-tong-tien-20250317075948373.htm

