The omnimodal model, called OmniHuman-1, can create dynamic videos of characters speaking, singing, and moving with "superior quality compared to current video creation methods," according to the ByteDance development team.

AI technology that generates realistic images, videos, and audio, commonly known as deepfakes, is increasingly being used in both scams and entertainment.

ByteDance is currently one of the hottest AI companies in China, and its Doubao app is the most popular AI app among mainland users.

Although OmniHuman-1 has not yet been released to the public, sample videos have quickly spread online.

One standout demo was a 23-second video depicting Albert Einstein giving a speech. TechCrunch described the model's output as "shockingly amazing" and among "the most realistic deepfake videos to date."

The developers say that OmniHuman-1 needs only a single reference image plus an audio track, such as speech or singing, to create a video of any length.

The output video frame rate can be adjusted, as can the "body proportions" of the characters within it.

ByteDance is currently one of the most prominent AI companies in China. Photo: TechCrunch

Furthermore, the model, trained on 19,000 hours of video content from undisclosed sources, is also capable of editing existing videos, even altering human hand and foot movements convincingly.

However, ByteDance also admitted that OmniHuman-1 is not perfect: it still struggles with certain poses, and "low-quality reference images" will not produce the best video.

ByteDance's new AI model demonstrates China's progress despite Washington's efforts to restrict technology exports.

Concerns

Last year, political deepfakes spread globally. In Moldova, deepfake videos depicted the country's president, Maia Sandu, announcing her resignation.

And in South Africa, a deepfake of rapper Eminem endorsing an opposition party went viral ahead of the country's elections.

Deepfakes are also increasingly being used to commit financial crimes. Consumers are being scammed by deepfakes of celebrities pitching fake investment opportunities, while companies are losing millions of dollars to impersonators of senior executives.

According to Deloitte, AI-generated content contributed to over $12 billion in fraud losses in 2023 and could reach $40 billion in the US by 2027.

Last February, hundreds of people in the AI community signed a letter calling for stricter regulations on deepfakes. While there are no federal laws criminalizing deepfakes in the US, more than 10 states have enacted laws against AI-powered impersonation.

However, detecting deepfakes is not easy. Although some social media platforms and search engines have implemented measures to limit their spread, the amount of deepfake content online is still growing at an alarming rate.

In a May 2024 survey by identity verification company Jumio, 60% of participants reported encountering a deepfake in the past year; 72% of respondents said they worried about being tricked by deepfakes daily, while a majority supported passing legislation to address the proliferation of AI-generated fake videos.
