A photo of an explosion near the Pentagon, created by artificial intelligence (AI), was shared on American social media on May 22, causing the stock market to plummet briefly and raising concerns about fake news originating from AI.
AI-generated fake photo of an explosion near the Pentagon.
Nick Waters of Bellingcat, an online fact-checking group, was quick to point out notable problems with the photo. First, no eyewitnesses have come forward to corroborate the incident. The building in the photo also does not look like the Pentagon. Unusual details, such as the prominent lampposts and the black columns protruding from the sidewalk, are further signs that the photo is not real.
Many artificial intelligence (AI) image-generation tools, such as Midjourney, DALL-E 2, and Stable Diffusion, can produce lifelike images. However, when these tools lack data, they fill in the gaps on their own, and those invented details are often where the flaws appear.
Al Jazeera has suggested several ways to distinguish AI-generated photos of major events from real ones when they appear online:
- In the event of an explosion or another major event, factual reports will usually come from multiple people and from multiple perspectives.
- Who is posting the content? Where are they located, and where is the event? What accounts do they follow, and who follows them? Can you reach out or talk to them?
- Analyze the image and its surroundings: Look for clues in the image, such as nearby locations and traffic signs, to help determine where or when the event might have taken place.
- For images of people, pay attention to the eyes, hands, and general posture. AI-generated videos that mimic humans, known as deepfakes, tend to have problems with blinking because most training datasets don't contain faces with closed eyes. Hands also often fail to grasp objects properly.
- In AI-generated images, people's skin is often unnaturally smooth, and even their hair and teeth can look surreally perfect.