Internet users use AI to create images of Charlie Kirk's murder suspect. Photo: CBC.
The US Federal Bureau of Investigation (FBI) has released two photos of the suspect in the shooting that killed activist Charlie Kirk. Because the photos are quite blurry, online users quickly responded with "upgraded" versions created using artificial intelligence (AI).
These images look sharper and clearer than the originals, but they are really just a product of guesswork. AI cannot "discover" hidden details in a blurry photo. Instead, the tool predicts, filling in the gaps based on patterns in its training data, which can mean inventing details that do not exist in reality.
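To see why, note that blurring irreversibly discards information: many different sharp images collapse to the same blurry one, so no tool can tell which was the real original. The short Python sketch below (illustrative only, using simple average-pooling as a stand-in for blur) constructs two images with visibly different fine detail that become pixel-identical after downscaling:

```python
import numpy as np

rng = np.random.default_rng(0)

def downscale(img, factor=4):
    """Average-pool blocks of pixels: a crude model of blur/low resolution."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Two hypothetical sharp images with different fine detail...
a = rng.random((64, 64))
b = a + (rng.random((64, 64)) - 0.5) * 0.2
# ...adjusted so they collapse to exactly the same low-resolution image:
b -= np.repeat(np.repeat(downscale(b) - downscale(a), 4, axis=0), 4, axis=1)

print(np.allclose(downscale(a), downscale(b)))  # True: same blurry photo
print(np.abs(a - b).max() > 0.01)               # True: different sharp detail
```

An "enhancer" given only the blurry version has no way of knowing which of the two sharp images, or countless others like them, was the real one; it can only guess.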
AI-enhanced photos of the suspect in the Charlie Kirk shooting; no two faces are the same. Photo: Grok.
Several enhanced versions of the images were posted in replies to the FBI's post. Some appear to have been created with the Grok tool on X, while others are believed to have come from platforms like ChatGPT and Gemini. The accuracy of these renderings varies greatly, however. Some even show obvious errors, such as changing the color of the suspect's shirt or giving his face an exaggerated chin.
These images were ostensibly shared to "help" the community identify the suspect. In practice, they also served to attract attention and rack up likes and shares on social networks.
Experts warn that these AI images are unlikely to be any more useful than the FBI's originals. In past cases, AI-enhanced images have produced notoriously misleading results.
In one example, a low-resolution photo of former President Barack Obama was "upscaled" by AI into the face of a white man. In another, AI added a non-existent tumor to a photo of President Donald Trump's head.
These failures show that AI extrapolates from patterns in its training data rather than recovering information the blurry image no longer contains. The technology may be useful in some contexts, but AI-generated images should never be treated as definitive evidence in criminal investigations.
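Modern generative upscalers make this sampling behavior easy to observe. Here is a minimal sketch, assuming the open-source `diffusers` library and the public `stabilityai/stable-diffusion-x4-upscaler` model (illustrative only; the article does not say which tools the posters used, and the input filename is hypothetical). Running the same blurry photo through the model with two different random seeds yields two different "sharp" faces, because the added detail is generated, not recovered:

```python
import torch
from diffusers import StableDiffusionUpscalePipeline
from PIL import Image

# Illustrative model choice; not the tool used in the posts described above.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("blurry_photo.jpg").convert("RGB")  # hypothetical file

# Two different random seeds produce two different "enhanced" images from
# the SAME input: the extra detail is sampled, not recovered.
for seed in (0, 1):
    out = pipe(
        prompt="a photo of a person",
        image=low_res,
        generator=torch.Generator("cuda").manual_seed(seed),
    ).images[0]
    out.save(f"enhanced_seed{seed}.png")
```

Comparing `enhanced_seed0.png` and `enhanced_seed1.png` side by side makes the point visually: both are plausible, both are sharp, and they cannot both be the truth.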
In its announcement, the FBI released only the original photos and asked the public to come forward if they could identify the person pictured. Nevertheless, the wave of user-"enhanced" photos on social networks quickly attracted attention, raising questions about the responsible use of AI tools in sensitive cases involving security and human life.
As artificial intelligence becomes more widely used, this incident highlights a problem: AI can create images that look convincingly real, but looking real does not make them reliable. In a criminal investigation especially, inaccurate data can lead to serious consequences.
Source: https://znews.vn/hinh-anh-nghi-pham-am-sat-charlie-kirk-bi-lam-gia-post1584664.html