Google has launched a new tool to help people spot AI-generated images, after warning that AI could become a source of misinformation. Google Images users will now be able to see "additional context" alongside images, including when an image first appeared on Google and any related news coverage. The tool aims to give users information about fake images so they are not misled.
Google introduced the About This Image feature after a series of realistic-looking fake photos spread online, including an image of former US President Donald Trump being arrested.
Tools such as Midjourney, Stable Diffusion and DALL-E can be used to create hyper-realistic images of situations that never happened from just a few words or phrases. Some officials, including UK cybersecurity chief Sir Jeremy Fleming, have warned that these AI tools could have dire consequences.
One of the best-known examples of AI-generated imagery is the photo of the Pope wearing a white puffer jacket that went viral in March. The image's creator, Pablo Xavier, a construction worker from Chicago, said he made it while looking for something funny.
A photo of Mr Trump being arrested by police was also widely shared earlier this year. Eliot Higgins, founder of the investigative journalism website Bellingcat, created it using Midjourney.
Background information about an image helps people judge whether it is trustworthy, said Cory Dunton, a product manager for Google Search.
(According to The Guardian)