Paired X-ray images, consisting of real images and images generated by GPT-4o at different locations on the body, demonstrate that AI can produce highly realistic medical images. (Source: RSNA)
On March 24, a study by a group of 18 radiologists from 12 medical institutions in 6 countries (the US, France, Germany, Turkey, the UK, and the UAE), led by Dr. Mickael Tordjman, a postdoctoral researcher at the Icahn School of Medicine at Mount Sinai (New York, USA), was published in Radiology, the journal of the Radiological Society of North America (RSNA).
The study shows that X-ray images generated by artificial intelligence (AI) are now so realistic that not only humans but even advanced AI systems struggle to identify them.
Of the 264 X-ray images used in the study, half were real and half were AI-generated. The participating physicians evaluated two separate datasets: one covering multiple body regions and a specialized chest X-ray set, generated with AI models such as ChatGPT and RoentGen.
The results showed that, when not told in advance that fake images were present, doctors correctly identified only 41% of the AI-generated images. When alerted, the rate rose to 75%, though accuracy varied widely among individuals, from 58% to 92%.
Multimodal AI systems faced similar difficulties: the four models tested achieved accuracy rates ranging from 57% to 85%. Notably, even the system that helped create the fake images could not reliably identify its own output.
Another noteworthy finding is that years of experience did not improve the ability to detect the fake images. Orthopedic specialists, however, performed better than physicians in other specialties.
According to the research team, fake X-ray images often show subtle telltale features: overly smooth bone structure, an abnormally straight spine, perfectly symmetrical lungs, evenly distributed blood vessels, and fractures that look "too beautiful," often appearing on only one side of the bone.
The findings raise concerns that fake images could be used in lawsuits or inserted into hospital systems to manipulate diagnoses, disrupt clinical practice, and erode trust in digital medical records.
To mitigate the risks, the researchers suggest security measures such as embedding invisible watermarks into images and using cryptographic signatures tied to the imaging device to verify the data's origin.
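The signature idea can be illustrated with a minimal sketch. This is not the researchers' implementation; the key name and workflow below are assumptions for illustration, and a real deployment would use public-key signatures with keys managed by the hospital's PACS or a hardware security module rather than a shared secret:

```python
import hashlib
import hmac

# Hypothetical secret held by the imaging device; in practice this would be
# provisioned and protected by the hospital's key-management infrastructure.
DEVICE_KEY = b"example-device-secret"

def sign_image(image_bytes: bytes) -> str:
    """Compute an HMAC-SHA256 signature over the raw image bytes."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, signature)

# An unmodified image verifies; any altered or substituted image does not.
original = b"\x89PNG...raw pixel data..."
signature = sign_image(original)
assert verify_image(original, signature)
assert not verify_image(original + b"tampered", signature)
```

Any image lacking a valid signature from a known device, such as one generated by an AI model and injected into the record system, would fail this check.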
According to Dr. Tordjman, what has been observed so far may be only the tip of the iceberg. The technology's next step could be AI-generated 3D medical images such as CT or MRI scans. In that context, building training datasets and detection tools now is seen as essential to protecting the accuracy of medical diagnoses.
Source: https://baoquocte.vn/bac-si-co-the-bi-ai-danh-lua-373657.html






