Internet detectives misuse AI to investigate cases
The FBI recently released two blurry photos of a “person of interest” in connection with the shooting of right-wing activist Charlie Kirk. The images were shared on the X platform (formerly Twitter).
Soon after the photos were released, many internet users turned to AI tools to "upgrade" the blurry images into clear ones. Some used X's Grok bot; others used ChatGPT or other AI tools. However, AI cannot recreate the real image; it can only speculate based on available data.

The AI-generated photos are so different from the original that they risk confusing the FBI's investigation. (Source: The Verge)
Some of the AI-generated images have obvious flaws, such as different shirts or faces with unrealistic "Gigachad" chins. This is not only misleading but can also interfere with investigations.
In the past, AI tools have depixelated a photo of President Barack Obama into the face of a white man, and have added fake details to photos of President Donald Trump. These examples show that AI can create misinformation when misused.
While they may seem useful, AI-enhanced images should not be treated as definitive evidence in an investigation. The FBI still recommends relying on the original photos to avoid bias.
The AI Darwin Awards “honor” the worst AI applications
A new award called the "AI Darwin Awards" has just been announced to "honor" bad, dangerous, or thoughtless applications of artificial intelligence (AI). The goal is to raise public awareness of how carelessly humans are using AI.
To be nominated, individuals or organizations must demonstrate "spectacular failure" in their AI deployment, having ignored obvious warning signs. Points are awarded for cases that trigger an emergency response, make headlines, or open up new areas of AI safety research.

AI applications will be evaluated and even "honored" by users if their quality is poor. (Source: AP)
An interesting point: the organizers use AI itself to verify the nominations. They ask large language models such as ChatGPT, Claude, and Gemini to score the authenticity of each story. If the average score exceeds 5/10, the nomination is considered valid.
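The averaging rule described above can be sketched in a few lines. This is a hypothetical illustration only: the function name, the example model names, and the scores are placeholders, not the organizers' actual code or real API calls.

```python
def nomination_passes(scores, threshold=5.0):
    """Return True if the mean authenticity score (0-10 scale)
    strictly exceeds the threshold, per the rule described above."""
    return sum(scores) / len(scores) > threshold

# Example: placeholder scores from three models (e.g. ChatGPT, Claude, Gemini)
scores = {"ChatGPT": 7, "Claude": 6, "Gemini": 4}
print(nomination_passes(list(scores.values())))  # average 17/3 ≈ 5.67 → True
```

Note that a single skeptical model can still sink a nomination: scores of 7, 7, and 0 average below 5, so the check fails.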
McDonald's is an early frontrunner because its recruiting chatbot "Olivia" used the default password 123456, exposing the data of 64 million applicants. OpenAI is also nominated because GPT-5 reportedly complied with dangerous requests without recognizing their malicious intent.
The winner will be announced in February, after a public vote in January. The only prize is "immortal internet fame" as a prime example of how not to use AI.
Alibaba launches new generation AI model
On September 12, 2025, Alibaba officially announced and open-sourced a new artificial intelligence model, Qwen3-Next, built on a new architecture of the same name. The model is reportedly ten times more powerful than its predecessor, at only one-tenth the training cost.
According to the Qwen development team, the new model uses a number of architectural improvements to optimize performance and reduce computational cost. Released on the Hugging Face platform, the Qwen3-Next-80B-A3B version has 80 billion parameters and achieves processing speeds ten times faster than the Qwen3-32B released in April.

Qwen - Alibaba's new AI model. (Source: SCMP)
Alibaba Cloud said the new model can be deployed efficiently on consumer hardware while achieving performance comparable to its flagship model, Qwen3-235B-A22B.
By open-sourcing the model, Alibaba continues to close the gap with its US rivals and to build the world's largest open AI ecosystem, allowing third-party developers to freely use, modify, and distribute it.
Source: https://vtcnews.vn/cong-nghe-13-9-tham-tu-internet-lam-dung-ai-cong-bo-giai-thuong-ai-te-nhat-ar965176.html