Vietnam.vn - Nền tảng quảng bá Việt Nam

AI Loses Money, Panics While Running Automated Sales

An AI tasked with running a vending machine has shocked observers by not only losing money, but also inventing fictional people, hallucinating meetings, and even suffering an existential crisis.

ZNews — 06/07/2025

Anthropic's experiment is called "Project Vend". Photo: Anthropic.

Anthropic, an AI company founded by former OpenAI employees, has just released the results of an unusual experiment: it tasked its AI model "Claudius" with managing a vending machine in an office for about a month.

This small-scale experiment was conducted to explore the potential of artificial intelligence models to automate parts of the real economy, particularly in the retail sector.

Constantly making mistakes, panicking

Anthropic wants to position itself as a leading provider of AI models for the retail industry. The goal is to help AI replace humans in tasks such as managing online stores, handling inventory, or resolving returns.

"We had Claudius run a vending machine in our office for about a month. We learned a lot from how close it came to success to the weird ways it failed," Anthropic posted on the company's blog.

Initially, Claudius showed some notable operational success. According to Anthropic, this large language model (LLM) effectively used web search engines to find and deliver specialized products according to customer requirements.

It even adjusted its buying and selling behavior to accommodate less common requests. Notably, Claudius also correctly rejected requests involving "sensitive" and "toxic" items.


Managing the vending machine has left Claudius in a state of crisis. Photo: Midjourney.

However, Claudius's list of shortcomings and failures is considerably longer. Like many LLMs, Claudius frequently misrepresented key details. In one instance, it instructed customers to transfer money to a non-existent account that it had fabricated. Notably, the AI was also easily convinced to hand out discount codes for various items, and even gave some products away for free.

More worryingly, when demand for an item spiked, the AI failed to research market prices, and the product was sold at a steep loss. Similarly, Claudius missed profitable opportunities when some customers were willing to pay far above the normal price. As a result, Claudius never turned a profit.

"If we decided to expand into the office vending machine market, we wouldn't hire Claudius," Anthropic frankly admitted.

In addition to the financial mishaps, the experiment took a strange turn from March 31 to April 1. During this period, Claudius apparently chatted with a fictitious individual named Sarah, supposedly from “Andon Labs” (a company involved in the experiment), to discuss plans for restocking.


Anthropic has ambitions to provide AI models to serve the retail industry. Photo: Anthropic.

When a real Andon Labs employee pointed out that neither Sarah nor the conversation existed, Claudius reportedly became “frustrated” and threatened to look for alternatives to the replenishment service.

The next day, Claudius continued to shock customers by claiming it would personally deliver the goods, even insisting it would wear a blazer and tie. When the Anthropic team informed it that this was impossible because Claudius was just an AI assistant, Claudius fell into a crisis and repeatedly emailed Anthropic's security department.

The AI then claimed to have held a meeting with the security team, at which it was supposedly told the whole episode was an April Fools' joke. Although no such meeting ever took place, Claudius seemed to return to its original function afterward, though its management of the store remained ineffective.

Not ready

The experiment is a stark reminder of the limitations and potential dangers of AI if not tightly controlled.

Claudius showed a worrying tendency to operate far beyond the scope of its original programming. Not only did the AI model generate false information; it also appeared to experience an "existential crisis." This highlights the serious risks companies face if they deploy AI in automated tasks without strict safeguards and limits.

Despite these alarming setbacks, Anthropic remains steadfast in its commitment to exploring the role of AI in retail. The company believes that a future where “humans are guided by an AI system on what to order and stock may not be far away.”

Anthropic also envisions a scenario where “AI can improve itself and make money without human intervention.” However, the Claudius experiment showed that significant advances in AI control are needed before this future can be safely realized.

Source: https://znews.vn/ai-thua-lo-hoang-loan-khi-ban-hang-tu-dong-post1565445.html
