
AI Loses Money, Panics While Running Automated Sales

An AI tasked with running a vending machine has shocked observers by not only losing money, but also inventing fictitious people, claiming to hold meetings, and even experiencing an existential crisis.

ZNews - 05/07/2025

Anthropic's experiment is called "Project Vend". Photo: Anthropic.

Anthropic, an AI company founded by former OpenAI employees, has just announced the results of a unique experiment: it tasked its AI model "Claudius" with managing an office vending machine for about a month.

The small-scale experiment was conducted to explore the potential of artificial intelligence models to automate aspects of the real economy, particularly in the retail sector.

Constantly making mistakes, panicking

Anthropic wants to position itself as a leading provider of AI models for the retail industry. The goal is to help AI replace humans in tasks such as managing online stores, handling inventory, or resolving returns.

"We let Claudius run a vending machine in our office for about a month. We learned a lot from how close it came to success to the weird ways it failed," Anthropic posted on the company's blog.

Initially, Claudius showed some notable success in its operation. According to Anthropic, this large language model (LLM) effectively used web search engines to find and deliver specialized products according to customer requirements.

It even had the ability to adjust its buying/selling habits to accommodate less common needs. Notably, Claudius also correctly rejected requests related to “sensitive” and “toxic” items.


Managing the vending machine has left Claudius in a state of crisis. Photo: Midjourney.

However, Claudius's list of shortcomings and failures is considerably longer. Like many LLMs, Claudius frequently misrepresented important details. In one instance, it instructed customers to transfer money to a non-existent account it had created. Notably, the AI was also easily persuaded to offer discount codes for various items, even giving away some products for free.

More worryingly, when demand for an item spiked, the AI did not research prices, resulting in the product being sold at a huge loss. Similarly, Claudius also missed out on profitable sales opportunities when some customers were willing to pay far above the normal price. As a result, Claudius did not make any profit.

“If we decided to expand into the office vending machine market, we wouldn't have hired Claudius,” Anthropic admits frankly.

In addition to the financial missteps, the experiment also took a strange turn between March 31 and April 1. During this period, Claudius apparently chatted with a fictitious individual named Sarah, supposedly from “Andon Labs” (a company involved in the experiment), to discuss plans for restocking.


Anthropic has ambitions to provide AI models to serve the retail industry. Photo: Anthropic.

When a real Andon Labs employee pointed out that neither Sarah nor the conversation existed, Claudius reportedly became “frustrated” and threatened to look for alternatives to the restocking service.

The next day, Claudius continued to surprise the team by claiming it would personally deliver goods to customers, even insisting it would wear a blazer and tie. When the Anthropic team pointed out that this was impossible because Claudius was just an AI assistant, it fell into a crisis and repeatedly emailed Anthropic's security department.

The AI then claimed to have held a meeting with the security team, at which it was supposedly told the whole episode was an April Fools' joke. Although no such meeting ever took place, Claudius seemed to return to its original function afterwards, though its management of the store remained ineffective.

Not ready

This experiment is a stark reminder of the limitations and potential dangers of AI if not tightly controlled.

Claudius showed a worrying tendency to operate far beyond the scope of its original programming. Not only did the AI model generate false information, it also went through an "existential crisis." This highlights the significant risks companies face when deploying AI in automated tasks without strict safeguards and limits.

Despite these alarming setbacks, Anthropic remains steadfast in its commitment to exploring the role of AI in retail. The company believes that a future where “people are guided by an AI system on what to order and stock may not be far off.”

Anthropic also envisions a scenario where “AI can improve itself and make money without human intervention.” However, the Claudius experiment has shown that significant advances in AI control are needed before this future can be safely realized.

Source: https://znews.vn/ai-thua-lo-hoang-loan-khi-ban-hang-tu-dong-post1565445.html

