Speaking to CNBC, an Nvidia spokesperson called DeepSeek’s R1 model “a remarkable AI advancement.” According to the spokesperson, DeepSeek’s work shows how new models can be created using a technique known as test-time scaling, in which a fully trained AI model improves its answers the more time it spends “reasoning” while making predictions or generating text or images.
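
The article does not describe DeepSeek’s method in detail, but one common, simple form of test-time scaling is to sample the same trained model many times and take a majority vote over its answers (often called self-consistency). The Python sketch below is only an illustration of that idea under toy assumptions: the noisy_model stand-in, its 60% per-sample accuracy, and the sampling budgets are invented for demonstration and are not DeepSeek’s or Nvidia’s implementation.

```python
import random
from collections import Counter

# Toy stand-in for a trained model: it answers "a + b" correctly with
# probability p_correct, otherwise returns a nearby wrong answer.
# (This function and its accuracy are illustrative assumptions, not the
# behavior of any real model.)
def noisy_model(a, b, p_correct=0.6, rng=random):
    if rng.random() < p_correct:
        return a + b
    return a + b + rng.choice([-2, -1, 1, 2])

# Test-time scaling via repeated sampling plus majority vote: spending more
# inference-time compute (more samples) on the SAME trained model yields a
# more reliable final answer, with no additional training.
def answer_with_budget(a, b, num_samples):
    votes = Counter(noisy_model(a, b) for _ in range(num_samples))
    return votes.most_common(1)[0][0]

def accuracy(num_samples, trials=2000):
    rng = random.Random(0)
    correct = 0
    for _ in range(trials):
        a, b = rng.randint(0, 50), rng.randint(0, 50)
        if answer_with_budget(a, b, num_samples) == a + b:
            correct += 1
    return correct / trials

if __name__ == "__main__":
    for budget in (1, 5, 25, 101):
        print(f"samples per question: {budget:4d}  accuracy: {accuracy(budget):.3f}")
```

The point of the toy run is that measured accuracy rises as more inference-time samples are spent on each question, even though the underlying “model” itself never changes.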

Chinese AI startup DeepSeek is attracting attention from the global tech world. Photo: Sipa

Nvidia’s comments come after DeepSeek released R1, an open-source reasoning model that has been found to match or outperform leading American models on some benchmarks. According to DeepSeek, R1 cost less than $6 million to train, a fraction of the billions of dollars Silicon Valley spends developing its AI models.

The DeepSeek “shock” sent tech stocks around the world tumbling. On January 27, Nvidia shares fell 17%, wiping out roughly $600 billion in market capitalization, the largest single-day loss in value in the history of American companies.

Another big name in the AI world, Yann LeCun, chief AI scientist at Meta, also praised DeepSeek, saying it demonstrated that “open source models are surpassing proprietary models.” “They come up with new ideas and build on other people’s work. Because their work is public and open source, everyone can benefit from it. That’s the power of open research and open source,” he wrote on Threads.

Nvidia’s statement can be read as suggesting that DeepSeek’s breakthrough will create more demand for the company’s graphics processing units (GPUs): inference requires a huge number of GPUs, the spokesperson explained. The spokesperson also asserted that the GPUs DeepSeek used fully comply with US export control regulations.

Analysts are questioning whether the billions of dollars that companies such as Microsoft, Google, and Meta have poured into AI infrastructure are being wasted if similar results can be achieved at a far lower cost.

Microsoft said in early January that it would spend $80 billion on its own AI infrastructure in fiscal year 2025, while Meta CEO Mark Zuckerberg recently said he plans to invest $60 billion to $65 billion in capital expenditures this year as part of a broader AI strategy. Zuckerberg is himself a proponent of open-source models.

In September 2024, he said his goal over the next 10 to 15 years is to build a new generation of open platforms and help open platforms “win”, leading to a more dynamic technology industry.

(According to Insider, CNBC)