The DeepSeekMath-V2 model achieves remarkable performance in solving mathematical problems. Image: Decoder.
DeepSeek has just announced DeepSeekMath-V2, a model that has achieved gold-medal results in several prestigious mathematics competitions. The move puts the Chinese company in direct competition with leading US AI labs, which are at the forefront of reasoning capabilities in large language models.
According to the technical documentation, DeepSeekMath-V2 achieved gold-medal results at the 2025 International Mathematical Olympiad (IMO) and the 2024 Chinese Mathematical Olympiad (CMO). On the Putnam competition, the model scored 118 out of 120 points, far surpassing the 90 points achieved by the highest-scoring human contestant in history. These results demonstrate its ability to handle complex, competition-level problems.
The Chinese company explained that the limitation of previous AI models was that, while they could often arrive at correct answers, they could not present a valid line of reasoning. To overcome this, DeepSeekMath-V2 uses a multi-stage process of generating a solution, checking its arguments for correctness, and only then committing to a final result. This lets the system review and refine its own solutions as it works, avoiding cases where the conclusion is right but the reasoning behind it is flawed.
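In outline, a process like this resembles a generate-verify-refine loop. The sketch below is a toy illustration of that idea only; the function names, scoring scheme, and revision logic are hypothetical stand-ins, not DeepSeek's actual implementation or API.

```python
# Toy sketch of a generate-verify-refine loop, loosely modeled on the
# self-verification process described in the article. Everything here is a
# hypothetical stand-in for the real prover and verifier models.

def generate(problem, feedback=None):
    # Stand-in for the prover: returns a candidate proof, optionally
    # revised using verifier feedback from the previous round.
    revision = 0 if feedback is None else feedback["revision"] + 1
    return {"proof": f"proof of {problem}", "revision": revision}

def verify(candidate):
    # Stand-in for the verifier: it judges the reasoning, not just the
    # final answer. Here we pretend a proof is acceptable after two revisions.
    return {"ok": candidate["revision"] >= 2, "revision": candidate["revision"]}

def solve(problem, max_rounds=5):
    feedback = None
    for _ in range(max_rounds):
        candidate = generate(problem, feedback)
        feedback = verify(candidate)
        if feedback["ok"]:
            return candidate  # the reasoning passed verification
    return None  # no candidate survived verification

result = solve("sample problem")
```

The key design point is that the loop terminates on the verifier accepting the argument, not on the model producing an answer.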
Notably, DeepSeek's documentation makes no mention of external tools such as calculators or code interpreters. According to the test setup, all results were generated entirely in natural language.
DeepSeek emphasizes that this performance comes from the model criticizing and correcting its own solutions, not from reliance on specialized mathematical software. For difficult problems, the system scales up test-time computation by generating multiple candidate solutions in parallel, increasing the reliability of the final result.
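Generating several candidates and keeping the best-scored one is often called best-of-n sampling. The sketch below illustrates that general pattern under toy assumptions; the sampler and scorer are placeholder functions, not DeepSeek's system.

```python
# Hedged sketch of best-of-n sampling: run several independent solution
# attempts in parallel and keep the one a (stand-in) verifier scores highest.
from concurrent.futures import ThreadPoolExecutor
import random

def sample_solution(problem, seed):
    # Placeholder for one independent sampling pass of a model; the
    # "score" stands in for a verifier's confidence in the solution.
    rng = random.Random(seed)
    return {"seed": seed, "score": rng.random()}

def best_of_n(problem, n=8):
    # Run n sampling passes concurrently.
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda s: sample_solution(problem, s), range(n)))
    # Keep the highest-scoring candidate.
    return max(candidates, key=lambda c: c["score"])

best = best_of_n("sample problem", n=8)
```

More samples raise the chance that at least one candidate passes a strict verifier, which is the sense in which extra computation buys reliability.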
The release of DeepSeekMath-V2 comes shortly after OpenAI and Google DeepMind announced that their as-yet-unreleased models had also won gold medals at the IMO, an achievement previously beyond the reach of large language models. Information about those systems remains very limited, beyond an OpenAI researcher's statement that a more powerful mathematical model will appear in the coming months.
In contrast to the secrecy of the American companies, DeepSeek has chosen to publicly disclose the details of its methodology and technical architecture. The approach demonstrates transparency while also putting competitive pressure on the American AI ecosystem. According to The Economist, some American startups have considered switching to cheaper open-source models from China to cut costs.
Source: https://znews.vn/deepseek-pha-ky-luc-post1606990.html