Two of the world's leading technology companies, Alphabet (Google's parent company) and OpenAI, have announced that their artificial intelligence (AI) models demonstrated superior mathematical problem-solving capabilities, achieving scores equivalent to a gold medal at the 2025 International Mathematical Olympiad.
This marks a significant milestone for AI's mathematical reasoning capabilities. Both companies' AI systems correctly solved 5 of the 6 problems on this year's exam, surpassing the threshold for a gold medal in the competition.

AI is making leaps and bounds in its mathematical thinking capabilities (Image: iStock).
The International Mathematical Olympiad is widely regarded as the most prestigious mathematics competition for high school students. This year's edition brought together 630 contestants from more than 100 countries and territories.
The rapid improvement in AI's problem-solving abilities has led experts to believe that AI is getting closer to tackling long-standing open problems in mathematics.
Junehyuk Jung, a mathematics professor at Brown University (USA) and a researcher at Google DeepMind (Google's AI research unit), said: "When AI can solve complex reasoning problems using natural language, instead of relying on programming languages as before, it opens up the possibility of real collaboration between AI and mathematicians."
Meanwhile, OpenAI optimized its model by increasing the computational power devoted to reasoning at test time. This allows the AI to think longer and more deeply, and to explore multiple lines of reasoning in parallel. OpenAI researcher Noam Brown said that building this type of model is "very expensive," though he did not disclose a specific figure.
According to Brown, the progress AI has made on the problems at this year's International Mathematical Olympiad is clear evidence that AI can handle long, complex chains of reasoning, with the potential to extend to fields beyond mathematics.
To tackle this year's problems, Google used a general-purpose reasoning model called Gemini Deep Think. Unlike previous AI experiments that relied on programming languages, this latest Gemini model operates entirely in natural language, and it solved the problems within the competition's allotted time of 4.5 hours.
DeepMind said that last year it reached only a silver-medal score. Last year's model was purpose-built for mathematical tasks and worked in a formal programming language. This year, DeepMind switched to a more general model that uses natural language, and the results show that its AI has made significant progress.
Both OpenAI and Google DeepMind built their own experimental versions to solve the International Mathematical Olympiad problems. The results are very promising, but neither company plans to release its model publicly in the near future.
This year also marks the first time the International Mathematical Olympiad organizing committee has collaborated with AI laboratories to test the mathematical reasoning capabilities of current AI models; the committee will verify and certify the AI companies' results.
Demis Hassabis, CEO of Google DeepMind, said: "We adhere to the competition organizers' requirement that AI labs publish results only after independent expert verification, and after the participating students have received due recognition."
Source: https://dantri.com.vn/giao-duc/ai-cua-google-va-openai-doat-huy-chuong-vang-olympic-toan-quoc-te-20250726093259564.htm