Google’s Gemini AI Wins a Gold Medal at International Math Olympiad

While this may seem like another “computers are good at math” moment, it is fundamentally different. The Gemini model used here worked end to end in natural language, taking the official problem statements as text input and producing mathematical proofs that IMO graders found clear and precise. Google’s performance this year came from an upgraded version of Gemini Deep Think, an enhanced reasoning layer designed to tackle complex questions. The design integrates the company’s latest research, including parallel thinking, which lets the model explore and synthesize multiple solution paths simultaneously before committing to a final answer, moving beyond a single linear chain of reasoning. All of this is evidence that AI reasoning is steadily advancing toward self-sufficient, multistep problem solving built from layered approaches. Gemini completed its solutions within the competition’s 4.5-hour time window, qualifying it for the gold-medal standard.
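Google has not published the internal mechanics of Deep Think’s parallel thinking, but the general idea of sampling several independent reasoning chains and then committing to the answer they converge on is well known (often called self-consistency). The sketch below is a minimal illustration under that assumption; `solve_one_path` is a hypothetical stub standing in for an actual model call, and the voting step is one simple way to “synthesize” multiple paths.

```python
import collections
from concurrent.futures import ThreadPoolExecutor

def solve_one_path(problem: str, seed: int) -> str:
    """Hypothetical stand-in for one sampled reasoning chain.

    A real system would run a language model here with sampling
    enabled; this stub just varies its answer by seed to mimic
    the diversity of independent reasoning attempts."""
    return "42" if seed % 4 != 0 else "41"

def parallel_think(problem: str, n_paths: int = 8) -> str:
    """Explore several solution paths concurrently, then commit to
    the answer most paths agree on (a self-consistency vote),
    rather than trusting a single linear chain of reasoning."""
    with ThreadPoolExecutor(max_workers=n_paths) as pool:
        answers = list(pool.map(lambda s: solve_one_path(problem, s),
                                range(n_paths)))
    # Majority vote across the sampled paths.
    counts = collections.Counter(answers)
    best_answer, _ = counts.most_common(1)[0]
    return best_answer

print(parallel_think("Let n be a positive integer..."))  # prints "42"
```

In this toy run, six of the eight sampled paths agree, so the vote discards the two outlier chains. A production system would replace the vote with a learned verifier or proof checker, but the structure (fan out, then reconcile) is the same.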
Exact compute costs are unknown, but running a model for 4.5 hours can be quite expensive, especially at the multi-trillion-parameter scale of the highest-end models on Google TPUs with test-time scaling enabled. Google plans to make the Deep Think model available to subscribers of its $249.99/month Google AI Ultra plan, which also includes higher usage rate limits.