Google DeepMind has developed an AI system that surpasses the average performance of gold medalists on geometry problems from the International Mathematical Olympiad (IMO). As reported by Kyle Wiggers, the new system, called AlphaGeometry2, solved 84% of the IMO geometry problems posed over the past 25 years. The AI pairs a Gemini language model with a symbolic engine to analyze geometric diagrams and construct mathematical proofs. In testing, AlphaGeometry2 solved 42 of 50 IMO problems, exceeding the average gold medalist score of 40.9. Because real-world training data is scarce, the system was trained on more than 300 million synthetic theorems and proofs. It still struggles with certain problem types, however, including those involving nonlinear equations and inequalities.
DeepMind’s AlphaGeometry2 outperforms math olympiad gold medalists
Tags: DeepMind