OpenAI is now a tech giant with a product-based business model, anchored by ChatGPT, which fields 2.5 billion requests per day from people around the world. But it began as a research lab whose mission was not only to create an "artificial intelligence" but to ensure that it benefited all of humanity.
My colleague Will Douglas Heaven recently sat down for an exclusive interview with the two OpenAI figures most responsible for pursuing those original ambitions: chief research officer Mark Chen and chief scientist Jakub Pachocki. If you haven't read the article yet, it's well worth your time.
As Will points out, OpenAI recently notched two milestones in its quest to build AI that can outperform humans. Its models took second place at an elite coding competition and, alongside models from Google DeepMind, achieved gold-medal-level results at the 2025 International Mathematical Olympiad.
Those who believe AI is no real rival to human intelligence might take some comfort from this. Coding competitions and olympiads showcase exactly the mathematical and analytical tasks AI excels at; it is far worse at navigating the complexities of human emotion, making difficult judgment calls, or creating art. But OpenAI's research leaders aren't inclined to draw that line between machine-like reasoning and creative thinking.
"We're talking about programming and math here," Pachocki said. "But it's about creativity, coming up with novel ideas, and connecting ideas from different places."
That, the researchers say, is why these testing grounds will yield models with an ever-greater ability to reason like a human — one of OpenAI's central goals. Even today's best reasoning models remain limited in how well they can combine information and work through problems logically.
OpenAI's researchers are pouring time and money into this problem — not because they think it will win more math contests, but because they believe it will bring their models closer to human intelligence. When Will remarked that he was fine with AI excelling at math and coding, but less so with the idea of an AI with people skills replacing politicians, Chen made a face, looked at the ceiling, and said: "Why not?" Will Douglas Heaven has the full story.
Originally published in The Algorithm – our weekly AI newsletter. Get stories like this delivered to your inbox. Sign up here

