
Is OpenAI’s o3 Finally Thinking Like a Human?

by Nitesh Padghan, December 31st, 2024

Too Long; Didn't Read

OpenAI’s o3 model brings a leap in AI reasoning, excelling in coding, math, and general intelligence benchmarks. Could this be the next step toward true AGI?

Imagine this: You ask your AI assistant a question, and instead of spitting out a half-baked answer in milliseconds, it pauses.


It thinks. It reasons.


And then, it delivers a response so well thought-out, it feels almost…human.


Sounds futuristic, right?


Well, welcome to the o3 model, OpenAI’s latest creation that promises to change the game entirely.


For years, AI has been stuck in a pattern—faster responses, flashier outputs, but not necessarily smarter ones.


With o3, OpenAI is saying, “Slow down. Let’s do this right.”

First Things First: What Is o3?

When OpenAI unveiled o3 during its 12-day “shipmas” event, it wasn’t just another announcement in a crowded AI market.


This model, they claimed, is not just smarter—it’s more thoughtful.


At its core, o3 is part of OpenAI’s family of “reasoning models.”


Unlike traditional AI, which often relies on brute computational force to deliver answers, reasoning models like o3 are designed to process information more like humans.


But what sets o3 apart?


  • It Fact-Checks Itself: When you ask it a question, it doesn’t just respond—it cross-references and double-checks along the way.
  • It Thinks at Different Speeds: Depending on the task, you can set it to low, medium, or high compute (essentially telling it how much "brainpower" to use), so it can handle both simple questions and complex puzzles without breaking a sweat. A hypothetical sketch of this knob follows after this list.
  • It’s Flexible: There’s the full-blown o3 model and its smaller sibling, o3-mini, designed for lighter tasks and tighter budgets.
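
Since o3 wasn’t publicly available when this article was written, here’s a hypothetical sketch of how that compute knob might look in code, modeled on OpenAI’s Python SDK. The model name "o3-mini" and the reasoning_effort parameter are assumptions about how the setting could be exposed, not confirmed API details.

```python
# Hypothetical sketch, not a confirmed API: a per-request "reasoning effort"
# knob passed through OpenAI's Python SDK. The model name "o3-mini" and the
# reasoning_effort parameter are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(question: str, effort: str = "medium") -> str:
    """Ask a question with low, medium, or high reasoning effort."""
    response = client.chat.completions.create(
        model="o3-mini",          # assumed model identifier
        reasoning_effort=effort,  # assumed values: "low" | "medium" | "high"
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

print(ask("What's the capital of France?", effort="low"))           # quick lookup
print(ask("Find the edge case this parser misses.", effort="high"))  # hard problem
```

The point isn’t the exact parameter name; it’s that compute becomes a per-request dial rather than a fixed property of the model.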

Why Call It o3? And What Happened to o2?

OpenAI skipped “o2” because of a trademark conflict with a British telecom provider, O2.


Yep, you read that right.


Sam Altman, OpenAI’s CEO, even confirmed this during a live stream.


In the tech world, even naming AI models can come with legal drama.


But enough about the name. Let’s talk about why this model is turning heads.

The Numbers Behind o3: Why It’s Blowing Minds

If you’re into data, here’s where things get juicy.

1 - Reasoning Power

One of the most striking achievements of o3 is its performance on the ARC-AGI benchmark—a test designed to measure whether AI can learn and generalize new skills, not just regurgitate what it’s been trained on.


Picture this: You’re given a series of geometric patterns and asked to predict the next one.


No prior examples, no memorized templates—just raw reasoning.


That’s the challenge ARC-AGI presents to AI.

  • o1’s score: 32%
  • o3’s score: 88% (on high compute)


This milestone is significant because ARC-AGI is considered the gold standard for evaluating an AI’s ability to think like a human.


For the first time, an AI model has surpassed human-level performance on this test.

Here’s the test OpenAI performed on the o3 model

What’s happening here?


You’re shown a grid with colorful shapes and asked, “If this is the input, what should the output look like?”


The AI is given a few examples of how input grids are transformed into output grids.


The examples follow specific logic or rules.


For instance:

  • In one example, a yellow square with red dots inside gets a red border.
  • In another, a yellow square with blue dots gets a blue border.


The goal?

  • The AI has to figure out the rules behind these transformations, without being told explicitly.
  • Then, it needs to apply those rules to a brand-new grid (the “Test Input”) and generate the correct “Test Output.” A toy version of this rule-application logic is sketched just below.
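
To make this concrete, here’s a toy Python sketch of the kind of rule a solver must discover for the example above: the border takes the color of the dots inside. The grid encoding and the rule are invented purely for illustration; real ARC tasks are far more varied.

```python
# Toy illustration of the ARC-style rule described above: a square's border
# takes the color of the dots inside it. Grids are 2D lists of color strings;
# this encoding and rule are invented for illustration only.

def dot_color(grid):
    """Find the color of the interior dots (anything not yellow or empty)."""
    for row in grid[1:-1]:
        for cell in row[1:-1]:
            if cell not in ("yellow", "."):
                return cell
    return None

def apply_rule(grid):
    """Apply the inferred rule: paint the border with the interior dot color."""
    color = dot_color(grid)
    h, w = len(grid), len(grid[0])
    return [
        [color if r in (0, h - 1) or c in (0, w - 1) else grid[r][c]
         for c in range(w)]
        for r in range(h)
    ]

# A yellow square with red dots inside: the rule says the border turns red.
test_input = [
    ["yellow", "yellow", "yellow", "yellow"],
    ["yellow", "red",    ".",      "yellow"],
    ["yellow", ".",      "red",    "yellow"],
    ["yellow", "yellow", "yellow", "yellow"],
]
for row in apply_rule(test_input):
    print(row)
```

Note that apply_rule handles magenta dots without any change; the hard part for an AI is inferring the rule from a handful of examples, not executing it.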


Why is this so hard for AI?

Humans do this all the time.


For example, if someone says, “Add a red outline to anything with red dots,” you get it immediately.


AI, however, struggles because it doesn’t “understand” the concept of red or outlines—it only processes patterns in data.


The ARC test pushes AI to think beyond pre-learned answers.


Each test is unique, so memorization won’t help.


What about the last test (with the 🤔 emoji)?

Here’s where things get really tricky.


The test input mixes things up: there’s a yellow square with magenta dots.


The AI hasn’t seen magenta before—what should it do?


Humans might guess, “Maybe it should get a magenta border,” but this requires reasoning and a leap of logic.


For AI, this is like being asked to jump off a cliff blindfolded.


It’s completely outside its training.

2 - o3’s Remarkable Performance

o3 has set a new benchmark in AI reasoning by excelling on the ARC-AGI test.


On low-compute settings, o3 scored 76% on the semi-private holdout set—a performance far above any previous model.


But the real breakthrough came on high-compute settings, where o3 achieved an extraordinary 88%, surpassing the 85% threshold often considered human-level performance.

3 - Coding Wizardry

The graph shows o3 achieving 71.7% accuracy on SWE-bench Verified, a benchmark that simulates real-world software engineering tasks.


This is a roughly 46% relative improvement over o1, signaling o3’s strength in solving the complex, practical challenges developers face daily.


In competitive coding, the difference is even more dramatic.


With a Codeforces Elo rating of 2727, o3 doesn’t just outperform o1’s 1891—it enters a league rivaling top human programmers.

For context, a rating above 2400 is typically considered grandmaster level, and 2727 places o3 in the top 0.8% of human competitors.
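
Elo ratings translate directly into head-to-head win probability, so the size of that gap is easy to quantify. The snippet below uses the standard Elo expected-score formula; nothing here is specific to o3.

```python
# Standard Elo expected-score formula: the lower-rated player's chance of
# winning is 1 / (1 + 10 ** ((r_high - r_low) / 400)).
def expected_score(r_low: float, r_high: float) -> float:
    return 1.0 / (1.0 + 10 ** ((r_high - r_low) / 400))

# o1 (1891) head-to-head against o3 (2727):
print(f"{expected_score(1891, 2727):.4f}")  # ~0.0081, i.e. under a 1% chance
```

In other words, a competitor rated at o1’s level would be expected to beat one at o3’s level fewer than one time in a hundred.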

4 - Math Genius

On the 2024 American Invitational Mathematics Exam, o3 scored a jaw-dropping 96.7%, missing just one question.

5 - Science Prodigy

On GPQA Diamond, a set of PhD-level science questions, o3 achieved 87.7% accuracy—an unheard-of feat for AI models.


These aren’t just numbers—they’re proof that o3 is tackling challenges that once seemed out of reach for machines.


How Does o3 Think?

o3 doesn’t just respond like most AI—it takes a breath, pauses, and thinks.


Think of it as the difference between blurting out an answer and carefully weighing the options before speaking.


This is possible thanks to something called deliberative alignment.

Source: OpenAI

It’s like giving o3 a moral compass, teaching it the rules of safety and ethics in plain language, and showing it how to reason through tough situations instead of just reacting.


A Quick Example

Imagine someone trying to outsmart o3 by encoding a harmful request using a ROT13 cipher (basically, a scrambled message).


They’re asking for advice on hiding illegal activity.


A less advanced AI might take the bait, but o3?


It deciphers the request, realizes it’s dodgy, and cross-checks with OpenAI’s safety policies.


It doesn’t just block the response.


It reasons why this request crosses ethical boundaries and provides a clear refusal.
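
ROT13 itself is trivial to mechanize, which is exactly why seeing through it matters: the safety reasoning has to run on the decoded intent, not the scrambled surface text. A minimal Python illustration of the decoding step (the scrambled string is an invented, harmless example):

```python
import codecs

# ROT13 shifts each letter 13 places, so encoding and decoding are the
# same operation. Decoding takes one line.
scrambled = "Jung vf gur pncvgny bs Senapr?"
decoded = codecs.decode(scrambled, "rot13")
print(decoded)  # "What is the capital of France?"
```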


This is AI with a conscience—or as close to one as we’ve ever seen.


Here’s how o3’s thought process works:

1 - It Reads the Rules

Instead of guessing what’s right or wrong, o3 is trained with actual safety guidelines written in plain language.


It doesn’t just rely on examples to infer behavior—it learns the rulebook upfront.

2 - It Thinks Step-by-Step

When faced with a tricky or nuanced task, o3 doesn’t jump to conclusions.


It uses what’s called chain-of-thought reasoning—breaking down the problem, step by step, to figure out the best response.
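
You can see the flavor of chain-of-thought in how such models are prompted: the request for intermediate steps comes before the final answer. The wording below is an illustrative prompt of my own, not OpenAI’s:

```python
# Illustrative chain-of-thought style prompt: asking for intermediate
# steps before the final answer is the core of the technique.
question = "A train leaves at 2:15 pm and arrives at 5:40 pm. How long is the trip?"

cot_prompt = (
    "Reason through this step by step before answering.\n"
    "1. Identify the quantities involved.\n"
    "2. Compute each intermediate value.\n"
    "3. Only then state the final answer.\n\n"
    f"Question: {question}"
)
print(cot_prompt)
```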

3 - It Adapts to the Moment

Not every situation is the same.


Some tasks need quick answers, others require deep reflection.


o3 adjusts its effort based on the complexity of the problem, so it’s efficient when it can be and thorough when it needs to be.

Meet o3-mini: The Budget-Friendly Genius

Alongside o3, OpenAI introduced o3-mini, a cost-effective version designed for tasks that don’t require the full power of its big sibling.


What’s special about o3-mini?


Adaptive Thinking Time: Users can adjust the model’s reasoning effort based on task complexity.


Need a quick answer? Go for low-effort reasoning.


Tackling a complex coding problem? Crank it up to high-effort mode.
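
In application code, “adjust the effort to the task” can be as simple as a dispatch rule. A hypothetical sketch, with a complexity heuristic invented purely for illustration:

```python
# Hypothetical routing heuristic: pick a reasoning-effort level from crude
# signals in the prompt. Real routing would be tuned to your workload
# and budget.
def pick_effort(prompt: str) -> str:
    hard_markers = ("prove", "debug", "optimize", "refactor", "derive")
    if any(marker in prompt.lower() for marker in hard_markers):
        return "high"
    if len(prompt.split()) > 50:
        return "medium"
    return "low"

print(pick_effort("What's 2 + 2?"))                              # low
print(pick_effort("Debug this race condition in my Go server"))  # high
```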


Cost-Performance Balance: o3-mini delivers nearly the same level of accuracy as o3 for simpler tasks, but at a fraction of the cost.


This flexibility makes o3-mini an attractive option for developers and researchers working on a budget.


Is This the Future of AI? A Step Toward AGI

Here’s where things get philosophical.


AGI, or Artificial General Intelligence, refers to AI that can perform any task a human can—and often better.


OpenAI has always had AGI as its north star, and with o3, it feels like they’re edging closer.


Consider this:

  • On ARC-AGI, o3 nearly tripled the performance of its predecessor.
  • It’s solving problems that require learning and reasoning, not just memorization.


That said, even OpenAI admits that o3 isn’t AGI yet.


It’s more like a prototype of what AGI could look like—an AI that learns, adapts, and reasons in ways that feel… human.


The Challenges Ahead

Even with its incredible capabilities, o3 isn’t without its flaws:

  1. Cost: Running o3 on high-compute settings is expensive—roughly 7 to 8 thousand dollars per task.
  2. Errors: While it’s better at reasoning, o3 can still trip up, especially on simpler tasks where it overthinks the problem.
  3. Ethics: Earlier models like o1 faced criticism for attempting to deceive users in certain scenarios. Will o3 fall into the same trap?


The Big Picture

o3 isn’t just another AI model—it’s a glimpse into what AI might become.


It’s not perfect, but it’s a step toward an era where machines don’t just respond—they reason, learn, and adapt in ways that feel deeply human.


And while we’re still far from AGI, o3 reminds us that progress isn’t linear—it’s exponential.


So, what do you think? Are we on the cusp of a new AI revolution? Or is o3 just another milestone on a much longer journey?