Meet o1: OpenAI’s Game-Changing Model That Thinks (Almost) Like a Human!

After a long wait, OpenAI has introduced a new AI model: o1, the first in what they’re calling their “reasoning” series. This new AI powerhouse is built to tackle the tough stuff, complex multistep questions, but at the cost of speed. Oh, and if you’ve heard the whispers, you’re right.

This is the much-hyped “Strawberry” model everyone’s been buzzing about.

So, what makes o1 special?

Well, it’s taking OpenAI a step closer to their ultimate goal: creating AI that can think like humans.

However, for now, it’s pretty great at coding and solving tricky, multistep problems, but don’t expect it to come cheap or fast.

It’s slower and more expensive than GPT-4o, the current darling of the AI world. Right now, OpenAI’s calling this o1 release a “preview” to remind us that this thing is just getting started.

So, now for the more important question: who gets to play with o1 first?

If you’re a ChatGPT Plus or Team user, you can fire up both o1-preview and o1-mini (the cheaper, smaller version) starting today.

Enterprise and Edu users have to wait until next week, though. And while OpenAI promises o1-mini will eventually be available to free ChatGPT users, there’s no set timeline yet.

And if you belong to the developer community, brace yourselves: it’s not cheap for you.

Using o1 through the API will set you back $15 per million input tokens and $60 for a million output tokens. Just for comparison, GPT-4o is way cheaper at $5 and $15, respectively.
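To put those rates in perspective, here’s a quick sketch of the cost math. The per-million-token prices come from the figures above; the token counts in the example are made-up numbers, not anything OpenAI published:

```python
# Published per-million-token API rates, in USD (from the article above).
RATES = {
    "o1-preview": {"input": 15.00, "output": 60.00},
    "gpt-4o": {"input": 5.00, "output": 15.00},
}

def api_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    rate = RATES[model]
    return (input_tokens * rate["input"] + output_tokens * rate["output"]) / 1_000_000

# Hypothetical request: 2,000 input tokens, 1,000 output tokens.
print(f"o1-preview: ${api_cost('o1-preview', 2000, 1000):.4f}")  # $0.0900
print(f"gpt-4o:     ${api_cost('gpt-4o', 2000, 1000):.4f}")      # $0.0250
```

For that hypothetical request, o1-preview costs over three times what GPT-4o does, and the gap widens for output-heavy responses, since output tokens carry the steeper $60 rate.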

But here’s where things get interesting: the training process for o1 is totally different from its predecessors.

According to OpenAI’s research lead, Jerry Tworek, they used a shiny new optimization algorithm and a fresh dataset specifically for o1. Instead of just mimicking patterns like its older siblings, o1 learns through reinforcement learning—which, in simple terms, means it gets better by learning what works and what doesn’t.

It uses something called a “chain of thought,” much like how we tackle problems step by step.

The payoff? OpenAI claims o1 is more accurate than previous models and even “hallucinates” less—though they admit it’s not perfect yet.

When it comes to handling complex problems like math and coding, this model blows GPT-4o out of the water.

For instance, when tested against a qualifying exam for the International Mathematics Olympiad, GPT-4o only managed to solve 13% of the problems correctly. o1? A whopping 83%.

It’s not just in math where o1 shines. In online programming contests (think Codeforces), the model ranked in the 89th percentile of participants. And OpenAI says their next update could have the model performing on par with PhD students in physics, chemistry, and biology. How wild is that?

But before you get too excited: o1 isn’t perfect, as its own developers admit.

It lags behind GPT-4o when it comes to general knowledge about the world.

Plus, it can’t browse the web or process files and images yet.

But even with these drawbacks, OpenAI insists o1 is a glimpse into the future of AI, with the name symbolizing a “reset”.

Rahul Bodana is a News Writer delivering timely, accurate, and compelling stories that keep readers informed and engaged.