After nearly two weeks of announcements, OpenAI capped off its 12 Days of OpenAI livestream series with a preview of its next-generation frontier model. “Out of respect for friends at Telefónica (owner of the O2 cellular network in Europe), and in the grand tradition of OpenAI being really, truly bad at names, it’s called o3,” OpenAI CEO Sam Altman told those watching the announcement on YouTube.
The new model isn’t ready for public use just yet. Instead, OpenAI is first making o3 available to researchers who want to help with safety testing. OpenAI also announced the existence of o3-mini. Altman said the company plans to launch that model “around the end of January,” with o3 following “shortly after that.”
As you might expect, o3 offers improved performance over its predecessor, but just how much better it is than o1 is the headline feature here. For example, when put through this year’s American Invitational Mathematics Examination, o3 achieved an accuracy score of 96.7 percent. By comparison, o1 earned a more modest 83.3 percent. “What this signifies is that o3 often misses just one question,” said Mark Chen, senior vice president of research at OpenAI. In fact, o3 did so well on the usual suite of benchmarks OpenAI puts its models through that the company had to find harder tests to benchmark it against.
One of those is ARC-AGI, a benchmark that tests an AI algorithm’s ability to intuit and learn on the spot. According to the test’s creator, the nonprofit ARC Prize, an AI system that could successfully beat ARC-AGI would represent “an important milestone toward artificial general intelligence.” Since its debut in 2019, no AI model has beaten ARC-AGI. The test consists of input-output questions that most people can figure out intuitively. For instance, in the example above, the correct answer would be to create squares out of the four polyominos using dark blue blocks.
On its low-compute setting, o3 scored 75.7 percent on the test. With additional processing power, the model achieved a score of 87.5 percent. “Human performance is comparable at 85 percent threshold, so being above this is a major milestone,” according to Greg Kamradt, president of the ARC Prize Foundation.
OpenAI also showed off o3-mini. The new model uses OpenAI’s recently announced Adaptive Thinking Time API to offer three different reasoning modes: Low, Medium and High. In practice, this lets users adjust how long the software “thinks” about a problem before delivering an answer. As you can see from the graph above, o3-mini can achieve results comparable to OpenAI’s current o1 reasoning model, but at a fraction of the compute cost. As mentioned, o3-mini will arrive for public use ahead of o3.