OpenAI introduces GPT-4o model

OpenAI announced a major update on Monday with the introduction of its latest model, GPT-4o.

The announcement was made during the OpenAI Spring Update event, hosted by the company's CTO, Mira Murati.

Company chief Sam Altman said in a statement: “First, a key part of our mission is to put very capable AI tools in the hands of people for free (or at a great price). I am very proud that we’ve made the best model in the world available for free in ChatGPT, without ads or anything like that.”

“Second, the new voice (and video) mode is the best computer interface I’ve ever used. It feels like AI from the movies; and it’s still a bit surprising to me that it’s real. Getting to human-level response times and expressiveness turns out to be a big change.

“The original ChatGPT showed a hint of what was possible with language interfaces; this new thing feels viscerally different. It is fast, smart, fun, natural, and helpful,” he said.

The company said developers can also now access GPT-4o in the API as a text and vision model. GPT-4o is 2x faster, half the price, and has 5x higher rate limits compared to GPT-4 Turbo.
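For developers, a text-and-vision request to GPT-4o would be assembled along the following lines. This is a minimal sketch of the chat completions payload shape; the prompt, image URL, and helper function name are placeholders, and the actual network call through the `openai` client is omitted.

```python
# Sketch of a GPT-4o text-and-vision request payload, as one might send
# through OpenAI's chat completions API. The prompt and image URL are
# placeholders; sending the request (and an API key) is left out.

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat completions payload mixing text and an image."""
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "What is shown in this image?",
    "https://example.com/photo.jpg",  # placeholder URL
)
print(payload["model"])  # gpt-4o
```

The same payload works for text-only requests by dropping the `image_url` content part.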

Prior to GPT-4o, users could talk to ChatGPT via Voice Mode with average latencies of 2.8 seconds (GPT-3.5) and 5.4 seconds (GPT-4).

“To achieve this, Voice Mode is a pipeline of three separate models: one simple model transcribes audio to text, GPT-3.5 or GPT-4 takes in text and outputs text, and a third simple model converts that text back to audio,” reads the OpenAI website.
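The three-stage pipeline quoted above can be sketched as chained functions. The stage functions below are stand-ins, not OpenAI's actual models: each returns a placeholder result along with a nominal per-stage latency, to illustrate how the stages' delays add up to the multi-second turn times reported for the old Voice Mode.

```python
# Toy sketch of the three-model Voice Mode pipeline. All stage
# implementations and latency figures here are illustrative stubs.

def transcribe_audio(audio: bytes) -> tuple[str, float]:
    """Stage 1: a simple model transcribes audio to text."""
    return "hello there", 0.5  # (text, nominal latency in seconds)

def generate_reply(text: str) -> tuple[str, float]:
    """Stage 2: GPT-3.5 or GPT-4 takes in text and outputs text."""
    return f"Reply to: {text}", 4.0

def synthesize_speech(text: str) -> tuple[bytes, float]:
    """Stage 3: a simple model converts that text back to audio."""
    return text.encode(), 0.9

def voice_mode(audio: bytes) -> tuple[bytes, float]:
    """Run the stages in sequence; their latencies accumulate,
    which is why each conversational turn took seconds."""
    text, t1 = transcribe_audio(audio)
    reply, t2 = generate_reply(text)
    speech, t3 = synthesize_speech(reply)
    return speech, t1 + t2 + t3

speech, total_latency = voice_mode(b"\x00\x01")
print(total_latency)  # 5.4
```

Because each stage waits for the previous one to finish, the total delay is the sum of the parts; a single end-to-end model avoids that accumulation.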