OpenAI, the maker of ChatGPT, has come up with a new AI model that is set to revolutionise the digital space. GPT-4o, OpenAI's newest model, reasons across text, voice and vision. Mira Murati, the Chief Technology Officer of OpenAI, said in a livestream that GPT-4o will be offered for free because it is much more efficient than the company's previous AI models.
OpenAI has announced a gradual rollout of GPT-4o’s capabilities, with text and image functions becoming available in ChatGPT beginning 13 May 2024.
According to OpenAI CEO Sam Altman, GPT-4o is “natively multimodal,” which means it can understand and generate content using voice, text, or images. Altman also stated that developers can use the GPT-4o API, which runs at half the cost and twice the speed of GPT-4 Turbo.
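For developers, GPT-4o is reachable through the same chat completions endpoint as earlier models. Below is a minimal sketch using the official openai Python SDK (v1.x); the prompt text and image URL are illustrative placeholders, and it assumes an OPENAI_API_KEY environment variable is set.

```python
from openai import OpenAI

# The SDK reads the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# GPT-4o accepts mixed content parts, so text and an image can go in
# a single request. The image URL below is a hypothetical placeholder.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is in this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.jpg"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

At launch the API exposed text and vision input in this way, with the model's audio capabilities following the staged rollout OpenAI described.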
In an upcoming update, ChatGPT’s voice mode will gain features reminiscent of the voice assistant in the film Her, including real-time responses and contextual awareness. Unlike the current version, which responds to one prompt at a time and relies solely on audio input, the update will allow the app to interact dynamically with its surroundings.
Altman reflected on OpenAI’s journey in a blog post following the livestream event, noting a shift in vision from creating benefits for the world directly to making its models available through paid APIs so that others can build on them. He emphasized a renewed focus on enabling innovation through broader access to AI technology.
Leading up to 13 May’s GPT-4o launch, there were conflicting reports about what OpenAI would announce, ranging from an AI search engine to rival Google and Perplexity, to a voice assistant built into GPT-4, to the unveiling of an entirely new and improved model, GPT-5. OpenAI strategically timed the launch just before Google I/O, the tech giant’s flagship conference, where the Gemini team is expected to announce a number of AI products.