This year will see the launch of more and more AI-native devices and smart glasses, all of which are controlled by voice. Anyone who regularly uses Siri or Gemini will admit that the conversation modes of these virtual assistants still sound robotic rather than human. The Information reported this morning that OpenAI is developing a new AI audio model that will allow for more natural conversations.
The new model is expected to launch around March and will be built on a new architecture. Development is led by Kundan Kumar, who previously worked at Character.AI Inc. Beyond making conversations sound more natural, the new model should let OpenAI's AI assistant carry on continuous two-way conversations more smoothly.
The push to develop a better AI audio model began because OpenAI's first physical product will rely entirely on voice commands. Some say it will resemble screen-free AirPods or smart glasses, and some reports even suggest it will be a smart pen. What is certain is that it will not have a screen that requires the owner to type.
