OpenAI announced the news about the ChatGPT voice feature launch in an X post on Tuesday.
OpenAI has postponed the release of its advanced voice conversation feature, which was originally set to start rolling out in June. The company said it needed more time and delayed the launch until the end of July.
Today's technology news indicates that the audio feature will let users talk to ChatGPT and get answers in real time, without excessive waits. Users will also be able to interrupt the voice while it is speaking. These two features, both central to natural, real-life dialogue, have proven to be a challenge for AI assistants until now.
ChatGPT's new voice was first exhibited this May, when it stunned the public with its quick answers and uncanny similarity to a real human voice. The voice, called Sky, sounded like actress Scarlett Johansson, who voiced the artificial assistant in the movie “Her”.
Not long after the presentation, the actress said she had turned down multiple requests from OpenAI CEO Sam Altman to use her voice. The company denied that the voice was based on hers, but Johansson hired a lawyer to defend her rights. After this episode, OpenAI removed the voice and postponed the feature's launch to ensure that all safety measures were taken.
We also learn that ChatGPT's Advanced Voice Mode will be different from the Voice Mode that is currently accessible. The previous audio feature chained three different models: one to transcribe the user's voice into text, GPT-4 to process the prompt, and a last one to turn GPT-4's written reply back into voice.
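To make that three-step chain concrete, here is a minimal sketch of how such a pipeline can be wired together with OpenAI's Python SDK. The specific model names (whisper-1, tts-1), the voice, and the file paths are illustrative assumptions; the announcement itself does not name them.

```python
# Minimal sketch of the legacy three-model Voice Mode chain.
# Model names, voice, and file paths are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: transcribe the user's speech into text.
with open("user_question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# Step 2: send the transcribed prompt to GPT-4 for a text reply.
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": transcript.text}],
)
reply_text = chat.choices[0].message.content

# Step 3: convert the written reply back into speech.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply_text,
)
with open("assistant_reply.mp3", "wb") as out:
    out.write(speech.content)
```

Each of the three steps is a separate, sequential API call, so every conversational turn pays three network round trips plus the cost of passing intermediate text between models.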
The new update will be multimodal, meaning a single model will handle all of these tasks without involving any auxiliary models, which should make response times in conversation much lower. OpenAI also claims that the app will understand emotional cadences in the user's voice, including excitement, sadness, joy, or even singing.
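To see why fusing the three steps into one model should cut response times, here is a back-of-the-envelope comparison. Every timing figure below is an assumption invented for illustration; OpenAI has published no such numbers in this announcement.

```python
# Back-of-the-envelope latency comparison (illustrative numbers only;
# the announcement gives no figures, so these are assumptions).

PER_CALL_OVERHEAD_S = 0.3   # assumed network + queueing overhead per API call
TRANSCRIBE_S = 1.0          # assumed speech-to-text processing time
CHAT_S = 1.5                # assumed GPT-4 generation time
TTS_S = 1.0                 # assumed text-to-speech synthesis time

# Legacy chain: three sequential calls, each paying the overhead.
pipeline_latency = 3 * PER_CALL_OVERHEAD_S + TRANSCRIBE_S + CHAT_S + TTS_S

# Multimodal model: one call, one overhead; speech understanding and
# synthesis are fused into the same forward pass as generation.
multimodal_latency = PER_CALL_OVERHEAD_S + CHAT_S

print(f"chained pipeline: ~{pipeline_latency:.1f}s per turn")
print(f"single multimodal call: ~{multimodal_latency:.1f}s per turn")
```

Under these assumed numbers the chained pipeline costs roughly 4.4 seconds per turn versus about 1.8 seconds for a single multimodal call, which is the kind of gap that makes interruptions and real-time back-and-forth feel natural.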