OpenAI released a voice assistant to a small pool of paying subscribers, a service based on its advanced GPT-4o model, which the company says can replicate natural human interaction.

In a series of posts on social media, the company said it had started rolling out Advanced Voice Mode to a small group of ChatGPT Plus users, adding it expects all Plus subscribers to have access to the new feature by September.

Video and screen sharing capabilities will complement the voice mode, OpenAI said.

The developer noted GPT-4o’s voice capabilities had been tested with more than one hundred external security experts across 45 languages. The model has been trained to speak in four preset voices to “protect people’s privacy”.

It has also put systems in place to block outputs that deviate from the preset voices and to reject requests for violent or copyrighted content.

The Advanced Voice Mode offers “more natural real-time conversations, allows you to interrupt anytime, and senses and responds to your emotions”, OpenAI claimed.

In May, OpenAI withdrew its Sky voice following accusations it replicated the voice of actress Scarlett Johansson without her consent.

Recently, the company launched SearchGPT, a prototype of AI-powered search features it plans to integrate into its ChatGPT chatbot. It has also developed an AI video generator named Sora.