ChatGPT unexpectedly began speaking in a user’s cloned voice during testing

OpenAI has released the system card for GPT-4o, the new AI model that powers ChatGPT, outlining the model's limitations and safety testing procedures. The document reveals that, in rare cases during testing, the model's Advanced Voice Mode (a feature that lets users hold spoken conversations with the AI assistant) unintentionally imitated users' voices, a potential security concern that OpenAI says it already has safeguards against. The system card describes one instance of unauthorized voice generation in which a noisy input prompted the model to continue speaking in an imitation of the user's voice. To prevent this, OpenAI restricts the model to a set of authorized voice samples and runs an output classifier that detects unauthorized audio generation. A BuzzFeed data scientist joked that the incident could be a Black Mirror plot, highlighting the potential creepiness of AI voice imitation.
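The system card does not publish the classifier itself, but the general idea of an output classifier for unauthorized voices can be sketched as a similarity check between the generated audio's speaker embedding and the embedding of the authorized reference voice. The function names, the 4-dimensional toy "embeddings", and the 0.8 threshold below are all hypothetical illustrations, not OpenAI's implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_authorized_voice(output_emb: np.ndarray,
                        authorized_emb: np.ndarray,
                        threshold: float = 0.8) -> bool:
    """Hypothetical check: accept generated audio only if its speaker
    embedding stays close to the authorized reference voice."""
    return cosine_similarity(output_emb, authorized_emb) >= threshold

# Toy, made-up embeddings (a real system would derive these from audio
# with a speaker-verification model):
authorized = np.array([1.0, 0.2, 0.1, 0.0])   # reference voice
on_voice   = np.array([0.9, 0.25, 0.05, 0.02])  # output close to the reference
off_voice  = np.array([0.0, 0.1, 1.0, 0.9])     # output drifted to another voice

print(is_authorized_voice(on_voice, authorized))   # expected: True
print(is_authorized_voice(off_voice, authorized))  # expected: False
```

In a real deployment the check would run on embeddings produced by a speaker-verification model over the generated audio stream, cutting off generation when the similarity to the authorized voice drops.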

https://arstechnica.com/information-technology/2024/08/chatgpt-unexpectedly-began-speaking-in-a-users-cloned-voice-during-testing/