OpenAI Introduces Voice-Cloning Tool
OpenAI recently revealed a voice-cloning tool named “Voice Engine,” designed with strict controls to prevent audio fakes from deceiving listeners. According to a recent OpenAI blog post, the tool can replicate a person’s speech from just a 15-second audio sample.
Risks and Concerns
The company, based in San Francisco, acknowledged the serious risks associated with creating speech that mimics individuals’ voices, particularly in the context of an election year. OpenAI emphasized the importance of addressing these risks and engaging with various stakeholders to gather feedback and ensure responsible development of the technology.
Mitigating Misuse of AI-Powered Tools
Disinformation researchers have raised concerns about the potential misuse of AI-powered applications, especially during crucial election periods. Increasingly accessible voice-cloning tools pose a significant challenge because they are cheap, easy to use, and hard to detect.
- OpenAI is approaching the release of Voice Engine cautiously, considering the risks associated with synthetic voice manipulation.
- Partners involved in testing the tool have agreed to strict guidelines, including obtaining explicit consent from individuals whose voices are replicated.
- The company is committed to transparency by ensuring that audiences are informed when they are listening to AI-generated voices.
Addressing Misinformation Concerns
A notable incident involving a political consultant impersonating President Joe Biden with an AI-generated robocall has underscored the urgency of addressing deepfake disinformation. Experts are concerned about the potential impact of such technology on upcoming elections worldwide.
OpenAI has implemented safety measures to mitigate the risks associated with Voice Engine, including watermarking to trace the origin of generated audio and proactive monitoring of how the tool is used.
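OpenAI has not published the details of its watermarking scheme, but the general idea behind audio watermarking can be illustrated with a toy sketch: a secret key seeds a pseudo-random pattern that is mixed into the signal at low amplitude, and a detector holding the same key checks how strongly the audio correlates with that pattern. This is purely illustrative, not OpenAI’s method; real audio watermarks use psychoacoustic masking and are engineered to survive compression and editing.

```python
import math
import random

def pattern(key, n):
    """Pseudo-random +/-1 sequence derived from a secret key."""
    rng = random.Random(key)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed_watermark(audio, key, strength=0.05):
    # Strength is exaggerated for the demo; real marks are inaudible.
    pat = pattern(key, len(audio))
    return [a + strength * p for a, p in zip(audio, pat)]

def watermark_score(audio, key):
    # Average correlation: ~strength if the mark is present, ~0 otherwise.
    pat = pattern(key, len(audio))
    return sum(a * p for a, p in zip(audio, pat)) / len(audio)

# Demo on a synthetic one-second "voice" signal (220 Hz tone, 16 kHz rate)
n = 16000
audio = [0.5 * math.sin(2 * math.pi * 220 * i / n) for i in range(n)]
marked = embed_watermark(audio, key=42)

print(watermark_score(marked, key=42) > 0.02)       # mark detected
print(abs(watermark_score(audio, key=42)) < 0.02)   # clean audio: no mark
print(abs(watermark_score(marked, key=7)) < 0.02)   # wrong key: no mark
```

Because detection requires the key, only the party that embedded the mark can reliably verify provenance, which matches the article’s framing of watermarking as a source-tracing measure.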