Actress Scarlett Johansson has accused OpenAI, a leading artificial intelligence start-up, of mimicking her voice for its AI chatbot after she declined to grant the company permission to use it. OpenAI’s CEO, Sam Altman, had approached Johansson with a licensing offer for her voice, which she turned down. Shortly after her refusal, however, the company unveiled a demo featuring a voice uncannily similar to hers, known as “Sky.”
Johansson expressed shock and anger upon hearing the demo, saying the resemblance was so striking that even her close friends and news outlets could not tell the difference. She also pointed to a tweet from Altman that seemed to insinuate an intentional similarity to the character she voiced in the film “Her.”
Following the controversy, OpenAI suspended the Sky voice, insisting that it was not intended to resemble Johansson’s and that the voice actor for Sky had been cast before any outreach to her. The episode has nonetheless prompted Johansson to consider legal action against the company.
The situation also exposes gaps in the law governing protection of an individual’s voice from AI replication. Johansson stressed the need for transparency and appropriate legislation to address deepfakes and to protect individuals’ identities and work.
OpenAI is also facing multiple legal challenges, including allegations that it violated copyright law by using content from authors and news organizations to train its AI models. The company’s actions have raised concerns about the ethical and legal implications of AI technology as it continues to advance.
As the world grapples with the potential misuse of AI and deepfake technology, Scarlett Johansson’s case serves as a reminder of the importance of establishing clear regulations and safeguards to protect individuals from unauthorized use of their likeness and voice in AI applications.