George Carlin’s estate settles lawsuit over AI Special

A settlement has been reached between the estate of George Carlin and the producers of a podcast that used generative artificial intelligence to impersonate the late stand-up comic's voice and style in an unauthorized special.

Will Sasso and Chad Kultgen, hosts of the Dudesy podcast, and the estate of George Carlin notified the court on Tuesday of an agreement to resolve the case. Josh Schiller, an attorney for the estate, says that under the deal a permanent injunction will be entered barring further use of the video, which has already been removed and which the estate maintains was made in violation of the comic's rights. Further terms of the agreement were not disclosed. Schiller declined to comment on whether monetary damages were paid.

The deal is believed to be the first settlement of a lawsuit over the misuse of a celebrity's voice or likeness with AI tools. It comes as Hollywood raises the alarm over the use of the technology to exploit the personal brands of actors, musicians and comics without consent or compensation.

“It sends a message that you have to be very careful about using AI technology and respect people’s hard work and goodwill,” says Schiller. He adds that the deal “will serve as a blueprint to resolve similar disputes going forward where the rights of an artist or public figure are violated by AI technology.”

George Carlin’s daughter, writer and producer Kelly Carlin, said in a statement that the case “serves as a warning about the dangers posed by AI technologies, not just to artists and creators but to every human being on Earth,” and that appropriate safeguards are required.

The legal battle stems from the hour-long special George Carlin: I’m Glad I’m Dead, which was released on the podcast’s YouTube channel in January. In the episode, an AI-generated George Carlin, emulating the comedian’s signature style and cadence, narrates commentary over AI-generated images and tackles modern topics such as reality TV, streaming services and the spread of AI.

The podcast has been described as “the first media experiment of its kind.” Its premise revolves around an AI program called “Dudesy” – which has access to much of the hosts’ personal information, including text messages, social media accounts, records and browsing history – that writes episodes in the style of Sasso and Kultgen.

Schiller says the podcasters approached George Carlin’s estate with an offer to remove the video and not republish it on any platform. He adds that the parties wanted to resolve the matter “quickly and respectfully” out of regard for Carlin’s legacy.

The lawsuit alleged copyright infringement for unauthorized use of the comedian’s copyrighted works.

At the beginning of the video, it is explained that the AI program that created it specifically ingested five decades of George Carlin’s original stand-up routines – material owned by the comedian’s estate – as training data.

The complaint also alleged violations of publicity rights over the use of George Carlin’s name and likeness. The special was promoted as an AI-generated George Carlin installment in which the late comedian was “resurrected” with the use of AI tools.

The special was not the first time Dudesy used AI to impersonate a celebrity. Last year, Sasso and Kultgen released an episode featuring an AI-generated Tom Brady performing a stand-up routine. It was taken down after they received cease-and-desist letters.

In the absence of federal laws covering the use of AI to mimic a person’s appearance or voice, a patchwork of state laws has filled the void. Still, there is little recourse for those in states that have not passed such protections, which has prompted lobbying from Hollywood.

This prompted a bipartisan coalition of House lawmakers to introduce a long-awaited bill in January to crack down on the publication and distribution of unauthorized digital reproductions, including deepfakes and voice clones. The bill aims to give individuals the exclusive right to approve the use of their image, voice and visual likeness by granting them an intellectual property right at the federal level. Under the bill, unauthorized uses would be subject to stiff penalties and open to litigation by any individual or group whose exclusive rights are affected.

In March, Tennessee became the first state to pass a law specifically protecting musicians from the unauthorized use of AI to copy their voices. The Ensuring Likeness Voice and Image Security Act, or ELVIS Act, builds on the state’s existing right of publicity law by adding a person’s “voice” to the scope of what it protects. California has not yet updated its law.

Tuesday’s deal comes as OpenAI prepares to launch a new tool that can recreate a person’s voice from a 15-second recording: given a sample and a piece of text, it can read that text back in the voice from the recording. The Sam Altman-led company said it is holding off on a broad release of the technology to better understand the potential harms, such as its use to impersonate people to spread misinformation and facilitate scams.

Amid the rise of AI voice-mimicry tools, there is an ongoing debate over whether platforms hosting infringing content should be subject to liability. Under the Digital Millennium Copyright Act, platforms like YouTube qualify for safe-harbor protection as long as they take certain steps to remove potentially infringing material. Artist advocacy groups have demanded amendments to the law.

“This is not a problem that will go away on its own,” Schiller said in a statement. “This must be met with swift, forceful action in the courts, and the AI software companies whose technology is being weaponized must also bear some degree of accountability.”
