With OpenAI’s Voice Engine promising to convincingly replicate an individual’s speech from just a 15-second clip, the focus on AI regulation and on legal challenges to the technology’s operation is intensifying.
While the astonishing progress toward photorealistic generative video from OpenAI’s Sora has been getting an enormous amount of attention, behind the scenes a lot of legal battles are under way. They involve most of the biggest players in the field of generative AI, including Nvidia and Microsoft, OpenAI’s biggest investor, and feature allegations of both copyright violation and defamation.
Several copyright lawsuits are currently under way. Here’s a quick summary.
A group of book authors is alleging that Nvidia used pirated copies of their books to train custom chatbots on its NeMo platform. They are seeking damages for lost income and a court order forcing Nvidia to destroy all copies of the dataset containing their pirated works.
OpenAI is facing several similar suits, though the plaintiffs there, including The New York Times and well-known authors such as Sarah Silverman and Christopher Golden, say they have evidence that OpenAI directly copied copyrighted books to train ChatGPT. The NYT has also alleged that ChatGPT would repeat direct copies of copyrighted NYT content, effectively giving users a way around the paper’s paywall.
Google faced a similar copyright suit when it launched its book search, and defended itself by showing that it delivered only snippets to search users, thus encouraging book sales rather than depriving authors of revenue. The difference here is that the Times says ChatGPT regurgitated several paragraphs of NYT articles in a chat. Essentially, the Times is alleging that OpenAI both stole and reproduced copyrighted works.
It is telling that, in its response filing, OpenAI does not dispute the Times’ claim that it copied millions of the paper’s works to train its AI without permission.
Hallucinatory experiences
The Times also provided examples of ChatGPT hallucinations: fabricated articles that appear realistic, a phenomenon that has led to further suits.
Hallucinations are not a new phenomenon; lawyers and students alike have been caught using AI-generated text that turned out to be false, and in one case a lawyer filed papers in court citing cases that an AI chatbot had simply invented. Whether or not the lawyer knew beforehand that the cited cases were fictional, the court held him responsible for the filing.
Hallucinations have also led to another, more insidious issue.
An AI chatbot cost Air Canada money when it misled a passenger, telling him that he could buy his plane ticket and then apply for a bereavement fare after the funeral. That contradicted Air Canada's official policy of not allowing refunds after travel, but the company lost the case in small claims court and had to pay the refund.
Some other hallucinations have been outright defamatory, such as when ChatGPT falsely claimed that Australian regional mayor Brian Hood was a criminal. He had his lawyer give OpenAI 28 days to clean up the lies or face a defamation lawsuit. OpenAI filtered out the false statements that time.
Some hallucinations have been even more deleterious, and have led to defamation lawsuits against both Microsoft and OpenAI. One is from an author who discovered that Bing search and Bing chat falsely labelled him a convicted terrorist, ruining his reputation and costing him millions in revenue from sales of his book. Elsewhere, a radio host sued OpenAI, alleging that ChatGPT falsely stated he had been charged with embezzlement.
Some AI companies are working on the hallucination issue. Nvidia’s NeMo Guardrails software, for example, aims to prevent chatbots from publishing false statements, but its effectiveness is an open question: it appears to rely on prior knowledge of prompts that generate defamatory responses, which could turn defamation filtering into a game of whack-a-mole.
Other solutions in development aim to stop chatbots from engaging in this kind of overt character assassination, for instance by detecting linguistic patterns common to defamatory statements and filtering them out of chatbot outputs, as in the sketch below. Such filtering, however, still cannot fact-check a statement, which remains the core problem.
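To make that limitation concrete, here is a minimal, purely illustrative Python sketch of pattern-based output filtering. It is not NeMo Guardrails’ real API or any vendor’s actual filter; the patterns and function names are invented for this example, and it deliberately exhibits the weakness just described: it matches phrasing, not truth.

```python
import re

# Invented, illustrative patterns: phrases that pair what looks like a
# person's name with a criminal accusation. A real guardrail would use
# far richer NLP than regular expressions.
DEFAMATION_PATTERNS = [
    re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+ (?:is|was) (?:a )?convicted\b"),
    re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+ (?:has been |was )?charged with\b"),
    re.compile(r"\b[A-Z][a-z]+ [A-Z][a-z]+ (?:is|was) a (?:criminal|fraudster|terrorist)\b"),
]

def filter_output(response: str) -> str:
    """Block a chatbot response that matches a defamation-prone pattern.

    The fundamental weakness: this flags how something is phrased,
    not whether it is true, so it cannot fact-check anything.
    """
    for pattern in DEFAMATION_PATTERNS:
        if pattern.search(response):
            return "I can't make claims about criminal conduct by named individuals."
    return response

print(filter_output("Jane Doe was charged with embezzlement."))  # blocked
print(filter_output("The forecast for Sydney is mild today."))   # passes unchanged
```

A production system would replace the regular expressions with a trained classifier and named-entity recognition, but any filter built along these lines inherits the same flaw: it can recognise that a sentence accuses a named person of a crime, yet has no way of knowing whether the accusation is true.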
The ongoing and likely fallout
While the hallucination-driven defamation issue might be solved with technology, the copyright issue still looms large over the AI industry. The copyright lawsuits facing Nvidia and OpenAI are ongoing, and their outcomes are far from certain. Should the plaintiffs win, fines could run as high as $150,000 per violation, and a ruling could conceivably force OpenAI to rebuild its training dataset from scratch, a costly endeavour.
However, even in the unlikely event that these lawsuits end in total victories for the plaintiffs, the overall impact on the AI industry will be relatively small. The industry is huge, and public-facing generative AIs are only a small part of it. Given how much more computing power is available now, even retraining models from scratch would not take all that long any more. Most likely the outcome will be some fines, fees, and stricter licensing agreements.
These lawsuits do, though, highlight the need for consistent regulation of AI. Politicians are already misusing deepfakes to create fake campaign ads, and since it has become clear how easy it is to deceive the average netizen in the modern disinformation age, the need for regulation becomes more urgent by the day.
That said, the rate of advancement in AI is unprecedented; no other technology in human history has moved at such an astonishing pace, so the odds that any government will be able to keep up are vanishingly small. On top of that, politicians are notoriously clueless when it comes to science and technology.
Lawsuits like these might in fact be the best chance we have of regulating AI.
Voice Engine epilogue…
Andy Stout writes: Rakesh submitted this article late last week. Over the weekend, OpenAI announced that it had developed, but not released, its new Voice Engine model, which can create natural-sounding speech that closely resembles the original speaker from a 15-second clip.
Similar to how it introduced Sora all those weeks ago, it is not releasing the model on the open market, acknowledging the sensitivities of such technology in an election year in the US and elsewhere. The state of New Hampshire passed legislation late last week after an incident in January involving fake Joe Biden robocalls. And while keen to highlight positive applications, such as helping early readers or non-verbal people, even OpenAI says that, now the technology exists, measures such as voice-based authentication for accessing bank accounts and other sensitive information are going to have to be phased out.
It also encourages “Accelerating the development and adoption of techniques for tracking the origin of audiovisual content, so it's always clear when you're interacting with a real person or with an AI,” alongside a raft of other considerations; a rough sketch of what such origin tracking can look like follows below. Whether such a proactive mea culpa will prevent this technology from being added to the mushrooming pile of lawsuits, though, is doubtful.
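To give a flavour of what “tracking the origin of audiovisual content” involves, here is a simplified, hypothetical Python sketch. Real provenance schemes, such as the C2PA Content Credentials standard, use public-key signatures and certificate chains embedded in the media file; the shared-key HMAC manifest below is an invented stand-in that just illustrates the two core checks: who claims to have produced the content, and whether it has been altered since.

```python
import hashlib
import hmac
import json
import time

# Hypothetical signing key, for illustration only. Real provenance schemes
# (e.g. C2PA) use public-key certificates, not a shared secret like this.
SIGNING_KEY = b"demo-key-not-for-production"

def make_manifest(audio_bytes: bytes, generator: str) -> dict:
    """Bind a piece of audio to its declared origin in a signed manifest."""
    manifest = {
        "sha256": hashlib.sha256(audio_bytes).hexdigest(),
        "generator": generator,
        "created": int(time.time()),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(audio_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the audio hasn't been altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and hashlib.sha256(audio_bytes).hexdigest() == claimed["sha256"])

clip = b"\x00\x01\x02"  # stand-in for real audio bytes
manifest = make_manifest(clip, generator="hypothetical-voice-model")
print(verify_manifest(clip, manifest))         # True: origin and content check out
print(verify_manifest(clip + b"!", manifest))  # False: content was altered
```

The cryptography is the easy part; the harder challenge OpenAI alludes to is adoption, since a manifest only helps if generators attach one and platforms actually verify it.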