What happens when AI queries return false allegations of sexual assault, insurrection, and worse? Where does one go to restore a reputation?
Thus far in the nascent era of artificial intelligence, the courts have not offered much sympathy for such claims. Conservative activist Robby Starbuck wants to change that, after getting libeled by outputs from Google AI products such as Bard and Gemini. He filed a defamation lawsuit against Google, seeking at least $15 million in damages:
Starbuck said he became aware of the inaccuracies in 2023 while using Bard, an early Google AI tool. Bard said that Starbuck had ties to Richard Spencer, a once-prominent white nationalist, according to the lawsuit. At the time, Starbuck took to social-media platform X and tagged Google and its CEO in a post about the details:
“Imagine a future where Bard is used to decide whether you get a loan, if you’re approved for adoption,” he asked his hundreds of thousands of followers at the time. The lawsuit says newer Google AI tools produced other falsehoods about him earlier this year, including claims that Starbuck had been accused of sexual assault.
Starbuck actually did have success with an earlier lawsuit involving Meta AI, the WSJ notes. That platform falsely alleged that Starbuck took part in the January 6 riot, prompting a similar action. Meta settled privately out of court, and notably brought Starbuck on as an “adviser,” a role in which he could continue his primary activism as an opponent of corporate DEI policies.
At this early stage, Google appears ready to fight the claim. Its spokesperson told the media that the issue lay largely with the earlier Bard platform, then suggested that the inaccurate output was the result of a contrived prompt:
Inaccurate information is a “well-known issue for all LLMs, which we disclose and work hard to minimize,” said [Alphabet spokesman José] Castañeda, using the acronym for large language model, a type of AI data system that is used to create services such as Bard or ChatGPT. “If you’re creative enough, you can prompt a chatbot to say something misleading.”
Inaccurate information is a well-known issue for tabloids, too. It doesn’t make them immune to defamation or libel actions. The onus for accuracy is on the publisher, not the subscribers or the sources for that material.
Google may have a better shield in Sullivan. Starbuck is undeniably a public figure by his own choice, the kind for whom any defamation or libel claim must clear the hurdle of “actual malice” to win a favorable judgment. That generally means showing the publisher knew the information was false, or acted with a reckless disregard for the truth severe enough to justify a finding of “actual malice.” Is that actually possible with AI?
Actually, Castañeda may have inadvertently made Starbuck’s case:
Starbuck’s suit alleges that Gemma, an open AI model, and Gemini, one of Google’s primary consumer-facing AI systems, asserted earlier this year that Starbuck has been accused of sexual assault and that he had participated in the Jan. 6 riot. Gemma, according to the suit, listed false media links as sources for those claims.
“We will review the complaint when we receive it,” said the Google spokesman. The Gemma model is intended for developers to customize and build, while Gemini is a consumer-facing application, he said.
If the problem of false data was known to Alphabet/Google in 2023 and supposedly addressed at the time, why did its later AI engines generate false allegations against Starbuck as well? That sounds like a pretty good argument for some form of negligence in a “consumer-facing application,” and perhaps enough to get a judge’s attention on damages. If Google knew from Bard that its AI platforms could spit out potentially libelous material and didn’t correct the problem sufficiently in Gemma and Gemini, well, that may be a real problem in court.
Thus far, no one has won such an action against an AI platform, but the issues with AI and its GIGO (garbage in, garbage out) qualities are becoming apparent, especially in the legal industry. Law firms have gotten burned by false case citations that AI invented and lazy attorneys filed, and those attorneys got scorched in court over incompetent and even perjurious presentations. At some point, judges will lose any sympathy for AI platforms in general.
I suspect the likeliest outcome in this case will be that Alphabet hires Starbuck as an adviser, as Meta did, and provides a settlement attached to an NDA. The long-term lesson here will be that AI poses more risks for its operators than just potential copyright violations, and that those risks may require these platforms to slow down and work on quality rather than marketing.