Canadian fiddler sues Google after AI Overview claimed he was a sex offender
Comments
Lockal
rdsnsca
US freedom-of-speech arguments will not work in Canadian courts. Google will have to prove, beyond a reasonable doubt, that what the AI said is true. If they are smart, they will settle out of court.
deeponey
Are they going to try to make a "we're just a platform, don't shoot the messenger" section 230 argument (not sure what the equivalent in Canada is) for the AI overviews they generate? Seems like a bridge too far. Really hopeful the courts will side with Ashley MacIsaac here, and set some sane precedent.
cactacea
There isn't one.
winocm
"AI can make mistakes, so double-check responses."
nerdsniper
FWIW, in Walters v OpenAI, a judge rejected that argument in OpenAI's motion to dismiss [0]. The case was ultimately decided on other grounds, though (namely, that the user knew the statements were a hallucination, so there was no defamation).
> First, Riehl did not and could not reasonably read ChatGPT’s output as defamatory. By its very nature, AI-generated content is probabilistic and not always factual, and there is near universal consensus that responsible use of AI includes fact-checking prompted outputs before using or sharing them. OpenAI clearly and consistently conveys these limitations to its users. Immediately below the text box where users enter prompts, OpenAI warns: “ChatGPT may produce inaccurate information about people, places, or facts.” Before using ChatGPT, users agree that ChatGPT is a tool to generate “draft language,” and that they must verify, revise, and “take ultimate responsibility for the content being published.” And upon logging into ChatGPT, users are again warned “the system may occasionally generate misleading or incorrect information and produce offensive content. It is not intended to give advice.”
Separately, it's broadly correct that there is no Section 230 argument to be made. "Everyone" knows that Section 230 doesn't apply to this. I can't find anyone making any legal arguments that it would.
0: https://storage.courtlistener.com/recap/gov.uscourts.gand.31...
chrisjj
> Google should not have lesser liability because the defamatory statements were published by software that Google created and controls.
Therein lies the rub. Google does not control what its parrot spouts. No-one does.
nerdsniper
For defamatory statements about public figures, "actual malice" is a necessary component of defamation. For private individuals, plaintiffs only have to prove "negligence": that Google didn't act with reasonable care before publishing. It's unclear whether courts would find negligence here, but a decent lawyer would argue something like: "By explicitly stating in their disclaimer that Google knows some of the information they are publishing might be inaccurate, they are actively demonstrating that they did not verify the claims - and therefore willfully acted with reckless disregard for the truth."
This is exactly why Google's public comment on this case from the TFA is:
> "AI Overviews frequently improve to show the most helpful information, and we invest significantly in the quality of responses. When issues arise – like if our features misinterpret web content or miss some context – we use those examples to improve our systems and may take action under our policies."
Google's statement is carefully crafted to make the case that they "act with reasonable care" for legal effect, rather than to win any points in the court of public opinion. Courts have yet to determine what passes the reasonable-care test for negligence with respect to AI output. Google feels they need to make sure that, regardless of anything else that happens in this case, the decision does not find that their publishing was negligent.
digitalPhonix
Even if you accept that (I don't, and neither should the courts), Google controls the next hundred processing/routing/rendering/middleware steps and is fully in control of the content that makes it to the user.
mft_
If Anthropic can implement a regular expression to monitor for user frustration, Google has certainly got the chops to have some sort of heuristic to check for strongly negative statements.
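The kind of heuristic being suggested could be as simple as a keyword screen over generated text before display. A minimal sketch, where every pattern and function name is hypothetical (this is not Google's actual system):

```python
import re

# Illustrative high-risk patterns: serious allegations that should never be
# surfaced about a named person without verification. Purely hypothetical list.
RISK_PATTERNS = [
    r"\b(sex offender|convicted|criminal record)\b",
    r"\b(arrested|charged|sentenced)\b",
]
RISK_RE = re.compile("|".join(RISK_PATTERNS), re.IGNORECASE)

def needs_human_review(overview_text: str) -> bool:
    """Return True if the generated overview makes a high-risk claim
    that should be verified or suppressed before being shown."""
    return RISK_RE.search(overview_text) is not None
```

A real deployment would obviously need far more than a regex, but even this crude gate would have flagged the statement at issue in the article.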
apothegm
Or even have a small model check the output of the larger one.
Doesn’t work with APIs, but then the person/entity integrating the API should have that responsibility.
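The two-model idea above can be sketched as a pipeline where a cheap "guard" model screens the big model's draft before it reaches the user. Both model calls are stubbed here with placeholder functions; the names and returned text are invented for illustration:

```python
def big_model_generate(prompt: str) -> str:
    # Stand-in for the large LLM call.
    return "Ashley MacIsaac is an acclaimed Canadian fiddler."

def guard_model_flags_defamation(text: str) -> bool:
    # Stand-in for a small classifier; a real one would be a
    # fine-tuned model rather than a substring check.
    return "offender" in text.lower()

def answer(prompt: str) -> str:
    # Only release the draft if the guard model clears it.
    draft = big_model_generate(prompt)
    if guard_model_flags_defamation(draft):
        return "[withheld pending verification]"
    return draft
```

The design point is that the guard runs on the output side, so it works regardless of what the big model was prompted with, which is why it can't help when callers hit the raw API directly.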
thrownthatway
That’s one perspective.
It’s wrong.
But it’s definitely a perspective.
BizarroLand
Parents have to pay penalties when their underage children burn down a building.
Companies that get treated with the rights of people should also have the responsibilities of people. Google designed, built, hosted, and promoted their LLM prominently. Logically, it follows that they should be personally and financially responsible for any harms their LLM causes.
chrisjj
Sure they should have the responsibility. Even more so given they don't have control.
grouchomarx
ah well, no worries then
1attice
This is especially troubling from a sociological perspective, as it points to how AIs turn malice into false history.
Ashley MacIsaac made waves in the nineties for being openly gay, and he paid his dues for years. I vividly recall being around a barroom table in the late nineties, listening to this specific slander. We knew it was slander though, because there was no evidence. We had no machine yet to confabulate it.
This is what we anglos do to our men who prefer men. We did it with Wilde, and with Turing, and we did it with MacIsaac, and we are doing it even harder in 2026 than in 1996, because what we called freedom is now called "woke", and what was called dictatorship is now called "freedom".
And you're next, dear reader.
A mix-up with https://en.wikipedia.org/wiki/Al_MacIsaac? I think Ashley MacIsaac will have a hard time proving that Alphabet intended to cause harm and defame him. In practice, absent SEO, Google tends to index only the first sections of Wikipedia articles, and that's all. For example, many people are unlucky enough to share a surname with a well-known serial killer, and it is impossible to outrank that coincidence.