Police relied on Clearview AI and put the wrong person in jail

480 points
a year ago
by danso

Comments


danesparza

Oh, this is simple. Clearview AI needs to get sued for defamation or slander.

In the United States, falsely accusing someone can be considered defamation or slander, depending on the circumstances.

Defamation is a legal term that refers to the act of making false statements about someone that damage their reputation. If the false statements are made in writing, such as in a blog post or social media post, it is called libel. If the false statements are made verbally, it is called slander.

To prove defamation or slander, the person who was falsely accused must demonstrate that the statements were false (he can), that they were published or spoken to a third party (they were -- to the police department), that they caused harm to the person's reputation (he lost a week of work and was put in jail -- not to mention countless articles that mention this fact), and that the person making the false statements acted with actual malice or negligence (they provided a service for money and didn't check their facts). Actual malice means that the person making the false statements knew they were false or acted with reckless disregard for the truth. I'm pretty sure 'reckless disregard for the truth' would be pretty easy to prove in this case -- considering Clearview probably can't say specifically why this person was selected for arrest.

If a person is found guilty of defamation or slander, they may be required to pay damages to the person who was falsely accused to compensate for the harm caused to their reputation. The amount of damages can vary depending on the extent of the harm and the specific circumstances of the case.

a year ago

tantalor

It's going to be really easy for them to weasel out by claiming the match was like "99% confidence" or something, so not actually false per se. This is supported by the facts: "one of the alleged fraudsters looked like Mr. Reid"

They can also claim their technology does not make an accusation; it provides a similarity score that LEO can use in their investigations. This is also supported by the facts: the sheriff’s officer insisted it was a "positive match".

The sheriff's officer actually gives up the game here, revealing they improperly relied on the similarity score to deduce a suspect's guilt, but an actually competent prosecutor would know better. The fault lies with the LEO in this case. Good luck suing them.

a year ago

konschubert

The problem is that people's priors for "is this the same guy?" are not normalised to "we have used an AI to scan the whole country for lookalikes".

a year ago

tantalor

Police already know about this. It's called a "dragnet".

https://en.wikipedia.org/wiki/Dragnet_(policing)

Since the 1950s, such "dragnets" have generally been held to be unconstitutional as unreasonable search and seizure actions.

Related: https://en.wikipedia.org/wiki/Reverse_search_warrant

a year ago

Teever

Is there really that much of a difference, legally speaking, between the statements "I'm pretty sure that tantalor is the zodiac killer", "I'm 99% sure that tantalor is the zodiac killer", and "tantalor is the zodiac killer"?

a year ago

mdp2021

> Is there really that much of a difference

Certainly. If a box outputs "this, M, is the best match I have to A", only a sloppy fool would conclude that "M is A".

If an OCR says that it sees some specific characters, that is not a "certainty obligation" - it is just a "best effort" through reasonable algorithms and imperfect data.

You are obliged to check. The fact that some people - including workers with responsibility - can be confused about those basics shows the urgency of rebuilding "common sense", a civic upgrade to "good sense", in society.

a year ago

[deleted]
a year ago

notfed

I want to say it's silly to blame the AI, or company here, because humans should ultimately be the judge of whether two faces match.

But I must admit, facial recognition instantaneously brings millions of faces onto the suspect list that never would have been considered. Two chances increase: the chance of finding the true suspect and the chance of finding an unfortunate look-alike.

a year ago

soraminazuki

It's silly to let Clearview AI off the hook here. This is a company that's hoarding personal information it scraped from the internet and selling it without consent. It's mainly being used to let law enforcement agencies evade privacy rules and to frame and screw over people, as happened in this instance. This sort of behavior has no place in a society governed by the rule of law.

It's also worth noting that this company has been criticized many times in the past, but nothing has changed.

https://www.nytimes.com/2020/01/18/technology/clearview-priv...

a year ago

mdp2021

> humans should ultimately be the judge

Of course. This is confusion between tool and worker. It is not very far from leaving a screwdriver near the wood planks and expecting to find the cupboard built.

a year ago

darth_avocado

You also want to sue the police department separately for wrongful arrest to keep them accountable and discourage the use of tech like this.

a year ago

bigiain

I would throw a few bucks towards a gofundme to pay for the legal costs for suing _both_ police departments involved here. The one who sought and got an arrest warrant based purely on facial recognition, and the one who picked him up thousands of miles away and refused to even investigate whether the original warrant could possibly apply when the guy they arrested had never even been to the city the warrant claimed he’d stolen in…

a year ago

[deleted]
a year ago

fortran77

You can't. "Qualified Immunity".

It would be nice, of course, if a good Qualified Immunity case hit our current SCOTUS. They may rule differently than past courts, if they're as principled as they purport to be.

a year ago

Clubber

Qualified Immunity only protects the individual. You can still sue the department or city.

a year ago

KennyBlanken

In what universe have they shown themselves to be "principled"? Thomas's behavior, his wife's behavior? The court reversing a lot of prior decisions? The purposeful leak of one of the most important cases in the last decade?

a year ago

fortran77

“As they purport to be”

a year ago

the_why_of_y

As usual, sarcasm doesn't work well on HN.

a year ago

hypersoar

My guess is that some other law or tort would be a better fit, but I'll note that "actual malice" and "negligence" are different standards, with the latter being a lower bar. The former only applies to public figures.

a year ago

maximilianroos

> Oh, this is simple.

If you're commenting on something you don't understand, maybe don't start out with this?

> I'm pretty sure 'reckless disregard for the truth' would be pretty easy to prove in this case -- considering Clearview probably can't say specifically why this person selected for arrest.

Sigh

a year ago

bigiain

> > Oh, this is simple.

> If you're commenting on something you don't understand, maybe don't start out with this?

HN is gonna get pretty damned quiet if techbros can’t immediately mansplain their opinions to each other about things they have no clue about. (“But but but, I work for FAANG! I passed their stupid gatekeeping recruitment quizzes! I’m _clearly_ the smartest guy in the room! Now, let me tell you all about epidemiology and airborne virus transmission. What? No, I studied CompSci not any of those boring soft sciences like medicine or biology. Anyway, what we need to do is…”).

;-)

a year ago

sgtnoodle

Yesterday in a work meeting, I qualified my thoughts on the present subject as the incoherent ramblings of an insane person. Oddly, that just seemed to make my coworkers take my ideas even more seriously.

a year ago

smcin

Excellent. Tell me, were the walls of that meeting room rubber? Did your audience seem... restrained?

a year ago

sgtnoodle

I was sitting in a room all alone, talking to the voices that sometimes come out of the warm inanimate box that I compulsively stare at while pushing complex, ever changing sequences of buttons on it in a futile attempt to find meaning. Why do you ask?

a year ago

smcin

If you mean "phone support manager", just say "phone support manager"... :D

a year ago

pas

For us silent(ish) ignorants, could you please explain what's wrong with the comment you responded to? Thank you!

a year ago

sseagull

It’s not even close to “reckless disregard for the truth”. Which isn’t even the right standard (reckless disregard for the truth/actual malice is the very high bar for defamation of public figures - regular folk have a lower bar).

Reckless disregard for the truth is like still spreading lies, even though people around you are telling you otherwise, or even in the face of obvious reasons it is not true.

Here, they believed a computer program that was promoted as being accurate. An average person would probably have no reason to strongly doubt it, especially if it has been correct before. If they were acting with their best intentions, based on the "facts" they had before them, defamation would be difficult to prove.

a year ago

[deleted]
a year ago

Aunche

By the same standard, a witness who misidentified a suspect would be guilty of defamation as well.

a year ago

pas

If the conditions were so bad that claiming the suspect was so-and-so is absolutely unrealistic, isn't that reckless? E.g., a psychic claiming they saw the suspect from 1000 miles away?

a year ago

fortran77

I'm not sure this strategy will work. See Popehat on defamation or slander

https://popehat.substack.com/p/can-a-tarot-card-reading-be-d...

a year ago

im3w1l

I don't think it's nearly that simple. Do you want the same for other techniques? If someone says in good faith that a fingerprint matches and it turns out to be a false positive, should that be slander? Shoeprints? DNA?

Clearview AI is providing a service in good faith like all these other things. It's up to the police and courts to use that information correctly.

Furthermore, notice that this guy was only arrested, and the evidence bar for arrest is lower than for sentencing. I don't think we can even say the system malfunctioned here. He was arrested because it seemed probable he did it. That's how it's supposed to work.

Then he was released and should be given routine compensation for being arrested and turning out to be innocent.

a year ago

bb88

This is why I don't think slander would work. Clearview AI didn't force the cops to arrest him, or say with 100% accuracy that it was him. CVAI could have an accuracy of 99.9%, and their defense would be "Well, you were just in that 0.1%!"

It's the police that need to do the investigation to make sure the guy arrested is the same person on the video. But then they could say, "Well, CVAI said it was a near certainty -- we just took their word for it! It would have wasted the taxpayers' money to do a more thorough investigation -- after all, there's always some implicit uncertainty in standard police work!"

I think a more interesting question is something like: does Clearview AI fudge their accuracy numbers? Would a true 80% likelihood of you being the perp be more or less slanderous than a 99% likelihood?

a year ago

alixj

That would be a very weak excuse from the police. The same problem occurs even with DNA testing: https://www.stanfordlawreview.org/print/article/the-rule-of-....

a year ago

JohnFen

> He was arrested because it seemed probable he did it. That's how it's supposed to work.

Which is itself a bit of a problem, as having an arrest on your record, even in the absence of a conviction, a trial, or even if you were found innocent, is rather damaging. It adversely impacts your ability to rent housing, for instance, or get certain jobs.

a year ago

PeterisP

This is quite interesting; where I'm from, having an arrest on your police record is something that neither your landlord nor an employer would be able to see. Heck, even if you were convicted, that would generally get removed after a certain number of years for many lesser crimes.

a year ago

sseagull

Maybe officially. But here in the US, arrests (not just convictions) are sometimes reported in the local newspaper ("police blotter"). A search for person + town could find it.

Also there are lots of scummy private companies that collect this info for background checks. Really scummy ones will offer to remove that arrest for a fee.

a year ago

JohnFen

> having an arrest on your police record is something that neither your landlord nor an employer would be able to see.

In the US (or at least in my state), I believe that arrest records and criminal histories are public information.

Interestingly, if you are convicted of a nonviolent crime, it's not unusual for that crime to be expunged if you did everything you were supposed to do (restitution, stay out of trouble, etc.). If that happens, then the conviction isn't in the records at all anymore -- but your arrest record still is.

a year ago

FpUser

This sounds like something invented by Stalin. Is this really how it works in the US? It's totally disgusting. It may well be the same in Canada, judging by what info on the subject is available online. Innocent until proven guilty my ass. How does this survive in a free and democratic society?

a year ago

bilbo0s

May be totally disgusting, but, yes, that's how it works in the US.

a year ago

FpUser

Is there any reason it can't be legally challenged and abolished? Or at least have the record sealed from the prying eyes of companies and other orgs?

a year ago

bilbo0s

Um. Well, approaching the subject delicately, it's one of the tools the system uses to maintain our "societal power hierarchy". Is that the politically correct way to say it?

Put it this way, traditionally in the US, the people who would challenge this state of affairs fall mainly within a certain group that has historically occupied the lower part of our "societal power hierarchy". Conveniently, this sort of "mistake", tends to happen to people belonging to that same group far more often than it happens to others. Which saddles that group with the problems outlined in the comment we're discussing. It also saddles them with the concomitant consequences with respect to their ability to advance in society. (Since they can get neither a good job nor a good place to live after their arrest.) That they were innocent is a nuisance to the system, not really a deterrent to the system. Certainly not a reason to change the system in the eyes of a lot of Americans.

a year ago

FpUser

So you think something like ACLU / whatever other human rights orgs share similar view?

a year ago

bilbo0s

They may or may not, but until they find a majority of supreme court justices to agree with them, (which judges, incidentally, are pulled from the rest of America), it doesn't really matter in our system. That's kind of the way our system works. The people on the bottom, are there for a reason. And there are many, many safeguards in place to keep it that way.

a year ago

FpUser

Ouch. Sounds very depressing.

a year ago

JohnFen

A judge can order everything sealed if you can convince them to. Otherwise, everything is totally legal and no challenge is possible. To fix this requires changing the law.

And this sort of thing is state law, so it would have to be changed in each state.

a year ago

TomK32

> Clearview AI is providing a service in good faith like all these other things. It's up to the police and courts to use that information correctly.

Clearly the AI is more intelligent than the police officers using it and this needs to be addressed in a transparent way to allow for mistakes to be discovered faster. Transparency was absolutely lacking in this case. Heck, everyone, from Clearview staff to the judge and officers involved in the arrest and the victim should have been told "this AI had a 99% match, do you want to challenge it?".

a year ago

sobkas

> I don't think it's nearly that simple. Do you want the same for other techniques? If someone says in good faith that a fingerprint matches and it turns out it was a false positive it should be slander? Shoeprints? DNA?

Every technique you listed has problems that prevent it from deciding in good faith whether there is a match. Sometimes we can work out the probability of a match, but most of the time even that is out of reach; we just don't know.

a year ago

jcranmer

In US defamation law, the only answers you should ever give to "is this defamation?" are "hell no" and "...maybe..." And offhand, this is a case where "hell no" seems to be the correct answer.

You're missing several things here, so let's break it down.

The first, and most important thing, is that a defamatory statement must be a false statement of fact in a nonprivileged context. While "false" is easy to understand (especially given that any nitpicking gets filed under the intent bucket), "statement of fact" is a confusing thing that's basically half the lecture on defamation by itself. But one of the things that expressly isn't such a "statement of fact" is a conclusion based on disclosed facts (even if the logical reasoning used to support the conclusion is fallacious). It's also worth noting that statements in some contexts can never be defamatory--for example, legal pleadings can never be defamatory (but press conferences about legal pleadings can be!).

As we apply that to this situation, I strongly doubt that any statement is actually defamatory. We don't know what the company messaged to the police department, but based on the quotation from the affidavit for the arrest warrant, the message probably was along the lines of "facial ID on this surveillance footage matched this person" (which is a true statement of fact), and any statement that moves to a conclusion that said person committed the crime would fall into the "conclusion based on disclosed fact" non-actionable statement. Furthermore, it's possible (I'd have to look up jurisprudence here) that the statement to the police officer is a context in which nothing can be defamatory.

The second thing to point out here is the requisite intent. As the person in question is not a public figure, the standard here is actually merely negligence--the person making the defamatory statement essentially has to be in a position where they could have discovered that the statement was false with reasonable efforts that they failed to undertake.

While it's not relevant to this case, your notion of reckless disregard is incorrect. In defamation contexts, "reckless disregard" means "entertained serious doubts of the truth of the statement." I can't really envision a plausible scenario where you could show that Clearview AI meets "reckless disregard"; even an email thread where developers talk about the false positive rate being tuned too damn high likely wouldn't qualify since it's not specifically about any individual statement they send out.

The final point to make is that the burden of proof of all of this in US defamation cases is on the plaintiff. Even in the initial complaint, the plaintiff has to allege facts with some degree of specificity (and plausibility) to show that the defendant had the requisite level of intent (be it negligence or actual malice). But before all that, the actual defamatory statement needs to be alleged... and notice that we don't even know what Clearview AI told the police department. This case is very far from a slam dunk, so far that I would be very worried about facing an Anti-SLAPP motion (which would mean having to pay the defendant's attorneys as well as my own should I lose) on filing it were I the plaintiff.

a year ago

sjsinklar

[flagged]

a year ago

Habgdnv

If I write a small program using OpenCV (and maybe Unity), that tracks a pinball across a table for my personal enjoyment at home, and the police decided to utilize this technology to identify suspects, the question arises: should the aggrieved suspects sue me or the police?

I mean, it's not the tool, but the user of that tool that should take responsibility. As I remember, they must be instructed that this is not a silver bullet and may make mistakes.
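As an illustration of how modest such a hobby tracker really is, here is a minimal, hypothetical sketch of the thresholding step it might use, written in plain NumPy for self-containedness (a real build would more likely run OpenCV's cv2.inRange and cv2.findContours over live video; the function name, threshold, and synthetic frame below are all made up):

```python
import numpy as np

def track_ball(frame: np.ndarray, threshold: int = 200):
    """Return the (row, col) centroid of pixels brighter than
    `threshold`, or None if the ball isn't visible in this frame."""
    ys, xs = np.nonzero(frame > threshold)
    if len(ys) == 0:
        return None
    return (ys.mean(), xs.mean())

# Synthetic 100x100 grayscale frame with a bright "ball" around (30, 70)
frame = np.zeros((100, 100), dtype=np.uint8)
frame[28:33, 68:73] = 255
print(track_ball(frame))  # centroid near (30.0, 70.0)
```

The point stands: a tool like this just reports the best-matching blob in the data it is given; deciding what that blob means is entirely on the user.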

a year ago

bigmattystyles

I always wondered the same about credit reporting bureaus.

a year ago

mindslight

A fine example of regulatory capture. The "Fair" Credit Reporting Act 15 USC 1681h (e):

> Except as provided in sections 1681n and 1681o of this title, no consumer may bring any action or proceeding in the nature of defamation, invasion of privacy, or negligence with respect to the reporting of information against any consumer reporting agency, any user of information, or any person who furnishes information to a consumer reporting agency, based on information disclosed pursuant to section 1681g, 1681h, or 1681m of this title, or based on information disclosed by a user of a consumer report to or for a consumer against whom the user has taken adverse action, based in whole or in part on the report except as to false information furnished with malice or willful intent to injure such consumer.

This country desperately needs a GDPR equivalent. One that does not except financial surveillance bureaus, the healthcare industry, or any other quasi-governmental organization that abuses our personal information.

a year ago

bnjms

We need another word for when the capture is beneficial to the government, because it solves a quasi-governmental problem that is easier to handle without oversight.

a year ago

jessaustin

This crap does not benefit any defensible purpose of government. Probably it does benefit certain government employees. The principal-agent problem appears again.

a year ago

smcin

How do you define the principal-agent problem when applied to govt, though? Govt and its departments are not 'owned', and cabinet secretaries and even Presidents are not principals. Who is the 'principal': that particular govt's most powerful donors and lobbyists? Also, govt has both career and political appointees; the latter can change every 4/8 years. So it seems to me there are multiple groups of agents/parties.

So when you say 'does not benefit any defensible purpose of government', is that a statement about political science, rather than about a two-party govt system with lots of lobbying? I mean, it seems like any policy you could concoct would benefit some (possibly small) interest-group of people somewhere, unless it was 100% wasteful.

[0]: https://www.investopedia.com/ask/answers/041315/how-principl...

a year ago

jessaustin

No one who has read your link would then question whether the chief executive is a principal: of course not. In polities that aspire to representative government, agents are anyone in public employ. Principals are the rest of us. Our interests are not served by solving problems without oversight, as GP suggests.

a year ago

smcin

But still unclear which among us is the Principal, in US-style democracy? Principals are only the tiny few among us who control election outcomes, through funding/ influencing/ lobbying or (rarely) affecting the outcome in one of the 8% of House seats that are still competitive [0]. Even if you believe it is all the rest of us, we certainly aren't equally influential principals.

> Our interests are not served by solving problems without oversight, as GP suggests.

Sure, that's a given. But many of us think the oversight is working on our behalf, yet isn't. (Compare e.g. the sham of the TikTok oversight hearings vs the not-very-effective Facebook ones.)

[0]: https://www.politico.com/newsletters/weekly-score/2023/02/27...

a year ago

bigiain

> This country desperately needs a GDPR equivalent.

Hmmm, I wonder how hard it’d be to lawyer up and force Clearview (and OpenAI?) under GDPR (or CCPA?) to remove every piece of personally identifying information about me?

Clearview: “We have a 99.7% certainty match for the Zodiac Killer. The name is [deleted as per GDPR request] and the link to their mugshot photo is 410 Gone. Deleted as per GDPR request”

a year ago

mindslight

Pictures of your face are most certainly personal information, so with proper deletion (or lack of consent in the first place), there should be nothing to match against.

a year ago

bigiain

Good point.

I wonder how low a resolution in facial keypoints you would have to go to have any plausible claim that the data is no longer PII?

“We don’t have any PII as covered by the GDPR. We do however have this list of 15 xy coordinates here, that coincidentally happen to match up with the output of a facial keypoints that might be generated if somebody _else_ had a photo of you and ran it through a keypoints extraction algorithm…”

I’d probably take a day off to do some justice tourism and visit the courtroom the day a lawyer for Clearview tried to argue “My client's product, which they sell as a method of personally identifying individuals, does not store any personally identifying data for individuals it identifies, your honor.”

a year ago

realusername

> I wonder how low a resolution in facial keypoints you would have to go to have any plausible claim that the data is no longer PII?

It doesn't matter how low the matching resolution is; they still need to store the whole image somewhere to show the match, and that is 100% personal information.

a year ago

Aransentin

Bayes' theorem is ruthless. Even if your AI is 99.99% accurate, if you have one true positive and scan everybody, the vast majority of the people you flag will be perfectly innocent. The people deploying the system and the police are ignorant of statistics and base rates, think that it's only 0.01% likely you are innocent, and chuck you in jail because of it.
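A quick back-of-the-envelope sketch of this point, using hypothetical numbers (the population size, the accuracy figure, and the assumption of exactly one true suspect are all made up for illustration):

```python
# Base-rate sketch: a 99.99% "accurate" face matcher scanning everybody.
population = 100_000_000      # people scanned (assumed)
true_positives = 1            # exactly one actual suspect (assumed)
false_positive_rate = 0.0001  # 99.99% accuracy -> 0.01% false flags

# Expected number of innocent people the matcher flags
expected_false_matches = (population - true_positives) * false_positive_rate

# Bayes: P(actual suspect | flagged) = true hits / all hits
posterior = true_positives / (true_positives + expected_false_matches)

print(f"Expected false matches: {expected_false_matches:.0f}")
print(f"P(actual suspect | flagged): {posterior:.6f}")
```

With these numbers the matcher flags roughly ten thousand innocent people alongside the one real suspect, so a given flagged person is overwhelmingly likely to be innocent, despite the impressive-sounding accuracy figure.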

a year ago

sseagull

Ahh, the Base Rate Fallacy - always something to keep in mind. A lot of people fall for that one.

https://en.wikipedia.org/wiki/Base_rate_fallacy

a year ago

TomK32

I do wonder whether the AI stopped after a 99.9% match, or tried the whole database and there were actually a dozen 99.9% and 99.8% matches. That meta information should also be shown to the officers requesting a search.

a year ago

treis

This isn't a technology story. This is a police made up evidence story. They claimed a source told them that this guy did it. Which seems impossible according to the facts the NYTimes laid out.

a year ago

joelfried

There is more to it than that, if he was indeed flagged by a facial recognition pass triggered at a toll booth, matched against some warrant database, with one policeman clicking "good enough" too easily. How deeply have these systems integrated themselves into the day to day of police investigations? Are any digital warrant requests ever denied? What sorts of protections are there to make certain that the person on the other side of the warrant request is actually even a police officer?

A hundred years ago there was a real cost in time and effort and interpersonal relationships to get a judge to sign off on a warrant at a weird hour. Are the reductions of those costs brought about by using technology in this way a net win for society?

The policeman in this actual case did something wrong. Did he, out of a motivation to increase his arrest record, find a random black man across state lines to try and arrest? Did he, out of malice, choose to target this person? Did he, out of laziness, not look too closely at two pictures side-by-side and click a "Request Warrant" button? How easy, exactly, is this mistake to make? That question only makes sense because of the technology in the story.

a year ago

danso

How is a story about a misuse of technology not a "technology story"?

The "evidence" would not exist if Clearview AI (or any similar vendor) did not purport to offer a reliable matching algorithm and expansive dataset. It's not as if the police randomly picked out someone to frame, and then used software to fabricate evidence for the warrant.

a year ago

treis

But they did randomly pick someone out and frame them. That's the story. Not that they picked someone using Clearview AI rather than mug shots, or yearbooks, or driving down the street.

a year ago

danso

> Mr. Calogero went to the store and talked to the owner, who showed him a still from a surveillance camera. He realized that one of the alleged fraudsters looked like Mr. Reid, but the man was heavier...

> “The guy had big arms, and my client doesn’t,” Mr. Calogero said. A Jefferson Parish sheriff’s officer insisted it was a “positive match,” language that made Mr. Calogero believe that facial recognition technology had been used, and he spoke to the New Orleans news outlet NOLA.com about what he believed had happened.

The man's lawyer says he (the client) looks like the suspected thief in surveillance footage. How is that "random"? How would the wrongly suspected man have even been known to the police without the use of Clearview's database and matching algorithm?

a year ago

wahnfrieden

Technology launders abuse from police. See, for instance, breathalyzer technology, which is closed-source, rife with bugs that misclassify, and provides obfuscating cover for piggy action.

a year ago

FpUser

The friggin thing shows positive if one just ate Boston Cream doughnut.

a year ago

petsfed

I know this is outside the scope of the article, but I've seen this a bunch of times when HN commenters claim that a given topic is not relevant for HN. The gist of the claim is basically "$badaction was already illegal, this new tech did not enable $badaction, therefore we don't need to discuss this further". The claim always seems disingenuous, because it ignores a crucial fact: prior to the tech, the friction opposing a bad action was sufficient all by itself to keep the rate of the bad action within acceptable limits. It's a "nothing wrong with nuclear weapons, if sharp rocks are still allowed too" sort of argument.

Fine, yes, this is a police-made-up-evidence story. But it's also a technology-enabled-a-pseudo-scientific-confidence-interval-to-make-the-made-up-evidence-more-convincing story. Dismissing it out of hand really downplays why facial recognition (or algorithmically generated feeds, or the banning of human content moderation, or...) is so fraught. There may well be a solution to the problems appearing, but we're not going to arrive at those solutions without discussing those problems as, well, problems.

a year ago

kayodelycaon

A police made up evidence story wouldn't get as many clicks. A lot of people get arrested for the crime of "driving while black".

a year ago

danso

The subject of this story wasn't arrested for "driving while black". His arrest warrants, based on AI face recognition as a source, were signed on July 18. His car was pulled over in November after an officer learned of the warrants when running his plates.

"People get arrested on false pretenses since the invention of police" feels like an overly reductionist response to stories scrutinizing technology

a year ago

oceanplexian

Officers don't run plates any more, it's all done with ALPR. All the various constitution-violating "tools" over the past few years have been successfully combined to create a fully automated, computerized police state.

a year ago

sidewndr46

I thought it was because the officer "smelled weed". You can't just pull someone over for being a minority, you know, right?

a year ago

voakbasda

Any cop will tell you that they can come up with a "valid" pretext to make a traffic stop, after following a vehicle for a short time. The officer knows that their subjective claims will be treated as factual by the courts.

So, yeah, a bigoted cop can pull over minorities and trivially manufacture some rational reason that will justify their actions. There's always a "reason".

a year ago

bsder

You might want to ask some minorities about that ...

a year ago

ripe

Apart from the facial recognition technology, there's also a second technology that was used and possibly misused. From the article:

The friction of getting a warrant has been eased by technology. The Jefferson Parish Sheriff’s Office uses an “eWarrant” service, CloudGavel, for which it paid $39,800 last year. It’s an app that allows officers to request digital signatures from judges. “Law enforcement officers can now get an arrest warrant approved in minutes,” the company’s website states.

Many civil liberties advocates actually favor electronic warrants; they allow judges to more easily review decisions made by the police and eliminate a complaint from officers that it’s too hard to get a warrant. But advocates said it would be worrisome if judges were simply clicking a button without asking questions or providing sufficient scrutiny.

“There are real questions about whether it increases the incidence of judges rubber-stamping warrants,” said Nathan Freed Wessler, a deputy director with the A.C.L.U.’s Speech, Privacy and Technology Project.

a year ago

alixj

Excellent point. The judge should've known better than to approve this warrant on the basis that they photographed the whole country and found a resemblance 500 miles away.

a year ago

TomK32

But according to the article the judge wasn't told this.

a year ago

[deleted]
a year ago

radicaldreamer

Everyone should opt-out of Clearview AI here: https://www.clearview.ai/privacy-and-requests

Much easier for EU and California residents

a year ago

JohnFen

> This tool will not remove URLs from Clearview which are currently active and public. If there is a public image or web page that you want excluded, then take it down yourself (or ask the webmaster or publisher to take it down). After it is down, submit the link here.

That's not an opt-out. That's only a request that they remove the index to an image that has already been removed from the web. You can't ask them to remove all photos of you, nor will they remove a photo that is still up on the web.

In other words, it's the sort of bullshit that we can expect from an evil company like ClearView.

a year ago

realce

What incredible arrogance; it makes my blood boil. You're supposed to either know what all their sources are and do all of your own work one at a time, or just give them links to the things you most want hidden, an extreme vulnerability.

a year ago

JohnFen

Not only that, but ClearView also incorporated driver's license photos into their database -- and there's literally no way that you can have those removed.

So even the meagre facility they offer here is completely meaningless.

a year ago

radicaldreamer

You need privacy laws like GDPR and California's in your state.

a year ago

JohnFen

Yes, very much so.

a year ago

stevenjgarner

Yes my cousin's pet ferret's website desperately needs the burden of a GDPR-compliant Cookie Consent Notice. It has made European online web presence so burdensome.

a year ago

bluefirebrand

Your cousin's pet ferret's website likely has absolutely no business setting anything related to cookies in the first place.

So.. yeah.

a year ago

jessaustin

Cookies are used for things as simple as nighttime color theme preference. Someone more knowledgeable than either of us will probably point out that GDPR doesn't care about cookies like that...

a year ago

aaronmdjones

Actually I'm betting on the more knowledgeable to point out that browsers signal light/dark theme preferences automatically, without cookies.

https://drafts.csswg.org/mediaqueries-5/#prefers-color-schem...

https://caniuse.com/?search=prefers-color-scheme
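
As a minimal sketch of how that works in script as well as CSS (`preferredTheme` is a hypothetical helper, not a standard API; `window.matchMedia` is the real browser entry point):

```javascript
// Sketch: reading the OS/browser color-scheme preference without cookies.
// Taking the matchMedia function as a parameter lets the same logic be
// exercised outside a browser; in a page you'd pass window.matchMedia.
function preferredTheme(matchMediaFn) {
  const mq = matchMediaFn("(prefers-color-scheme: dark)");
  return mq.matches ? "dark" : "light";
}

// In a browser:
//   const theme = preferredTheme(window.matchMedia.bind(window));
//   document.documentElement.dataset.theme = theme;
```

The point being: the preference arrives with the page load, with nothing stored on the visitor's machine.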

a year ago

jessaustin

Yes, that exists, obviously. However, it's perfectly valid for someone to prefer a particular site to override an OS or browser setting.

a year ago

PeterisP

Yes, GDPR doesn't care about cookies like that, it starts to apply for personally identifiable cookies (e.g. ad tracking which tries to make unique id's) but not for generic "preference=nighttime" ones.
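
For contrast, here's a sketch of the kind of bare preference cookie being described (`preferenceCookie` is a hypothetical helper; the key property is that it carries no unique identifier, which is the distinction that matters):

```javascript
// Sketch: building a plain preference cookie with no unique identifier.
// Uses only standard Date/string APIs; in a browser you would assign the
// returned string to document.cookie.
function preferenceCookie(name, value, days) {
  const expires = new Date(Date.now() + days * 864e5).toUTCString();
  return `${name}=${encodeURIComponent(value)}; Expires=${expires}; Path=/; SameSite=Lax`;
}

// In a browser: document.cookie = preferenceCookie("preference", "nighttime", 365);
```

Compare that to an ad-tracking cookie, whose whole purpose is a value that singles out one visitor across sites.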

a year ago

Dylan16807

How about the ferret website doesn't track people.

a year ago

stevenjgarner

It doesn't track people in the sense of exploiting their web presence, but it does use cookies to give visitors their own preferences to see all kinds of targeted ferret nonsense. The visitors love that. They don't have to dig through the website to find the content they always want to enjoy.

a year ago

tim333

While GDPR has good points, I think the cookie pop up is not one of them. I've probably clicked the ok button on a few hundred now without really reading any or it making a difference to my life apart from annoyance. I wonder if those have helped anyone anywhere?

a year ago

notjulianjaynes

The California opt out asks:

"Are you a California resident or are you authorized to submit this request by a California resident and submitting this request on behalf of that California resident?"

There's no language stating the photo needs to be of a resident of California. So could I just call up my friend in LA and say "hey is it cool if I request Clearview to delete me from their database on your behalf?"

a year ago

giraffe_lady

This is the real risk of AI in the short and medium term. Everyone who fronts like a malicious superintelligent being is the big problem should take a couple steps back and refocus on the harms being caused by it now. And particularly should understand the damage as being a complex combination of human social factors exacerbated by AI, rather than a fundamentally new thing we've never seen before.

a year ago

msla

We certainly don't want biases, like the idea that all short men have a complex:

https://news.ycombinator.com/item?id=35330244

a year ago

giraffe_lady

Having a wholesome & productive evening are we?

a year ago

diebeforei485

This is nuts. Automated image matching results should not be treated with the same priority as detective work. They should not result in warrants automatically being sent out.

a year ago

swalling

The standards of evidence in the justice system are universally pretty weak. From suspect lineups easily manipulated by police to phrenology-level bullshit like bite mark analysis, this is why fair estimates are that up to 10% of convicted criminals are innocent. https://en.wikipedia.org/wiki/Innocence_Project

a year ago

poink

There was a time when I questioned if it was worth all the social consequences to simply not be associated with my real name online.

lol. lmao.

a year ago

bagels

a year ago

can16358p

There should be a HUGE penalty and FULL compensation for falsely accusing and arresting an innocent person.

a year ago

jgaa

If an innocent person is arrested because the police did not thoroughly examine the evidence, then the police officers and the prosecutor should be reviewed and criminally charged if they broke the law. No exceptions.

a year ago

99_00

The cops didn't even look at the suspect and compare him to the surveillance image.

The judge approved the arrest warrant on the basis of AI alone.

And people want to blame Clearview? Why?

Do you actually care about wrongful arrests? Because when you take Clearview out of the picture, you still have these incompetent or overworked human beings cutting corners, arresting people, and having those people take plea deals because they don't have money for lawyers, so the police can make their numbers and get whatever incentive they are chasing.

a year ago

low_tech_love

”Clearview scraped billions of photos from the public web, including social media sites, to create a face-based search engine now used by law enforcement agencies.”

What the *%#€!?

a year ago

js2

Brazil (1985, Terry Gilliam) - Mistake? Haha. We don't make mistakes.

https://youtu.be/wzFmPFLIH5s

a year ago

fwlr

A classic case of “facial recognition technology inherits the morality of its users”. I expect if you investigate the police department that issued the arrest warrant you’d find an awful litany of arrests based on flimsy evidence. Face rec tech allows that shoddy policing to scale dramatically, transcend state borders, etc. Other commenters have mentioned the “five minute online judge” eWarrant tech and it certainly deserves credit for a supporting role.

a year ago

croes

This is more proof that "if you have nothing to hide, you have nothing to fear" is wrong.

a year ago

JamesAdir

The framing of the headline is wrong: the police failed to do their job properly. This isn't related to Clearview or the laptop they write their reports on. Putting the technology aspect in the headline is just to lure in more clicks. Would have expected more from a publication like the NYT.

a year ago

danso

non paywall: https://www.nytimes.com/2023/03/31/technology/facial-recogni...

Excerpt:

> His parents made phone calls, hired lawyers and spent thousands of dollars to figure out why the police thought he was responsible for the crime, eventually discovering it was because Mr. Reid bore a resemblance to a suspect who had been recorded by a surveillance camera. The case eventually fell apart and the warrants were recalled, but only after Mr. Reid spent six days in jail and missed a week of work.

> Mr. Reid’s wrongful arrest appears to be the result of a cascade of technologies — beginning with a bad facial recognition match — that are intended to make policing more effective and efficient but can also make it far too easy to apprehend the wrong person for a crime. None of the technologies are mentioned in official documents, and Mr. Reid was not told exactly why he had been arrested, a typical but troubling practice, according to legal experts and public defenders.

a year ago

lotsofpulp

> Mr. Reid’s wrongful arrest appears to be the result of a cascade of technologies

This statement is clearly false.

As evidenced by

> and Mr. Reid was not told exactly why he had been arrested

Malfeasance is the cause, the level of which should result in prison sentences for the police who made the decision to deprive a person of their freedom, and more, for a week.

a year ago

joe_the_user

Good call,

The article is effectively saying "(accepting as a given that police charge and arrest people on entirely ad-hoc, hence lawless grounds...), Mr. Reid’s wrongful arrest appears to be the result of a cascade of technologies". But naturally we shouldn't let that just go by.

a year ago

than3

I'm sure they would simply say it was from improper training, since most police recruits are never taught the law and are often only dealing with hardened criminals for the first 5 years on the job.

a year ago

joe_the_user

The idea that any police group (experienced, inexperienced, etc.) "are often only dealing with hardened criminals for the first 5 years on-the-job" seems absurd on its face. Police drive around an area and deal with the people and situations that arise there, which is to say they will inherently encounter average people most often. Violent crimes are less common than other crimes virtually anywhere.

Edit: wow, OK, things beyond even me.

a year ago

than3

In many areas, before you can be assigned to roles that interact with the public, you generally have to spend a number of years handling transfers, which often include hardened criminals, for medical, court, and jail/prison purposes.

It might be absurd, but it is common practice.

a year ago

sidewndr46

Are police in the US obligated to tell you why you are being arrested?

a year ago

than3

I'm not sure about that, but they certainly aren't required to provide information needed to validate a warrant is legitimate.

California recently had a gang round-up where the warrants provided as the basis for searches of the properties in question had almost all of the information needed to verify their validity redacted (even the address of the property being searched, at least according to local news).

No one had any idea what was going on or whether the warrants were even legitimate. It seemed like a play out of East Germany's playbook before the wall came down, just short of the Gestapo.

a year ago

sidewndr46

Do you have a source for any reporting about this?

a year ago

jfengel

No, not constitutionally. But some states, including New York, do have laws that say that they must.

Also, if there is a warrant, they generally have to show it to you.

a year ago

joe_the_user

US Police aren't required to say anything when a person is arrested. They can just grab you and hustle you into a car without a word.

When a person is charged with a crime, the police are expected to supply evidence, but they can come up with excuses not to. Once the case goes to trial, all the evidence is supposed to be available to defense attorneys. But since plea bargaining is common, the police may never have to come up with the evidence at all.

Overall, US legal procedure is full of things that are absolute rules for civilians but just sloppy average suggestions for cops.

a year ago

[deleted]
a year ago

barbazoo

Yikes. And there is very little people can do to prevent this until someone makes this their election platform to change the system, right? I'm assuming this is all constitutional.

a year ago

sitkack

How can non-official documents exist inside a government organization? What divides the two?

a year ago

joe_the_user

Bureaucracies operate by standard procedures. Official documents are produced by standard procedures, and they are often available to the public via a standard search; in the case of police, official documents would also be available to judges, prosecutors, and defense attorneys.

Unofficial documents are produced by officials not using standard procedures, in any of a variety of ways (searching a private company's database, in this instance). Generally, doing this is against the regulations of a bureaucracy. But American police view themselves as, and often are treated as, above regulations, even their own regulations. So American police often produce and keep unofficial documents with no consequences.

a year ago

[deleted]
a year ago

jimnotgym

Maybe we should just say that all 'miracle' technology is ok as corroborating evidence, but not as prima facie evidence. DNA, fingerprints, ai facial recognition...

a year ago

petermcneeley

I see; the problem with these giant draconian surveillance systems is that they are not always accurate. Don't worry, I am sure they can solve that problem.

a year ago

SN76477

We need to be skeptical of technology again.

a year ago

hungryforcodes

For a moment I read it as police raided Clearview AI. Sadly this is not the case.

a year ago