Molotov cocktail is hurled at home of Sam Altman

135 points
4 hours ago
by enraged_camel

Comments


strongpigeon

It is a bit scary how people seem to genuinely be OK with violence (see this reddit thread [0]). Is it just me, or does it feel like the overall "temperature" has gone up?

[0] https://www.reddit.com/r/ChatGPT/comments/1shugf8/firebomb_t...

4 hours ago

lazyasciiart

Well, dropping bombs and threatening to end a civilization certainly made me think the temperature had gone up. I’m not sure a single attempted act against some guy is worth worrying about against that backdrop.

an hour ago

dan-robertson

I think much of the reaction to the Brian Thompson killing also seemed OK with the violence, despite it happening before the events you describe, though I guess that could be an outlier.

9 minutes ago

0dayz

Not defending them or even Luigi, but I would argue a lot of it is the abysmal labour institutions the USA has (lots of union busting, few modern laws against modern exploitation, and classical institutions undermined politically and legally).

And the growing class divide in the USA is, I think, the reason why folks increasingly see violence against the upper class as the only option.

Again, that doesn't make it right, but it explains why it is almost only a US phenomenon.

an hour ago

JumpCrisscross

> explains why it is almost only a US phenomenon

Genuine question: is it?

18 minutes ago

sethops1

Certainly not. Nepal's Gen Z literally overthrew their government due to inequality and corruption.

https://carnegieendowment.org/research/2025/09/nepal-gen-z-t...

10 minutes ago

baggy_trough

What do you mean by “or even Luigi”?

9 minutes ago

scoofy

This is exactly the point of part one of Fist Stick Knife Gun: A Personal History of Violence, by Geoffrey Canada. Unequal or lack of access to the executive branch of government will create a culture of vigilantism and lends itself to organized crime as a replacement for the policing arm of the state.

https://en.wikipedia.org/wiki/Fist%2C_Stick%2C_Knife%2C_Gun

People become okay with vigilante justice when they see the executive branch as compromised, just look at the insane plot/ending of the film Singham.

Many people see this happening in the US. We should expect to see more vigilante justice and organized crime if we see the executive branch as having a significant principal-agent problem.

3 hours ago

yfw

We gave up violence and made the state the authority, but that's contingent on the social contract being upheld.

19 minutes ago

sumedh

> just look at the insane plot/ending of the film Singham.

What does that even mean?

7 minutes ago

spaghetdefects

I wonder how much the complete impunity of those involved with Jeffrey Epstein has destroyed faith in the executive branch. People like Leon Black, Les Wexner and a couple of presidents not only escaped justice, but pretty much any scrutiny by any institution, media included. I think it's hard for people to look at that and not think they need to take the law into their own hands.

22 minutes ago

dan-robertson

I’m surely out of the loop here but what crimes are there evidence of Leon Black or Les Wexner having committed?

14 minutes ago

spaghetdefects

Participating in the child sex trafficking ring, although Wexner's involvement goes far deeper.

10 minutes ago

hnthrowaway0315

I'm not saying that violence is legal -- it definitely is not. But it is part of the "packages" and totally depends on whether the one wants to use. Historically violence has been a very...effective tool.

When people feel that law and order do not protect them, some eventually will go "the extra mile" (somehow managers always like this phrase). It's not something we can prevent. It is human nature. I guess the super rich really like AI because it gives them extra protection.

an hour ago

yfw

Seems like its legal if you can pay for it today.

18 minutes ago

Fricken

Of course violence is legal. Laws themselves carry no weight if they aren't backed by a credible threat of violence.

43 minutes ago

krapp

Violence by the state is legal. Violence otherwise tends not to be.

33 minutes ago

yfw

See ICE murders.

17 minutes ago

mmooss

> it is part of the "packages" and totally depends on whether the one wants to use.

Could you explain what packages are and what depends on (what?)?

> Historically violence has been a very...effective tool.

This is dramatic sci-fi for anarchists of all political stripes.

The critical reality to understand is that violence is the most ineffective tool, causing catastrophic harm to others and outcomes that the perpetrators rarely control or foresee. Revolutions can overthrow status quo power, but what follows is rarely what the perpetrators aimed for. The same happens in warfare - the outcome is rarely what anyone envisioned at the start, a fundamental lesson that experts try to teach hot-headed amateurs who think warfare will solve their problems.

It also establishes violence as legitimate - usable by everyone else too, a very bad outcome and the opposite of the rule of law, incompatible with freedom; it elevates violence and destruction over life and liberty. In contrast, the American Revolution was founded on principles of freedom and law, did not embrace violence as desirable, and laid those principles out in the Declaration of Independence.

The most successful societies have freedom, the rule of law, and allow violence only as a last necessity to restore freedom and the rule of law.

an hour ago

jltsiren

The critical reality to understand is that people have always used violence. If they don't believe that they live in a successful society, or if they believe that the success of the society is not distributed fairly (or in a way that benefits them), violence starts looking attractive.

Enlightenment and industrialization created societies that were fairer, wealthier, and more free than anything before. They also created ideologies such as communism and nationalism that killed hundreds of millions. If your ideas are good and successful in the long term but create poverty, suffering, and feelings of unfairness in time scales people care about, there will be violence.

Compromises are the key tool in preventing violence. Unfortunately, the word itself carries negative connotations in too many languages, making effective compromises less likely.

3 minutes ago

hnthrowaway0315

I don't know, but just look at Iran and US. Where is "rule of law"? Who is going to give it magically?

Packages = ways to "adapt" to the challenges of the world.

an hour ago

mmooss

> look at Iran and US. Where is "rule of law"? Who is going to give it magically?

Rule of law - in this case, international law - has governed the Strait of Hormuz and relations between the US and Iran for decades. It's not magical or fantasy at all, but a very well-established and effective mechanism that has been the foundation of the most peaceful world arguably in human history. There is no valid argument that it doesn't work (saying it hasn't worked 100% of the time is not valid).

The Trump administration explicitly aims to destroy that rule of law. I think that's why they attacked Venezuela, Iran, civilian boats, etc. Stephen Miller advocates that power, not law, rules.

You can see the outcome when international law was used, and the outcome when it is intentionally destroyed: look simply at the Strait, which had free navigation under international law despite the extreme enmity between Iran and the US and its Mideast allies.

And now, with international law under assault, free navigation has ended. To be clear, I don't only mean the US's and Israel's attack: Developing nuclear weapons would also violate international law, and maybe so does developing highly enriched fissile materials (e.g., uranium). I'm not sure about sponsoring insurgent proxies in other countries, but that has long been practiced by many countries, including the US and many in NATO.

The rule of law allows societies to function. We don't want the world or our communities to function like failed states - those people are poor, starving, and brutally oppressed.

26 minutes ago

spaghetdefects

> The Trump administration explicitly aims to destroy that rule of law.

It's not just Trump. Trump and Biden both shredded the rule of law for Israel. I think both parties being captured by a genocidal foreign government has caused mass disillusionment with the ability of the US to act within any framework that brings justice.

18 minutes ago

cogogo

This is some impressive bothsidesism. Especially in the current context of an actual war.

a minute ago

jyounker

A lot of people in the US feel like they've already tried the nice way, and it's failed. Given the increasing wealth disparity between the haves and the have-nots, it's hard to argue otherwise.

21 minutes ago

calcifer

> In contrast, the American Revolution was founded on principles of freedom and law [...] did not embrace violence as desirable

That's pretty rich, since the United States only exists thanks to systemic, deliberate violence on a mass scale against the local population.

19 minutes ago

bdangubic

and has continued to this day with violence against non-local populations around the world

18 minutes ago

tptacek

These are message boards. The obvious sentiment, that firebombing attacks are awful (perhaps cut a little bit with "the perpetrator appears to be someone deeply in need of help"), is boring. This is an availability bias issue: the only sentiments that actually spool out into threads are edgy. Once you learn to spot these effects, message boards make a lot more sense and are less jarring.

an hour ago

frinxor

And the same applies to HN? Edgy messages make it to the top, and the reader should learn to react accordingly (in what way?)

24 minutes ago

tptacek

Mostly just by not being emotionally destabilized by edgy comments, is all.

18 minutes ago

givemeethekeys

Silent corruption at the top causes rot at the bottom. Obvious corruption at the top causes desperation at the bottom.

39 minutes ago

layer8

It used to be a little less violent: https://www.youtube.com/watch?v=HEMbp6Epfz8

3 hours ago

yfw

What do you call denying healthcare?

21 minutes ago

hungryhobbit

Crazy people have existed since the dawn of time: I see nothing at all new here about a crazy person doing something crazy.

35 minutes ago

jyounker

It's due to the widening inequality. Nick Hanauer has been talking about this for over ten years: https://www.youtube.com/watch?v=q2gO4DKVpa8

29 minutes ago

AlexCoventry

I don't condone violence, but it's hardly surprising that people would resort to or support it in this case, considering that by stepping in where Anthropic refused to help the US military, sama essentially agreed that OpenAI will serve as the IT Department for Trump's secret police. Either that, or he's willing for OpenAI to endure a similar punishment when he refuses the inevitable demand to assist with domestic mass surveillance.

44 minutes ago

danny_codes

GINI index in SF is pretty close to Brazil.

As income/wealth inequality grows expect class violence to grow until there is a revolution. We let rich people get too rich and this is the consequence.

Sam has lost, say, $100B so far, and he is compensated by already being a billionaire. You can see how this might lead to disillusionment with the system.

32 minutes ago

oatmeal1

People are okay with violence when democratic means (if first past the post even counts) do not solve their problems.

an hour ago

bdangubic

people are never OK with violence against human beings.

19 minutes ago

DrProtic

Yet we live in a very violent world, some people are definitely ok with it.

Or I missed a hint and you’re dehumanizing them?

12 minutes ago

testing22321

The top comment there mentions the French Revolution.

You think people will put up with wildly accelerating inequality forever?

It’s going to explode, the only question is when.

an hour ago

paulddraper

The wild part is that that's the r/ChatGPT sub, which is very AI-forward.

25 minutes ago

lo_zamoyski

It absolutely has. Both the Left and the Right have seared consciences and take no issue with murder and thuggishness as long as it's "their guy" doing it to "the other guy".

The world was never a wise and virtuous man's paradise, but it has been quickly sliding into ever increasing and monstrous irrationality. Give Plato's "Republic" a read and you might find it concerning how closely we exemplify the last stages of political and social decline.

27 minutes ago

mghackerlady

People are apathetic at this point. When a large number of Americans can barely afford to live while being threatened with replacement, while the economy booms on the backs of their claimed obsolescence, they don't care that a billionaire could've gotten hurt, especially when that billionaire is working against their interests.

3 hours ago

strongpigeon

I mean, it's also scary because I don't think it works. People should demand a new deal and lobby for that. Throwing molotovs doesn't help with that.

3 hours ago

eschaton

What happens when lobbying for a new deal fails? Do the people just shrug and accept the fate their feudal lords have determined for them?

3 hours ago

nxm

and what happens when people don't want a new deal? Violence is ok then?

an hour ago

lazyasciiart

That's what the Pinkertons were for, yes.

an hour ago

pixel_popping

It clearly did open a discourse on HN at least :)

3 hours ago

stackghost

>I mean, it's also scary because I don't think it works. People should demand a new deal and lobby for that.

The data has conclusively proven that moneyed interests prevail over the interests of the people. Every single time.

27 minutes ago

yoyohello13

> People should demand a new deal and lobby for that.

Lol, really? You think there is any chance of that happening in this current political climate? Any whisper at all of rights for workers is immediately shot down as Godless Communist rhetoric.

30 minutes ago

ZeroGravitas

He switched to supporting Trump after Trump repeatedly joked about someone breaking into a San Francisco home to attack the owners with a hammer.

So the temperature has been high for a while and he's on board with it.

2 hours ago

yoyohello13

Get ready for more. If the tech bros are right and millions of people lose their jobs and healthcare, we are in for a rough couple of decades. Millions of angry people, with nothing to lose and a bunch of free time, all with one name in their heads: Sam Altman. He better start working on his robot army.

38 minutes ago

therobots927

It is scary. You know what’s also scary? Being told a robot is going to take your job and healthcare away.

There’s a lot of scary shit going on.

4 hours ago

happytoexplain

Also scary: Seeing a comment this ostensibly un-controversial in grey.

3 hours ago

tptacek

There's nothing "un-controversial" about trying to mitigate a firebombing attack with a broad critique of capitalism. It's an edgy take, just own it.

an hour ago

pixel_popping

I agree it is scary, but why would a robot take healthcare away? Wouldn't that be the contrary?

4 hours ago

WBrentWilliams

The quickest way to rile up an existing mob is to make them fear their livelihood is being reduced or removed. The _robot_ is not taking away healthcare, but the effect of the robot existing hits directly at the livelihood of the masses.

In the US, health insurance is largely tied to employment. Health insurance, in a personal economic sense, reduces to being able to pay for healthcare. This policy is largely a left-over of World War II era employment policies. No one is taking healthcare _away_ from anyone (strictly speaking), but the ability to be able to _pay_ for healthcare is reduced to zero when employment ceases. Accessing the safety net is a separate skillset. This skill set becomes more difficult to achieve because the political class does not want to provide healthcare for everyone, only the worthy (their loyal voters).

I grew up in and am still a member of the precariat. I am educated and doing well, but I wear a well-polished pair of golden handcuffs due to how my ability to afford healthcare for myself, and my family, is tied to employment. Politically, I _do not_ like being tied to my employer by such a chain, but my arguments to change the system have been met with quite firm push-back.

3 hours ago

stvltvs

Insurance companies are using AI (whatever that means in this case) to make coverage denial decisions. That can be reasonably summarized as robots are taking away our healthcare.

3 hours ago

whimblepop

Link, please? I 100% believe this but I'm curious about the reporting by which you discovered this

2 hours ago

daveguy

Google this and take your pick:

ai decisions health insurance

Also, to be clear, I don't think violence is the way to confront the oligarch sociopaths. There is clearly enough momentum to fix a lot of the monopoly / anti-consumer issues over the next 4-8 years. Assuming Trumpty Dumpty doesn't try to put our military at polling places or some other anti-democracy putinesque bullshit like that.

an hour ago

ironman1478

There are stories about insurance companies using AI when determining if a claim should be let through or denied.

https://www.palmbeachpost.com/story/news/healthcare/2026/03/...

4 hours ago

kube-system

That is scary but the methods traditionally used to deny claims aren't really any better. I've had claims denied after they were explicitly pre-approved because of string literals not matching exactly.

an hour ago

pesus

It at the very least provides more cover to the ones denying the claims. They can blame it on AI in the hopes they're not the next one being targeted by vigilantes.

22 minutes ago

ChoGGi

My aunt worked for an insurance company while she was semi-retiring as a doc, she lasted a few months before she was too disgusted to continue.

AI isn't needed for insurance to fuck anyone over.

31 minutes ago

whimblepop

Because healthcare in the US is tied to employment. For most people here, losing a job means losing access to healthcare (partially or totally).

4 hours ago

cryptonym

Because the robot would take their job and having a job is a precondition to healthcare (may vary by country)?

3 hours ago

anematode

As far as I know, the US is the only country like this. But anti-AI sentiment is rising around the world.

30 minutes ago

sophacles

Well in the US you get healthcare from a job (either directly in the form of insurance or indirectly in the form the money to pay for healthcare). If the robot takes your job, it takes your healthcare too.

You know this, stop pretending otherwise.

3 hours ago

therobots927

1. Americans need a job to get healthcare

2. Robots take away jobs from Americans and the proceeds to go the owner (investor) class

3. Americans no longer have healthcare

Understand?

4 hours ago

pixel_popping

I understand (I'm not from the US), however, wouldn't healthcare in the US get drastically cheaper (even eventually free?) if hospitals/clinics were staffed by humanoids instead of humans?

3 hours ago

lazyasciiart

That’s the logic Keynes used to suggest that we’d all be working 15 hour weeks by now, with computers doing all the work.

Needless to say, we have discovered that productivity gains are not consistently converted into reduced costs and work hours.

an hour ago

WBrentWilliams

Interesting idea. I cannot say that I can answer affirmatively or negatively. There are also human elements to be considered. Humans are status-seeking social creatures. There will always be a stigma attached to humanoid-delivered care, no matter how high-quality, as not being as good as all-human care. That is, status accounts for a lot.

I can also draw pictures of how dangerous humanoid care can be, as there is the possibility of a break in the chain of responsibility. If a human medical professional messes up, you (or your survivors) can sue and seek damages directly, as well as sue the hospital and insurance system (with mixed results).

With humanoids? Currently, the bar is higher, as the entity being sued is not the hospital, nor a person, or even a team. The only entities that can be addressed are the corporation that runs the hospital and the corporation that produced the humanoid. These two entities have an incredibly outsized advantage in terms of sheer delaying tactics, not to mention arbitration clauses and other legal innovations. Most of the injured will simply give up, which is a legal win for the two entities.

In my opinion, humanoid care will take a large amount of time, damage, and treasure to lower the costs. No actor will willingly give up their cash flow. My view may be too strong.

3 hours ago

threecheese

This is definitely a potential future state, but not one I could imagine happening soon. Given that the robots which are currently deployed do not benefit people directly (and even the indirect benefit of lower costs or better investment returns appears to be captured by the upper tiers of the economy), we have no confidence that they would be deployed to benefit anyone but their owners.

More likely near-term states are less rosy, given intelligence takes off.

2 hours ago

GOD_Over_Djinn

No, they wouldn’t get cheaper. The profit margins in the healthcare industry would get bigger.

31 minutes ago

fatbird

The price is set by how much providers can extract, not by their costs to provide. It's not at all obvious that a vast reduction in their cost of labour would translate to price reductions.

It's worth keeping in mind that in the U.S. the health marketplace is extremely complicated and cannot be analyzed with simple demand/supply graphs.

2 hours ago

wak90

Lol no

3 hours ago

misiti3780

The narrative I'm hearing is that AI breakthroughs will drive the cost of healthcare to zero (i.e. Alphafold etc)

3 hours ago

JumpCrisscross

It’s a distinct minority. They’re convinced they’re the majority because everyone they talk to is in the same bubble, especially online. I saw the same thing with Mangione and Kirk and Pelosi.

an hour ago

pesus

Do you spend much time with people not in the tech world? I think you'd be surprised how many people hold similar sentiments, even if not to such an extreme, especially once you talk to people in the real world. I've heard far more support for this sort of thing in real life than I have online due to fear of repercussions.

Hell, even the president regularly calls for and promotes violence, so I don't think it's that much of a minority. The US was founded on it, after all.

30 minutes ago

JumpCrisscross

> Do you spend much time with people not in the tech world?

Most of it. Across the political spectrum.

> even if not to such an extreme

That’s precisely the point. There is a massive difference between doing or aiding and abetting such behavior, cheering it on, and giving in to the impulse of “couldn’t have happened to a worse person” before self-correcting. There are a few saints who reject the violence at first glance. But most people are in that self-correcting phase, and the correction happens the more they learn about the specifics of the assault.

> even the president regularly calls for and promotes violence

To what numerical end?

27 minutes ago

kube-system

What I think is different today is -- regardless of how many people organically think this way -- social media is normalizing the idea. We're all being exposed to it.

It's only a minority of people who are radicalized, but it's a growing minority. Radical ideas are more accessible than ever for people to latch on to.

Radical views on violence, social relations, science, politics, distrust of institutions, etc are all way more common than they were in the 90s.

an hour ago

JumpCrisscross

> but it's a growing minority

I’d want to see this interrogated with rigor. The alternate hypothesis, and my null, is a relatively fixed fraction of folks is more connected and visible today than before.

29 minutes ago

newspaper1

How about the 190 school girls the US murdered in the very first attack against Iran?

an hour ago

JumpCrisscross

Yeah, the number of people connecting a potential war crime in a military operation to Sam Altman’s San Francisco residence with violent intent is slim.

23 minutes ago

newspaper1

I’m not saying this was due to war crimes. I’m saying war crimes blew the Overton window for violence wide open.

15 minutes ago

2dfs

I think you're misreading it entirely; doesn't surprise me given that you're a VC.

Here's one of the posts on that thread: "I mean one thing is to use AI or even ChatGPT as a product, and another is being aware of how billionaires treat the rest of the people

As for Sam, he also has pretty controversial views for how this whole thing will pan out and how he doesn't give a shit about the consequences it might have for the rest of us. Also more recently, the whole Pentagon contract thing"

People can use LLMs while having a distasteful view of the leaders of the industry.

an hour ago

JumpCrisscross

> whilst having a distasteful view of the leaders of the industry

I have a tremendously distasteful view of a lot of Silicon Valley leadership. Doesn’t mean I want them to suffer at the hands of vigilante justice.

30 minutes ago

newspaper1

After watching children literally be liquefied in Gaza for two years, violence directed at Sam Altman doesn’t even move the needle. Our entire human rights framework was obliterated by Israel (with the blessing and support of the US and Europe).

an hour ago

schainks

People are coming to the logical conclusions that:

- Some if not many jobs are at risk.

- AI Psychosis is actively tearing apart families and communities, after social media and opioids have already had a pass.

- Negative social outcomes are in the service of _making money_. Not money to pay taxes to fund a healthy society, but money for the people running these systems.

Humans that lack community, safety, and purpose will embrace more drastic means of exerting control over their lives at the expense of others, no?

It is probably safe to say the temperature has been firmly up for a while. And certain subsets of the population have come to trust their Dear Leader's embrace of violence as a solution, for sure.

an hour ago

whatever1

Jobs were already lost because of AI capital investments. None of the hyperscalers had the cash flow to support the target investment levels and had to reduce labor.

an hour ago

sophacles

You're just a smidge away from asking why they can't just eat cake...

3 hours ago

strongpigeon

I think you're extrapolating a lot from my comment... One can reasonably think something has to be done to address the current (and upcoming) economic situation and think that molotov cocktails won't help. Acts like these will likely make things much worse before settling into a new situation that's probably just slightly worse.

3 hours ago

sophacles

Wondering why people might want to resist their lives becoming worse at all just so some assholes can gloat about how much richer they became is literally the same as asking why they can't just eat cake.

Thinking something should be done means nothing is being done. The poor in France didn't start with bread riots. They begged and pleaded and asked nicely first, and while lots of people thought something should be done to help them, nothing was.

Thank you for getting over the line.

3 hours ago

kbelder

>...is literally the same as asking why they can't just eat cake.

You are unequivocally wrong. You probably mean 'similar' instead of 'literally the same'.

31 minutes ago

bloppe

Maybe this is a silly question, but why can't they just eat cake?

an hour ago

lazyasciiart

If you’re genuinely wondering; it’s because cake is not a nutritionally complete food and will also not cure cancer.

an hour ago

bloppe

I'm pretty sure it's in the cancer-curing section of the new food pyramid

an hour ago

strongpigeon

Being worried that people choose to channel their energy into actions that undoubtedly make their situation worse, rather than into ones that have a chance of finding a solution, is not the same. Or I guess it depends on how you decide to view things as being "literally the same".

3 hours ago

sophacles

Worry is not an action that makes something better.

People will take actions when the threat is against their livelihood, health and homes, particularly when there is no action being taken on their behalf. Their risk assessment may be different than yours.

3 hours ago

MiguelX413

They don't really have another choice, do they?

3 hours ago

GOD_Over_Djinn

The legal system is owned from top to bottom by the ruling class. You will not be able to use it to loosen their death grip on society. They will not allow it.

36 minutes ago

malfist

And as if owning the legal system weren't enough, they've also set up a shadow legal system, called arbitration, where they have even more control.

31 minutes ago

ChoGGi

I have some lovely brioche if you'd prefer.

37 minutes ago

rkomorn

It is the more suitable replacement for bread, after all.

Too bad she never said it, though.

35 minutes ago

nothinkjustai

I don’t think it’s surprising - some people already consider the actions of AI execs and tech companies to be synonymous with violence. Like, comparing something like this to destroying the livelihoods of millions of people, a lot of people would consider the latter far worse.

Temperature is certainly going up, but it definitely hasn’t reached historic levels yet lol.

4 hours ago

_bohm

Structural violence is the term most commonly used for this

https://en.wikipedia.org/wiki/Structural_violence

3 hours ago

closeparen

I do not think that marketing products and services that do useful work is “violence.”

an hour ago

hungryhobbit

Illegally mass-surveilling Americans, and mass-murdering people in other countries, is "useful work"?

Because Anthropic just lost their US government contract (AND got slapped with a completely false order that prevents them from working with any government agency) because they wouldn't do the above ... and then OpenAI slid right in and said "yeah, we can do that".

16 minutes ago

malfist

Useful work like selecting an all girls school in Iran for triple taps?

Useful work like generating mountains of deepfake misinformation?

30 minutes ago

gravisultra

Here's the head of research at OpenAI saying "MORE. Don't stop." to the genocide of Palestinians. He still works there.

https://x.com/QudsNen/status/1806729161840476598

an hour ago

GOD_Over_Djinn

We can’t vote our way towards a better future. The corrupt MAGA and DNC institutions strangle any nascent grassroots movement in the crib. And we cannot make them relinquish their death grip on our country with only bare hands.

Seriously shocked that this is the aspect of this moment in history that you choose to focus on, and not the absurd levels of violence perpetrated by the ruling classes against common people.

44 minutes ago

Analemma_

Altman keeps on telling people he’s going to take away their jobs. He says that because it gets cred in tech circles, but in America this is an existential threat, not much different from telling someone “I’m going to break your kneecaps”. Of course some subset of people are going to respond with violence.

The sheer tone-deafness of AI marketing is going to come back to bite us very hard. This is probably just the beginning.

4 hours ago

2dfs

Yep. Just wait until a large group of people (talking millions of people at once) lose their jobs. They will want someone to blame.

And I have no sympathy because this joker has been pushing people to the edge with his hyping.

an hour ago

xienze

Yeah, part of me thinks the reason we know all their claims are bullshit is because you'd have to be pretty dense to think you could promise to eliminate >50% of jobs in many high-value sectors within 12-18 months and _not_ expect to create more than a few people who'd have nothing to lose…

an hour ago

outside1234

There was a rumor going around Silicon Valley that if ICE came to San Francisco in force that Mark Zuckerberg's house was going to go up in flames in retaliation. You will be surprised to learn that the oligarchs talked to Trump and they did not come.

42 minutes ago

jmyeet

I'm not saying throwing a Molotov cocktail is ok. It's not. I think most people are analyzing the incident as being indicative of the times we're living in, particularly with the warehouse fire.

But where people are "OK with violence" is with state violence.

State violence includes police violence (>1000 people are killed every year in the US by police), prison violence, violently rounding up immigrants and putting them in concentration camps, criminalizing homelessness, denying people life-saving medical care, evictions while landlords collude to raise rents, genocide, sending random people to a maximum security prison in a foreign country (ie CECOT), mass shootings, going with a firearm to a protest to instigate an incident and get a legal kill, intentionally creating the opioid crisis, and so on.

For a large number of people some or all of these incidents will get a reaction somewhere between "thoughts and prayers" and "no, it's good actually".

Compare the state's reaction to one healthcare CEO being murdered and the perpetrators that are implicated in the Epstein files. Epstein himself was known to authorities since the 1990s and got an absolutely sweetheart deal in 2008.

So I'd say the real problem is what people view as violence and who's allowed to do it, seemingly without oversight or consequences of any kind most or all of the time.

an hour ago

cyanydeez

uh, the president of the united states just threatened to nuke a country.

What kind of weird world are you living under...

34 minutes ago

plorkyeran

AI company marketing is pretty overwhelmingly "we're going to take away your job and leave you to starve on the streets". People concluding that the public face of this is their enemy who must be stopped is just a really unsurprising outcome.

3 hours ago

rvz

That is what Ilya (and many other employees) (fore)saw.

They did not want a target painted on their backs or being involved with the company responsible for mass job displacement.

Let's hope that SF doesn't turn into a free-for-all after the IPOs, since the silliest thing is for everyone to move to SF and buy up the houses and then the have-nots realise who got rich.

I'd donate that money away or give the employees (who have nothing) a one-time bonus / raise like the Five Guys owner [0] to not be a target.

[0] https://www.theguardian.com/us-news/2026/mar/27/five-guys-ce...

3 hours ago

outside1234

I don't condone it, but I understand the anger.

The billionaire class has enabled armed masked police in our streets and endless layoffs, pays effectively nothing in taxes at any reasonable percentage, and has basically rigged politics with Citizens United.

Given that, I can see how people are resorting to 18th century French tactics.

an hour ago

seanlinehan

The top 1% of income earners pay 40% of all the federal taxes collected. The top 25% pay 89% of taxes.

Net of transfers, 60% of households receive more from government transfers than they pay in taxes.

The idea that rich people don't pay taxes is just not correct. The entire system is basically rich people subsidizing everybody else through byzantine distributional systems.

34 minutes ago

lokar

There is no ability to accumulate and hold wealth without a stable society. That means broad rights, democracy and limits to inequality.

Stop acting as if taxation is theft; it's the fee that allows everything else to function.

4 minutes ago

hn_acc1

The top 1% also owns something like 70% of all the wealth, IIRC. They should be paying MORE than 40% of all the taxes.

7 minutes ago

watwut

What is happening is that they are becoming richer and the lower ranks are becoming poorer. Simply put, they are so much richer that the little fraction they pay in taxes looks big.

2 minutes ago

danny_codes

The Gini coefficient is still going up. That means we are getting less equal over time. The entire system is subsidized by the rich because nobody else has any money! By definition rich people have to pay.

If we have a pool of $100 and I take $99 and you get $1, and then I get taxed $5 and you get taxed $0, I still have almost everything. Is this.. unfair to me?

It's in fact the opposite of what you said: everyone else is subsidizing the rich, who have gamed the system to live extravagant lifestyles. Eventually this will lead to a revolution and all us rich people will be beheaded. It's the normal outcome of this sort of thing.

27 minutes ago

DrProtic

Maybe because people got used to violence being used against them?

All this violence against the innocent in various places and levels, and you think it’s weird that people are fine with violence used against a billionaire conman?

an hour ago

gorgoiler

Flip it round: if you have $999,999,999 then would it not be rational to expect random violence against oneself? I’m not saying it’s justifiable, just that it is prudent to expect to be targeted by crazies.

Flip it again: as a crazy, isn’t it reasonable to enact violence against Johnny Nine Nines? If he’s so innocent, how come his house is behind two security fences?

To be a little more reductive: my house is made of gold bricks so I hired an extra-legal anti-marauder militia, but now the marauders see me as a fair fight because I chose extra-legal militia instead of cops and judges… game on and QED.

an hour ago

glitchc

While reprehensible, are we certain this is not a false flag operation? It is apt to garner a great deal of sympathy in the right circles.

5 minutes ago

0cf8612b2e1e

One thing I have idly wondered is how much do the ultra rich protect themselves from theft or kidnapping. Is it just not a real concern?

If Taylor Swift owns a dozen homes, does she have full time security guards at each one? Or just accept some amount of burglary may occur? Do they go everywhere with a guard? Only to public events?

4 hours ago

bombcar

It varies and they don't talk about it (obviously) but you can glean things from various sources. The more "public" the ultra rich are, the more they'll have security, especially noticeable security.

The silent or unknown ones will often still have something (usually a requirement of their or their company's insurance).

Once you graduate from "2, 3, 5 houses" to "mansions" you will have staff at each one, even if relatively bare-bones.

3 hours ago

2dfs

Yeah, but they're useless if a large organised group shows up.

an hour ago

sleepybrett

hell they will probably join the mob instantly.

42 minutes ago

strongpigeon

I once knew a guy who used to be head of physical security for Bill Gates. Gates has bodyguards with him all the time and a sizable security team at his home in Medina. You wouldn't believe the number of lunatics who show up at his home unannounced and claim he promised them money (or that they're somehow related to him).

3 hours ago

lamasery

Well look they forwarded his email ten times as requested so it seems pretty clear that he does owe them money.

an hour ago

sleepybrett

i once did a little project for the home in medina, i never went on site but i did visit the office of his property management company. Dozens of people for managing the properties and on-site staff for each as well as, i think, bgc3 but not the b&mgf.

To hear tell from my coworkers that did go on site the security was insane, the media apparatus was insane (like a dvr for every channel running 24x7 so the family could call up whatever, wherever they were at any time). This is back in like 2010ish, before the marriage blew up.

34 minutes ago

hnthrowaway0315

For a start, they have bodyguards and rarely go out in public without the right protection. They also go to great lengths on security and cybersecurity (I know one who sets up so many hops between endpoints that Microsoft banned his account). Even most of their employees don't know where they are or where they plan to be, unless they choose to share that. Of course there's always a way to probe, but people who kill at random rarely have the skills or mindset to do so.

an hour ago

ciupicri

> accept some amount of burglary may occur?

From https://edition.cnn.com/2025/05/13/entertainment/kim-kardash...

> Kim Kardashian, testifying in the trial of the burglars accused of tying her up and robbing her at gunpoint nearly nine years ago, told a Paris court on Tuesday that she “absolutely thought” her assailants would kill her.

> “I have babies, I have to make it home, I have babies,” Kardashian recalled pleading with the armed men, who had broken into her hotel room while she slept during Paris Fashion Week in 2016.

> Facing her alleged attackers for the first time since the heist, the billionaire reality TV star detailed how she was robbed of nearly $10 million in cash and jewelry, including a $4 million engagement ring – gifted to her by her then-husband Kanye West – that was never recovered.

3 hours ago

MontyCarloHall

I don't think most people in tech are quite aware of the level of visceral AI hatred amongst non-techies. I've personally witnessed the worst Thanksgiving dinnertable fight I've ever seen (after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash), and a divorce (a very solid marriage between two people who were once both staunchly anti-AI unraveled within weeks after one of them changed their tune and adopted AI at work).

4 hours ago

tptacek

I operate in at least one social circle that is heavily not-technical (local politics) and I do not see this at all.

an hour ago

kube-system

My experience is somewhat in the middle -- I see educated non-technical people who are strongly against AI because they see it as polluting, "wasting water", and harmful to society. Although many use it anyway.

I could totally believe uneducated or less well-adjusted people reacting in the above way, though.

30 minutes ago

pesus

People in politics aren't that dissimilar to tech bros (especially AI ones) in terms of world view.

17 minutes ago

tptacek

People in "local politics" are random neighbors, almost none of whom are "in politics" in the colloquial sense.

16 minutes ago

pesus

Fair enough, but I still think it at least somewhat applies to people who are willing to get involved in any kind of political process beyond the very basics or perhaps some special interest groups.

2 minutes ago

lbarrow

Spitting your food out because the AI generated the recipe is so clearly irrational that I chuckled a bit on reading that

4 hours ago

dirkc

People talk about AI getting things wrong all the time, why is it "so clearly irrational" to be doubtful of a recipe that might include ingredients that can make you sick?

4 hours ago

VectorLock

Because I hope that someone whose hands were required to assemble the recipe didn't blindly add ingredients like "bleach" if the AI happened to hallucinate them.

4 hours ago

stvltvs

A naive hope perhaps, but this ignores the risk of LLMs just creating a bad recipe based on the blind combination of various recipes in their training data.

3 hours ago

VectorLock

As the parent comment said, the people seemed to be enjoying the food otherwise, so the LLM didn't create an unpalatable combination, and I can't think of any combination of edible, harmless ingredients that would combine into something harmful (when consumed in reasonable amounts).

3 hours ago

xmprt

This is exactly what makes it dangerous. Food can taste ok but actually cause you to get sick. Not all bacteria is going to taste off. I'm assuming you're not a chef because if you were then you'd know how absurd your statement is.

For a super simple example, if you don't properly handle or cook raw meat then you risk getting sick even though the food might not immediately taste bad. Maybe that's obvious to you but might not be to the person preparing the food. Another example: Rhubarb pie is supposed to be made with the leaves and not the stalk because the stalk is poisonous and can cause illness. Just kidding, it's actually the other way around but if you were just reading a ChatGPT recipe that made that mistake maybe you wouldn't have caught it.

35 minutes ago

psvv

If meat was involved, the cooking time may have been unsafe if other precautions weren't taken by the cook (like checking the internal temperature).

41 minutes ago

defen

let's take a second to think about the threat vectors here. The two obvious ones I can think of are: "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level if someone is preparing your food you need to trust that they know how to prepare food, no matter where they're getting their instructions from.

3 hours ago

kube-system

People who do not understand or even use AI are not in a position to even begin "thinking about threat vectors". That isn't how they've come to their worldview, at all.

26 minutes ago

daveguy

Yeah, but I would trust a human writing a blog not to suggest heating chicken to 110F, because the human writing the blog understands that they are taking responsibility for that recipe... The LLM doesn't have a clue about responsibility except to regurgitate feel-good snippets about responsibility.

an hour ago

newZWhoDis

>because the human writing the blog understands

Bold assumption

an hour ago

strongpigeon

Because it assumes the person actually making the food has no common sense?

4 hours ago

therouwboat

We had a billion-dollar AI company install a vending machine that was giving stuff away for free, so maybe AI users don't have common sense.

3 hours ago

bloody-crow

This is an experiment they ran and were prepared to lose money on. It seems perfectly reasonable for an AI company to test their products in adversarial conditions to have a better understanding of its flaws and limitations.

2 hours ago

catlikesshrimp

Fantastic story I hadn't heard, April Fools' Day included

https://www.pcgamer.com/software/ai/anthropic-tasked-an-ai-w...

38 minutes ago

wpm

If they're asking an LLM for a recipe, they don't.

3 hours ago

pixel_popping

My wife does it all the time, and it's actually decent.

3 hours ago

baggy_trough

That's quite an assertion.

33 minutes ago

bloody-crow

That's just pure nonsense. My partner is a very competent cook and she invents new recipes and experiments all the time. I don't see why she can't use LLM output as an inspiration to combine with her own expertise, sense of taste, and preferences to come up with an excellent dish.

2 hours ago

s1artibartfast

Someone once tried to feed me dinner from a recipe they found on the internet. I punched their lights out and then called the cops.

17 minutes ago

steve1977

People get things wrong all the time as well, so I wouldn't trust them either.

4 hours ago

happytoexplain

People get things wrong in a different, more observable/predictable way. Sure, we are easily tricked dummies and we can't know if a human is right or wrong, but our human-trust heuristics are highly developed. Our AI-trust heuristics don't exist.

4 hours ago

steve1977

I mean I had people serve me expired food and chicken that was half raw. The latter I could observe, the former I couldn't so easily. Both were things that could have made me sick.

3 hours ago

happytoexplain

For sure. I'm not defending human perfection, I'm defending human caution (Disclaimer: The format of the preceding sentence was chosen without AI assistance).

3 hours ago

mikestew

Dunno about you, but I like the increased viscosity in my sauces when I use glue:

https://www.bbc.com/news/articles/cd11gzejgz4o

3 hours ago

ikkun

I could see being concerned about food safety; I wouldn't trust an AI recipe to tell me how long/what temperature to cook chicken, and I might not trust someone who uses AI to generate recipes to know either.

4 hours ago

kbelder

An appropriate response might be asking "Hey, I don't trust AI... what's the recipe?"

The described action seems performative and emotional, as if they were ideologically opposed to AI. Like spitting out food because it was prepared by a caste you found unclean.

2 minutes ago

ctoth

Hi! I love to cook! I also use AI to brainstorm recipes sometimes! Wanna try asking Claude, ChatGPT, Gemini, or even Grok what temperature chicken needs to be cooked to? I just asked Claude: 165°F (74°C) internal temperature.

Where does this come from?

3 hours ago

ikkun

if you ask that question alone, AI is most likely to get it right, but the usual pitfalls of AI apply; they sometimes randomly get things wrong, people are more likely to miss wrong information when it's surrounded with correct information, and LLMs are specifically good at making text that seems correct on the surface. and in my experience, people often use AI specifically because they don't have a lot of knowledge in an area. if you do already know plenty about cooking, I'm sure using AI is probably fine, I just see it as a red flag.

cooking is also a form of art, with a strong social aspect. using AI for it has a similar ick factor to using generative AI for pictures. I'm not saying I immediately distrust anyone using it, but I do think it's a sign that maybe the person cares a bit less about what they're doing.

3 hours ago

miloignis

Arguably, that's wrong - not because it's unsafe, but because it's not the best temperature for any part of the chicken I know of. I'm a big J. Kenji López-Alt and Serious Eats fan, and 165 is too hot for good chicken breast and too cool for good dark meat: https://www.seriouseats.com/chicken-thigh-temperature-techni...

3 hours ago

happytoexplain

I can't tell if you're criticizing the parent or are innocently asking how Claude knows the temperature for chicken.

To be clear in the case of the former: Harm data points have approximately one trillion times the weight of no-harm data points, as a rule of thumb.

3 hours ago

stvltvs

Even if it can give the right answer when asked, will it necessarily account for that in a recipe it generates? A beginning cook may not know enough to ask.

3 hours ago

lbarrow

Yea, I suppose that is fair regarding cook timings.

3 hours ago

pixel_popping

but was it done with GPT-5.4 xhigh with an adversarial loop?

4 hours ago

layer8

I interpret it as an expression of disgust. Similar to how people will stop reading and throw away a good book when they learn the author is a morally reprehensible person.

3 hours ago

wak90

Like, I wouldn't spit the food out.

But I would be disgusted. Someone told me they planned their vacation with an llm and I couldn't help but express disdain for this friend of mine.

Why are we outsourcing creativity and research and interest in discovery to an llm?

3 hours ago

thevinter

Probably because the person wasn't interested in planning their vacation and wanted just to enjoy the end result?

Let's not assume different people find the same parts of the process enjoyable.

3 hours ago

bloody-crow

Really don't get this take. I really hate vacation planning and would outsource that part in a heartbeat. My partner does this for me currently and she seems to enjoy it quite a bit, but if she didn't, the LLM-generated plans I've tried out of curiosity were just as good.

2 hours ago

lostmsu

> Why are we outsourcing creativity and research and interest in discovery to an llm?

This is also weird. I hate planning vacations, but I like going on them.

2 hours ago

dvfjsdhgfv

Really? I can think of a few reasons I wouldn't trust AI-generated recipes.

2 hours ago

misiti3780

lol, if you're against AI recipes, you have bigger problems.

4 hours ago

ajross

The very fact that your takeaway from that story was "look at how dumb my enemies are" is why this is a conflict worth worrying about.

Are you right? Yeah, basically. Are you going to laugh at your stupid neighbors until they burn your house down in rage? Maybe? You don't treat fear with malice.

4 hours ago

happytoexplain

I mostly agree that it's an overreaction. However, "irrational" is a really bad choice of word. Every non-technical person understands that sometimes AI says wrong things - like, random, crazy wrong things, not just a little off. It's just a general rule kept in the back of the mind. Food is easily in that realm of "be careful". Did the AI produce a recipe that would be harmful to you and the cook didn't notice? Almost certainly not. So, sure, they were being over-cautious. But "irrational"? No, no, no. It's definitely rational.

Look at what you're writing.

"Doing X is so clearly irrational that I chuckled a bit."

Please don't perpetuate the image of the elitist techie. That is what was just firebombed.

4 hours ago

s1artibartfast

There is almost nothing seriously dangerous about food, particularly everyday food. There are a handful of niche things that are seriously dangerous, like fugu or poison mushrooms requiring special preparation.

I think this says more about how neurotic and paranoid people are.

11 minutes ago

TehCorwiz

Well, Sam Altman and Jensen Huang are going around bragging about how many people they're going to push out of employment. Might have something to do with it.

4 hours ago

NickC25

This.

Sam's got a $3 billion net worth.

Jensen's got $165 billion to his name.

They are giddy about taking jobs away, and both are engaged in "tax reduction strategies" and suck up to Donald Trump.

You wonder why people are pissed?

a few seconds ago

satvikpendem

I think you're just in a strange bubble of people because those are absolutely comical responses to learning of AI. I do know some people who are for or anti AI to a stronger extent, but most of those I know simply don't give a shit, they'll use AI if it's there, such as for their job or to ask an LLM questions, but otherwise not think about it.

2 minutes ago

layer8

From a recent NBC News poll, “the only topics that were less popular than AI were the Democratic Party and Iran”: https://www.nbcnews.com/politics/politics-news/poll-majority...

3 hours ago

snielson

My wife runs a food blog and sometimes uses AI to come up with recipes she tests on us first. One of the best dishes she’s ever made (and one of the best I’ve ever eaten) was pork with an apricot sauce. The pork was fine, but the sauce was absolutely incredible! I’d put it on any kind of meat. Funny thing is, I don’t even like apricots, but the sauce was amazing. My wife does have one advantage, which is that she knows when the AI has hallucinated something crazy and makes appropriate adjustments. I guess it's like anything. AI can be a big help to those who already have a threshold level of background knowledge in a field but can cause big problems for those who don't.

3 hours ago

layer8

You can’t write something like this and not share the recipe.

3 hours ago

yfw

The only thing we hear is that our jobs are going to be gone, but we still only get healthcare if we work.

15 minutes ago

happytoexplain

There is very strong anti-AI sentiment among "techies" too. It's just not absolute or generalized (AI is a huge umbrella term).

4 hours ago

metalliqaz

You might call me a "techie" and I both use AI and have very strong anti-AI sentiment. I don't think this is a contradiction, because I believe that while the technology itself is not bad, the way people use it definitely is.

People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.

3 hours ago

whimblepop

> I both use AI and have very strong anti-AI sentiment.

Me, too. The AI hype machine involves some really bad ideas, the amount of money being poured into "AI" right now distorts everything, public understanding of how these tools work is low, and a lot of contemporary uses both by corporations and governments are irresponsible, dangerous, and likely to produce or reproduce harmful biases and reduce the accountability of humans for crucial decisions and outcomes.

At the same time, it's useful for me at work, and I'm curious about it. I sometimes enjoy using it. It lets me do things I didn't have time for before. It eliminates some procrastination problems for me. I think its use in computing is also likely to be increasingly mandatory for the near-to-moderate term, so it's probably good for me to get used to using it and thinking about it and looking for new useful things it can do for me.

And my own experiences in using AI are part of what drive my anti-AI sentiment as well! I see it do completely insane and utterly stupid things pretty much every day, both in my personal life and in my professional life. I have a visceral awareness of its unreliability because I use it frequently.

I should hope that as hackers we can muster some understanding and respect both for LLM users and for people with hard "anti-AI" stances. Even if you're "pro-AI" to the core (whatever that means), it's worth understanding the most serious and well-considered arguments of critics of LLMs and the contemporary "AI" race. You might even find, as someone who uses and enjoys using LLMs, that you agree with many of them.

2 hours ago

slopinthebag

I agree completely. The way it's marketed and used is a big part of my distaste, the other part is big tech / AI companies and their actions and ethics. It's why I'm a huge supporter of open source and locally run models, and I am moving most of my workflow to things that I can run on my own machine, or at least on a GPU that I can rent from a plethora of providers.

3 hours ago

linkage

Politics really is a substitute for religion in America

4 hours ago

kelnos

In secular America at least. Most people in the US are religious, many of them fervently so.

And quite a few of them like to mix their religion with politics.

4 hours ago

elephanlemon

Frankly I think a lot of these people are politics first. How else do you explain the dissonance between Jesus’s teachings and their political opinions?

4 hours ago

MiguelX413

Their politics are perfectly in line with their Christian-themed cult.

2 hours ago

lazyasciiart

Yes but when they’re not, they choose politics. See: Catholics right now.

an hour ago

misiti3780

this is true, but thankfully, religion is declining in America. although if people are replacing it with politics, maybe we need another revival

4 hours ago

leosanchez

Religious people can be anti-AI too.

4 hours ago

MontyCarloHall

Indeed, but the rage I've seen during political fights at family gatherings (and another politics-induced divorce) pales in comparison to the rage I saw in these two anecdotes. The worst political debates I've seen involved raised voices and some name calling, not spitting food and smashing plates. The only other political divorce I've seen slowly simmered over a few years after Trump was first elected, not in a literal matter of weeks.

4 hours ago

Kon5ole

The remarkable part of your anecdote is the behavior. Seems to me some humans nowadays are less tolerant of any difference in opinion, AI is just the current reason to pick a fight.

Wonder why that is, and if we'll grow out of it peacefully.

3 hours ago

lazyasciiart

It’ll quiet down once we make it illegal and/or justification to be committed to an asylum to have opinions we don’t like - the way it was in the old, tolerant days.

an hour ago

bloody-crow

Nowadays? It's always been the case; the only thing that's changed is the subject.

2 hours ago

Kon5ole

I think it's gotten way, way worse over the past 20 or so years. I recall having friends spanning several political parties, countries and religions hanging out with barely a sense of tension in the room.

17 minutes ago

newZWhoDis

Portland?

an hour ago

kbelder

That is really funny.

20 minutes ago

LooseMarmoset

From my own perspective, the "visceral hatred" isn't so much at AI (which I use almost exclusively to generate funny pictures of myself and coworkers) but at the executives that view it as a way to enshittify society.

turning myself (an overweight bearded guy) into an animated hula dancer and turning my coworker into the Terminator and sinking into molten steel don't seem to inspire the same hatred. unless you don't like hula dancers.

4 hours ago

rishabhaiover

This was obviously a fictional thanksgiving dinner. Nobody is this geezed up about AI assistance.

3 hours ago

TripleTree

I would absolutely stop eating a meal if I learned AI was involved in creating it. I suppose I wouldn't literally spit it out but I wouldn't take another bite.

3 hours ago

s1artibartfast

Why? What if you found out a human was involved in creating it?

9 minutes ago

stvltvs

Nobody in your circle of friends/acquaintances perhaps.

3 hours ago

rishabhaiover

You're okay with sitting in the rear seat of a car while it drives you around the city, though.

2 hours ago

sillyfluke

I must live in the upside down. If there are any ardent anti-AI people I come across they're techies. Whereas non-techies are either oblivious or completely and comically locked-in as caricatured in that South Park episode.

3 hours ago

hnthrowaway0315

TBH people in AI may also resent AI, because they are the first to be impacted by it. They just don't say so openly because, frankly, no one wants to lose their job.

an hour ago

nothinkjustai

Not just non-techies. Plenty of techies share that same visceral hatred. Some of them even use these tools themselves, because it’s a complicated issue with nuances.

4 hours ago

lamasery

Yep, all of us with a clue are keeping our traps shut at work, or even boosting it or slapping it onto projects that don't need it, because this is clearly one of those things where attempting to offer counsel and advice that's contrary to the way the MBA winds are blowing can only hurt your career.

an hour ago

throwanem

Surely there must have been underlying tensions in that marriage.

(I don't feel at all confident in that statement; I am requesting reassurance.)

4 hours ago

MontyCarloHall

They are pretty good friends of mine and I never sensed any tension. It really was a marriage-ending bolt out of the blue, like discovering an affair or severe financial infidelity.

4 hours ago

satvikpendem

As an outsider you wouldn't know though.

4 minutes ago

throwanem

I don't really want to say "thank you." That story, more to the point that I can't find a priori cause to doubt it, makes me glad I'm about to go enjoy a gorgeous spring afternoon full of birdsong and sunshine. But I appreciate your taking the time to follow up.

4 hours ago

gopher_space

I mean the simplest way to look at this is that he's just wrong about the couple being happy.

an hour ago

throwanem

I was married for a decade. Little of that was happy. (We both made the mistake of marrying each other, then compounded it by both being afraid to be first to admit to having noticed.)

Everyone noticed - and of course I've seen it from the other side, too, many times. You can't hide when people are together who don't want to be. That always shows.

an hour ago

lazyasciiart

This is like saying that of course people could tell Ted Bundy was a psychopath, it always shows.

an hour ago

throwanem

One might insightfully argue the whole point of the psychopath is precisely that it doesn't show. I recommend Cleckley's The Mask of Sanity [1], originally 1941, though I prefer his 1988 fifth edition, especially for its rather disconsolate preface.

[1] https://gwern.net/doc/psychology/personality/psychopathy/194... - despite the filename, this is the 1988 edition. I like my paper edition (I made my paper edition!) but the PDF will serve well enough for your reference here.

29 minutes ago

alfalfasprout

It's quite prevalent in tech too-- however, folks tend to be quiet because the "use AI for everything or else" hammer is being used across the industry.

4 hours ago

lexandstuff

I've found that most non-tech people are indifferent or, at worst, utterly bored by any mention of AI.

The tech people are the ones that have the strongest opinions one way or the other.

4 hours ago

kbelder

That is my experience, as well.

18 minutes ago

therobots927

Most SV people live in a bubble inside of a bubble. They don’t understand how their words come across to a significant portion of the population. If they did they would shut the fuck up.

4 hours ago

baal80spam

Not sure why you were downvoted so heavily. SV is a bubble if I've ever seen one.

3 hours ago

rvz

Crypto doesn't get that much hatred, since even non-techies don't need to participate in the space. It doesn't affect them, and it can be safely ignored in its own bubble.

Mentioning "AI" in non-techie circles is a bad idea. That so many here don't realize this tells you many are in a massive bubble, unaware of the visceral hate against AI from people it directly affects and who cannot opt out.

Given that AI takes more than it gives back (jobs, energy, water, houses) of course you will get anti-AI activists.

3 hours ago

layer8

Except when you’re the victim of ransomware that extorts you to pay some bitcoin. But it seems that fewer people have encountered that than having AI forced upon them.

3 hours ago

littlestymaar

> after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash

Not entirely unwarranted given the track record of LLMs as a chef though:

https://www.theguardian.com/world/2023/aug/10/pak-n-save-sav...

https://www.bbc.com/news/articles/cd11gzejgz4o

Of course that was two years ago and it's unlikely to happen again, but that's the drawback of the "move fast and break things" attitude: sometimes what you've broken is public perception, and that's hard to fix afterwards.

4 hours ago

mandeepj

> a couple people literally spat out the food they were enjoying and threw their plates in the trash

That was an unnecessarily extreme reaction, as if AI had 3D-printed the ingredients.

4 hours ago

GlibMonkeyDeath

2 hours ago

tedd4u

Trigger warning: AI animation of uncanny-valley Sam Altman "hydra"

an hour ago

jorgonda

Putting millions of people out of work comes with consequences. We are going to see more and more of this.

4 hours ago

niemandhier

"Respice post te! Hominem te esse memento!" ("Look behind you! Remember that you are a man!")

36 minutes ago

rambrrest

This will only get worse imo - regardless of how Sam is perceived - there is anger against AI which is growing amongst the people. I think we as a society need to stop and have the conversation and be more thoughtful about how we integrate AI with everything.

4 hours ago

pixel_popping

I don't think this is possible yet, because many people refuse to believe AI will eventually be better than us at practically anything (at least anything virtual). They keep talking about what's "current," which I think is completely irrelevant to that discussion: people need to assume extreme intelligence and orchestration tools (and robots) will be there, worldwide. It's a *fact*, not just a maybe.

3 hours ago

toraway

It is actually entirely possible to discuss a solution for something that may or may not happen. If a hurricane is approaching, we don't typically require every person to agree the odds of landfall are 100% before preparing shelters and stockpiling aid nearby. Not everything in the world is about the "AI skeptics" on the internet being dumb and wrong, unlike you.

43 minutes ago

classified

Your "fact" is pure vaporware and hallucination.

2 hours ago

pixel_popping

Let's talk about it again in 5 years. But 1-2 years from now, at the very least, coding will be over, in the sense that the best models will do it better than the best humans (or the 99.99%). I don't think I'm hallucinating, no. My own work went from coding, managing, and a bunch of other stuff to just orchestrating, and my output is insanely higher. I literally have a bunch of friends who went from coding 8 hours a day to just "pretending to code," using a bunch of agents and getting paid the same salary for working 30 minutes a day. That's real, not a hallucination.

2 hours ago

classified

> in 5 years

That's literally the same argument that the blockchain gurus made, and each following year it was still 5 years in the future. I'm getting strong Real Soon Now™ vibes.

2 hours ago

cleversomething

Bitcoin was never actually valuable for the average person except if they got lucky by timing the speculation bubbles right, or if they were buying illegal drugs online.

Lots of AI tools already add actual value and they're only getting better. Every software dev I know uses Claude at some level. Whether it will be the next trillion dollar unicorn might be overhype, but in terms of demonstrating its general utility, it's already there. No need to wait 5 years.

an hour ago

pixel_popping

It's really 2 very different things; only the "shilling" might be déjà vu.

an hour ago

sleepybrett

> Every software dev I know uses Claude at some level.

Here is my level: try to get Claude to properly perform some boring boilerplate for me until the tokens run out or I get super angry, and then do it myself in a rage.

28 minutes ago

pixel_popping

Come on, that's very different. This is something current, with practical use cases already being implemented across all companies. I don't even know why we compare this with blockchain; blockchain is just some fancy resilient DB with proofs, in the end.

2 hours ago

sleepybrett

Or Elon and FSD.

30 minutes ago

ChoGGi

When you constantly preach on how your company is able to save money by taking away jobs, I have no fucking sympathy for you.

27 minutes ago

supliminal

Can we at least have a little sympathy for these rich CEOs? I mean they’re all Jewish that’s got to count for something right fellas?

9 minutes ago

therobots927

Think occupy Wall Street but cranked up significantly.

That’s what’s coming. Like it or not.

4 hours ago

linkage

I hope "cranked up" was a pun

3 hours ago

Teever

I'm going to be blunt about this.

We're going to see the ultrawealthy become targets of drone attacks conducted by people who have terminal illnesses and nothing to lose.

I predict that we'll see a movement start among people diagnosed with a fast-acting terminal illness, one that gives them a few weeks to months of relatively high functionality followed by a quick downward decline (say, a brain tumour), who decide to kamikaze against the people they feel have gravely wronged them and their kin.

People will use something like this[0] to evade detection but won't really give a shit if they get caught because they'll be dead in a few months.

Even if they don't have access to such technology they can always just use a firearm like we've seen people try on Trump and Charlie Kirk and that Healthcare CEO guy with relative success.

I'm amazed that Peter Thiel is giving talks about the antichrist at the Vatican. I've seen relatively recent videos of him walking down the street with only a security guard or two[1], and they seem completely unprepared for any sort of attack on them from someone with a firearm or a drone.

It's like these people genuinely don't understand how destructive their actions look to society, or the bubbling resentment and rage that is growing towards them.

I'm not sure what the defense against such a movement is. I guess maybe fixing wealth inequality and giving people at least the impression of greater participation in our democratic system?

This[2] is the vibe right now and it's only growing stronger by the day.

[0] https://www.youtube.com/watch?v=qrZ1aH5gtMU

[1] https://www.youtube.com/shorts/pGHIplhJ8Ek

[2] https://genius.com/25966434

2 hours ago

supliminal

YC S26 here. We are actually working on something like this right now. Contact info in bio.

4 minutes ago

stevenwoo

The Ministry for the Future beat you to the punch, with victims of human-driven climate change shooting down thousands of private planes with drones as protest.

an hour ago

elsonrodriguez

One of the biggest fantasies in that book is that the "protesters" would be so unified and ethical in their plots.

In real life, the attacks in response to climate change (and in this case, economic injustice) will be committed by such an uncountable plurality of groups that the violence will seem almost capricious.

8 minutes ago

camillomiller

>> We're going to see the ultrawealthy become targets of drone attacks conducted by people who have terminal illnesses and nothing to lose.

oh nooooo. anyway

an hour ago

fredgrott

How to tell it's not AI or AGI... it throws a Molotov cocktail...

4 hours ago

pixel_popping

Yeah, Unitrees wouldn't aim that well.

4 hours ago

sleepybrett

We can only hope that when they reveal the identity of this guy, he happens to have a name that overlaps with the Mario Bros. universe.

33 minutes ago

SilentM68

Hmm, that's troubling but predictable.

The idea that AI will bring an age of abundance may be true, but not in the short term. Companies are letting people go, and AI will be blamed for that, whether true or not. The public perception that most Tech Bros prioritize profits over the wellbeing of the little guy has been well established for decades; in my view it is in some cases well deserved, with no accountability.

It's looking like AI will generate a modern version of the early-1800s Luddite Rebellion, in which British textile workers destroyed machines that displaced their jobs and prioritized factory owners' profits over workers. They targeted technology and industrialists.

Tech Bros can avoid this by modifying their priorities: prioritize employee rights, lobby governments to implement some sort of Universal Basic Income, and/or provide the means by which people can survive. Otherwise the government may start marketing Soylent Green to consumers :(

3 hours ago

pesus

I'd say an important distinction is that AI is currently threatening to displace a significantly larger number of jobs across multiple sectors. Whether that can or will actually happen remains to be seen, but the potential number of scorned people with nothing to lose is far greater this time.

7 minutes ago

whimblepop

> It's looking like AI will generate a modern version of the early 1800s Luddite Rebellion where British textile workers destroyed machines that displaced jobs, prioritizing factory owners' profits over workers. They targeted technology and industrialists.

It's worth remembering that the way that ended was extremely bloody, particularly for the Luddites themselves. There were a handful of extreme participants, there was a murder, and there was a hell of a lot of violence directed at anyone perceived as a Luddite— even though most actual Luddites themselves mostly avoided violence against other humans.

It would be good if we can somehow avoid such outcomes this time.

3 hours ago

SilentM68

Greed drives most of the current crop of Tech Bros.

I once had the chance to be a Bro, far richer than any of the current ones, thanks to the still secretive and anonymous "original-sn-adjacent cryptographic collective". Things, however, did not work out in my favor thanks to other nefarious third-party actors. So, I know where from I speak.

Any outcome is in the hands of the Tech Bros, but by the looks of it, greed drives their every action, so things are not looking good!

:(

2 hours ago

EGreg

I've been saying for years on here...

to the people on HN who are against blockchain but bullish on AI

With blockchain and smart contracts, or even stupid memecoins, you can only lose what you voluntarily put in. You had to jump through a few hoops; then maybe you got rugpulled, maybe you became a millionaire.

With AI, regardless of whether you consented, you can lose your job, and gradually your relationships and sense of purpose. And if some malicious actors want to weaponize it against you, you can lose your reputation and your freedom, get hacked at scale, and much more. The sooner we give biolabs to everyone, the sooner someone can create an advanced-persistent-threat virus online infecting every openclaw machine, or a designer virus with an incubation period of half a year.

And I know what someone on here will always say. There will always be a comment to the effect of "this has always existed, AI is nothing new". But quantity has a quality all its own. Enjoy your AI slop internet dark forest. Until you don't.

4 hours ago

Centigonal

Is your definition of bullish "believes the technology will be widely adopted across society and accrue significant wealth to its owners?" - if so, I think it's very clear how someone could be bullish on AI and not blockchain. You don't have to like AI to see it as an inexorable transformer (ha!) of society and wealth.

Is your definition of bullish "believes the technology is a major net good for society?" - if so, you're comparing two technologies with significant social aspirations that come from very different philosophical backgrounds. While both are techno-optimist, Blockchain is a fundamentally libertarian technology, while generative AI comes from a more utilitarian, capital-focused background. People who value individual freedom above all else will get excited about blockchain and feel mixed-to-negative about AI, while people who want to elevate the overall capability of the human race to the exclusion of anything else will get excited by AI and see blockchain as a parlor trick.

an hour ago

nickvec

https://archive.ph/aoXIY

@dang didn't see this post before posting the archive.ph link at https://news.ycombinator.com/item?id=47722344 - feel free to delete/merge that thread with this one

4 hours ago

rvz

The problem here is that there are no viable solutions for what happens when AI eventually replaces (yes, replaces) tens of millions of humans in white-collar roles.

All that is being "promised" are vague claims of "abundance." But all I see is this:

"AGI" is going to bring an abundance of very angry people, and UBI to no one (because it can never work at a large, sustainable scale).

Some people are starting to realise that "AGI" was a grift and a scam, and they are not happy about this lie. The insiders knew it, which is why they increased spending on security and private bodyguards.

4 hours ago

operatingthetan

I don't think the LLM will produce AGI. Just based on how context windows work, the prompt cycle, etc. LLMs aren't out there thinking about stuff in their spare time. The way they appear to have thoughts and a psyche is purely an illusion.

4 hours ago

fooqux

Something I often think about is how we can barely define what AGI, consciousness, etc are. We may be pretty sure that what we have currently is an illusion, but at which point is the illusion good enough that it no longer matters? Especially with regards to my first question.

It's hard to say it's not X when we can't really define X.

4 hours ago

ethanrutherford

I would personally argue that it's a lot easier to say something definitely isn't X, with confidence, than to say it definitely is. I definitely don't know what the surface of Jupiter looks like, but I can pretty confidently say it doesn't look like Kansas. I think the better it gets, the easier it will be to spot the shortcomings, because the gap between what it can do well and what it can't will widen. Anything the technology is fundamentally incapable of ever achieving will be made obvious by the fact that it will simply continue not to achieve it. We may not be able to easily define the totality of what exactly it needs to have to count as AGI, but the further it progresses, the easier it will be to point out individual things it's definitely missing.

3 hours ago

operatingthetan

I'm not saying we can't build it, but what we have right now certainly is not it. Right now context is just a bunch of text. Surely the human mind's context resembles something more like a graph database. What if we could use a database for context?

3 hours ago

andsoitis

> LLMs aren't out there thinking about stuff in their spare time.

Agentic changes the calculus.

4 hours ago

operatingthetan

Explain how? Even if you use crons or heartbeats to reactivate the model, it's still dependent on context windows that are quite small. With frontier models I still have to remind them how stuff works, what they forgot, when they focused on the wrong thing, etc.

Also every AI company is motivated to have us use their models _just enough_ to want to pay for them, but not more than that.

4 hours ago

booleandilemma

It doesn't have to produce AGI and it could still ruin the lives of millions of people. Our society isn't ready for that kind of shock. We can't all be instagram influencers.

4 hours ago

josefritzishere

My first thought was false flag. Is that too cynical?

4 hours ago

foota

I would go for out of touch, not cynical. A lot of people really think AI is the devil.

4 hours ago

risyachka

It will be hard to convince them otherwise when their jobs are replaced by AI and they are in their late 40s or older, with no time to adjust and learn a new craft.

4 hours ago

polotics

Possible, but unlikely. To organise such a stunt and stay undetected, you'd need a better consigliere than what Sam's got, I presume.

4 hours ago

josefritzishere

Like another commenter wrote... anyone can cast a fireball. Sam has been called a sociopath by many who know him personally. So it seems more likely than it might be otherwise.

3 hours ago

ReptileMan

Nope. So was mine.

2 hours ago

stevenwoo

It kind of fits with the behavior he exhibited as reported in Farrow's New Yorker article.

an hour ago

boznz

I guess this is what we get when the media and politicians go all in with their AI populist hate. I don't think I've seen a positive AI headline outside of the tech press, and even then they are pretty thin. Abundance and growing the pie for everyone is also an outcome if this is done right.

4 hours ago

acdha

> Abundance and growing the pie for everyone is also an outcome if this is done right.

That’s like saying we don’t need minimum wage or unions because companies choosing to treat workers with respect is also a possible outcome. It’s technically true but once you go from “is this theoretically possible?” to “is this likely?” it becomes obvious that the answer is no. Most of the big AI backers are openly salivating at destroying millions of jobs, and they’re already evading taxes now so they’re not going to be funding UBI willingly — and if you have any doubt, look at where their political spending goes, consistently to the people who are doing their best to remove what small taxes they’re still paying and declaring war on the concept of regulated markets.

an hour ago

lexicality

> Abundance and growing the pie for everyone is also an outcome if this is done right.

Do you genuinely believe there's any chance that's going to happen?

4 hours ago

boznz

I do, because the alternative is unthinkable.

3 hours ago

impossiblefork

Why do you think that the fact that the alternative is unthinkable is a reason it won't happen?

Are you also sure that it is unthinkable to those running these companies? I wouldn't be surprised if these models end up being used for internal security: people trying to keep an extremely unequal society stable through surveillance and massive analysis capabilities. It's apparent that some use of this sort already occurs, and these companies are already participating.

33 minutes ago

nickvec

I would argue that "abundance and growing the pie for everyone" is even more unfathomable given how things are structured currently. The wealth gap will continue to widen until something gives.

an hour ago

DrProtic

Can’t believe your comment is being downvoted.

Covid clearly showed how a crisis can benefit only the rich and powerful.

How can AI being used to cut headcount somehow be good? It will just fill the pockets of the powerful.

an hour ago

array_key_first

Well then, given that one side is "the situation remains neutral or very slightly improves" and the other is "unthinkable atrocities," I think it's only rational to focus on the "unthinkable atrocities" part. Ideally, we should be focusing all our energy on making sure that doesn't happen.

an hour ago

jyounker

Closing your eyes doesn't make the danger go away.

19 minutes ago

senordevnyc

Looking at the last few hundred years of our civilization, absolutely!

3 hours ago

mghackerlady

Or, hear me out, people are just sick of it? They don't care that their masters are sniffing each other's AI-powered farts to keep the economy afloat on the promise of their obsolescence. Sure, in theory it could be good for them, since they could get more work done quickly, but why would they be kept alive once their owners no longer need to rely on them? The ideal business has no expenses, and workers are one of those. Combine that with everything being shit nowadays, and yeah, I can't blame whoever did this.

3 hours ago

archagon

I think the media and politicians are reflecting popular sentiment, not the other way around.

an hour ago