Why AI companies want you to be afraid of them

283 points
a day ago
by rolph

Comments


boh

I think the big secret is that AI is just software. In the same way that a financial firm doesn't all of a sudden make a bunch of money because Microsoft shipped an update to Excel, AI is inert without intention. If there are any major successes in AI output, it's because a person got it to do that. Claude Code is great, but it will also wipe out a database even though it's instructed not to (I can confirm from experience). The idea that there's some secret innovation that will come out any minute doesn't change the fact that it's software that requires human interaction to work.

a day ago

codingdave

Yes, and it has been said since day one of LLMs that all we need to do is keep things that way - no action without human intervention. Just like it was said that you should never grant AI direct access to change your production systems. But the stories of people who have done exactly that and had their systems damaged and deleted show that people aren't even trying to keep such basic safety nets in place.

AI is getting strong enough that if people give some general direction as well as access to production systems of any kind, things can go badly. It is not true that all implementations of agentic AI require human intervention for every action.

a day ago

Terr_

My cynical rule of thumb: By default we should imagine LLMs like javascript logic offloaded into a stranger's web-browser.

The risks are similar: no prompts/data that go in can reliably be kept secret; a sufficiently motivated stranger can have it send back completely arbitrary results; and some of those results may trigger very bad things depending on how you use, or even just display, them on your own end.

P.S. This conceptual shortcut doesn't quite capture the dangers of poisoned data, which could sabotage all instances even when they happen to be hosted by honorable strangers.

a day ago

stuaxo

Eh, these same people will attach openclaw to production systems soon and destroy their own companies.

a day ago

flats

One does not even need OpenClaw to achieve this outcome: https://x.com/lifeof_jer/status/2048103471019434248

a day ago

ffsm8

Yeeeehaaaaa, the vibes shall never end!

On a more serious note, they were mostly f*cked by their PaaS provider imo. Claude will always do dumb shit - especially if you tell it not to do something. By doing so you generally increase the likelihood of it doing it.

It's even obvious why if you think about it: the pattern of "you had one job, but you failed" or "the one thing that couldn't happen, happened!" and all its other forms is all over literature, online content, etc.

But their PaaS provider not scoping permissions properly is the root cause, all things considered. While Claude did cause this issue there, something else would've happened eventually otherwise.

a day ago

flats

I absolutely agree with you.

Also, some folks seem to be forgetting the virtues of boring, time-tested platforms & technologies in their rush to embrace the new & shiny & vibe-***ed. & also forgetting to thoroughly read documentation. It’s not terribly surprising to me that an “AI-first” infrastructure company might make these sorts of questionable design decisions.

a day ago

CamperBob2

The problem is, out of ten companies who take this approach, nine will indeed destroy themselves and one will end up with a trillion-dollar market cap. It will outcompete hundreds of companies who stuck with more conservative approaches. Everybody will want to emulate company #10, because "it obviously works."

I don't see any stabilizing influences on the horizon, given how much cash is sloshing around in the economy looking for a place to land. Things are going to get weird, stupid, and chaotic, not necessarily in that order.

a day ago

AndrewKemendo

Sounds like a pretty efficient self correcting mechanism

I’m not sure what the problem is there

a day ago

tikkabhuna

The problem is that destruction isn't contained to the company. If an AI agent exposes all company data and that includes PII or health information, that could have an impact on a large number of people.

a day ago

AndrewKemendo

PII breaches have been pretty consistently a problem for the last several decades, predating modern LLMs.

So that is a structural problem with their data and security management and operations, totally independent of the architecture for doing large scale token inference.

a day ago

ben_w

Normalisation of deviance is the problem: https://en.wikipedia.org/wiki/Normalization_of_deviance

Remember that these models are getting better; this means they get trusted with increasingly more important things by the time an error explodes in someone's face.

It would be very bad if the thing which explodes is something you value which was handed off to an AI by someone who incorrectly thought it safe.

AI companies which don't openly report that their AI can make mistakes are being dishonest, and that dishonesty would make this normalization of deviance even more prevalent than it already is.

a day ago

AndrewKemendo

That’s not a technical/AI problem in any sense, that’s a social problem in organizing and coordinating control structures

Further, it’s only a problem to the extent that the downsides or risks are not accounted for which again… is a social problem not a technological problem

This isn’t a problem for organizations that have well aligned incentives across their workflows

A well organized company that has solid incentives is not going to diminish their own capacity by prematurely deploying a technology that is not capable of actually improving

The issue is that 99% of the organizations that people deal with have entirely orthogonal incentives to them. They are then attributing the pain in dealing with that organization to the technology rather than the misaligned incentives

a day ago

ben_w

> That’s not a technical/AI problem in any sense, that’s a social problem in organizing and coordinating control structures

As @TeMPOraL here likes to point out, it can be genuinely fruitful to anthropomorphise AI. I only partially agree: it's true for *some* of the failure modes.

> A well organized company that has solid incentives is not going to diminish their own capacity by prematurely deploying a technology that is not capable of actually improving

Sure, but society as a whole doesn't have the right solid incentives to make sure that companies have the right solid incentives to do this. We can tell this quite easily by all the stupid things that get done.

> The issue is that 99% of the organizations that people deal with have entirely orthogonal incentives to them.

This is also fundamentally the AI alignment problem, that all AI are trained on some fitness function which is a proxy for what the trainer wanted, which is a proxy for what incentives their boss gave them, which is a proxy that repeats up to the owners in a capitalist society, which is a proxy for economic growth, which is a proxy for votes in a democracy, which is a proxy for good in a democracy.

a day ago

AndrewKemendo

Yes, AI encodes latent intent.

I wrote a whole ass paper at the end of 2022 demonstrating that unless we fix society we will deterministically create anti-social AGI because humans do not generate pro-social data.

https://kemendo.com/Myth-of-Scarcity.html

a day ago

jrflowers

If you had made a tool that gave gpt-3 the ability to run arbitrary commands on your production systems you could have seen things go badly.

a day ago

Lalabadie

Good news! Today's SOTA models can also make things go badly.

a day ago

jrflowers

Yep. I don’t see how that metric indicates how… strong(?) a language model is.

a day ago

dataviz1000

LLMs are a distribution. Unlike a Python script or Turing machine, an LLM is capable of generating any series of tokens. Developers need to stop reasoning about LLM agents as deterministic and start thinking about them in terms of Monte Carlo and Las Vegas algorithms. It isn't enough to have an agent; it also requires a cheap verifier.

If I were a Ph.D. student today, I'd probably do a thesis on cheap verifiers for LLM agents. Since LLM agents are not reliable, and therefore not very useful, without one, that is a trillion-dollar problem.

Once a developer groks that concept, the agents stop being scary and the potential is large.
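As a sketch of what "Monte Carlo generator plus cheap verifier" means in practice (everything here is illustrative: the "model" is a toy stand-in for an LLM call, not a real one):

```python
import random

def flaky_generator(task, rng):
    """Stand-in for an LLM call: nondeterministic, sometimes wrong."""
    answer = sorted(task)
    if rng.random() < 0.3:      # 30% chance of a bad sample
        answer = list(task)     # unsorted, i.e. wrong
    return answer

def cheap_verifier(task, answer):
    """Deterministic check that is far cheaper than generation."""
    return sorted(task) == answer

def las_vegas_agent(task, max_tries=20, seed=0):
    """Retry the nondeterministic generator until the verifier accepts.
    The output is always correct; only the runtime is random."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        candidate = flaky_generator(task, rng)
        if cheap_verifier(task, candidate):
            return candidate
    raise RuntimeError("verifier never accepted a sample")

print(las_vegas_agent([3, 1, 2]))  # [1, 2, 3]
```

The whole Las Vegas property hinges on the verifier being both cheap and sound; for code-writing agents that role is played by type checkers, tests, and linters.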

a day ago

aleph_minus_one

> If I was a Ph.D. student today, I'd probably do a thesis on cheap verifiers for LLM agents. Since LLM agents are not reliable and therefore not very useful without it, that is a trillion dollar problem.

A PhD thesis is (ideally) for setting a new world standard in some research area (in the end, you build your PhD thesis out of the deep emotional shards of this completely destroyed life dream), and not for some personal self-discovery project that you hope will turn you into the popular kid on the block.

a day ago

dataviz1000

That is like telling students to never do a PhD thesis on superscalar out-of-order execution, stochastic gradient descent, or UDP. I'm framing it as an analogous problem. What is missing is a cheap verification process.

a day ago

aleph_minus_one

> That is like telling students to never do a PhD thesis on superscalar out-of-order execution, stochastic gradient descent, or UDP.

No decent PhD advisor would let their PhD student base their thesis on such well-known concepts: a doctoral study programme is a journey into something never seen before (with a very high likelihood of failing and shattering your life). Anything else is failure.

(Obvious exception: either the advisor or the PhD student can convince the other that there could be something really, really deep still to be found in, say, "superscalar out-of-order execution", "stochastic gradient descent", or UDP that generations of researchers overlooked, and which, once discovered, might necessitate rewriting all the standard textbooks about the topic.)

a day ago

throwaway27448

What would a verifier even look like without having all of the same problems that the chatbot itself does? Are humans themselves not the cheap verifiers?

a day ago

xdavidliu

humans are probably the least cheap thing you can have in this context

a day ago

throwaway27448

Yea, but they'll do the job. What else plausibly could? ...an LLM? Then you're back at unreliable computation.

a day ago

drBonkers

Do you have any readings you recommend to start thinking in terms of non-deterministic algorithms and cheap verifiers?

a day ago

f1shy

Neurosymbolic programming

a day ago

whatever120

That’s not a particular reading

a day ago

mistrial9

filters

a day ago

add-sub-mul-div

If you told a programmer 30 years ago that someday we'd switch from a deterministic to nondeterministic paradigm for programming computers, they'd ask if we'd put lead back in the drinking water.

a day ago

munk-a

We'd just explain that management told us we had to and then they'd understand.

a day ago

dg247

Been doing this 30 years now. I am asking that question. Everyone talks around it.

a day ago

52-6F-62

You aren't alone.

Not even a few years ago if you introduced a component to a system that would result in non-deterministic output... Hell, a single function... You would be named and shamed for it because it went against every principle you should be learning as a novice writer of software.

I have used the LLM tools, and I see the real-world potential for these things. But how it's all being sold and applied now: it's upside down.

a day ago

reducesuffering

Right? I get a kick out of programming used to being:

put this exact value inside this exact register at the right concurrent time and all the tedious exactness that C required

into now:

"pretty please can you not do that and fix the bug somewhere a different way"

a day ago

georgemcbay

> they'd ask if we'd put lead back in the drinking water.

With Lee Zeldin heading the EPA is anyone sure we won't?

a day ago

goatlover

Replace fluoride with lead in the water. Blocks out all the negative effects from wind turbines. /s

a day ago

com2kid

It has always been non-deterministic but we relied on low level engineers who knew the dark magicks to keep the horrors at bay.

Bit flips in memory are super common. Even CPUs sometimes output the wrong answer for calculations because of random chance. Network errors are common, at scale you'll see data corruption across a LAN often enough that you'll quickly implement application level retries because somehow the network level stuff still lets errors through.

Some memory chips are slightly out of timing spec. This manifests itself as random crashes, maybe one every few weeks. You need really damn good telemetry to even figure out what is going on.

Compilers do indeed have bugs. Native developers working in old hairy code bases will confirm, often with stories of weeks spent debugging what the hell was going on before someone figured out the compiler was outputting incorrect code.

It is just that the randomness has been so rare, or the effects so minor, that it has all been, mostly, an inconvenience. It worries people working in aviation or medical equipment, but otherwise people accept the need for an occasional reboot or they don't worry about a few pixels in a rendered frame being the wrong color.

LLMs are uncertainty amplifiers. Accept a lot of randomness and in return you get a tool that was pure sci-fi bullshit 10 years ago. Hell, when reading science fiction nowadays I am literally going "well, we have that now, and that, oh yeah, we got that working, and I think I just saw a paper on that last week."
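The application-level retry pattern mentioned above can be sketched like this (the channel here is a deterministic toy, and all the names are illustrative):

```python
import hashlib

def send_with_checksum(payload: bytes, channel, max_retries=5):
    """Application-level integrity check plus retry: the pattern you end up
    writing once LAN-scale corruption slips past the lower layers."""
    digest = hashlib.sha256(payload).digest()
    for _ in range(max_retries):
        received = channel(payload)                   # may corrupt in flight
        if hashlib.sha256(received).digest() == digest:
            return received                           # verified intact
    raise IOError("payload corrupted on every attempt")

def make_flaky_channel(fail_first_n):
    """Toy channel that flips a byte on the first n sends, then behaves."""
    state = {"sends": 0}
    def send(data: bytes) -> bytes:
        state["sends"] += 1
        if state["sends"] <= fail_first_n:
            return data[:-1] + bytes([data[-1] ^ 0xFF])  # flip last byte
        return data
    return send

print(send_with_checksum(b"hello", make_flaky_channel(2)))  # b'hello'
```

Same shape as the LLM case: an unreliable producer wrapped in a cheap integrity check and a retry loop.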

a day ago

greysphere

With the old way of doing things you could spend energy to reduce errors, and balance that against the entropy of your environment/new features/whatever at a rate appropriate for your problem.

It's not obvious if that's the case with llm based development. Of course you could 'use llms until things get crazy then stop' but that doesn't seem part of the zeitgeist.

a day ago

com2kid

> It's not obvious if that's the case with llm based development. Of course you could 'use llms until things get crazy then stop' but that doesn't seem part of the zeitgeist.

Harnesses are coming online now that are designed to reduce failure rates and improve code quality. Systems that designate sub-agents that handle specific tasks, that put quality gates in place, that enforce code quality checks.

One system I saw (sadly not open source yet) spends ~70% of tokens on review and quality. I'll admit the current business model of Anthropic/OpenAI would be very unfriendly to that way of working. There is going to be some conflict popping up there. Maybe open weight models will save us, maybe not.

If Moore's Law had iterated once or twice more we wouldn't be having this conversation. We'd all be running open weight models on our 64GB+ VRAM video cards at home and most of these discussions would be moot. AI company valuations would be a fraction of what they are.

a day ago

danaris

> It has always been non-deterministic but we relied on low level engineers who knew the dark magicks to keep the horrors at bay.

This is a disingenuous comparison.

First of all, what you're talking about is nondeterminism at the hardware level, subverting the software, which is, on an ideal/theoretical computer, fully deterministic (except in ways that we specifically tell it not to be, through the use of PRNGs or real entropy sources).

Second of all, the frequency with which traditional programs are nondeterministic in this manner is multiple orders of magnitude less than the frequency of nondeterminism in LLMs. (Frankly, I'd put that latter number at 1.)

This is part of a class of bullshit and weaselly replies that I've seen attempting to defend LLMs over the years, where the LLMs' fundamental characteristics are downplayed because whatever they're being compared to occasionally exhibits some similar behavior—regardless of the fact that it's less frequent, more predictable, and more easily mitigated.

a day ago

com2kid

> First of all, what you're talking about is nondeterminism at the hardware level, subverting the software, which is, on an ideal/theoretical computer, fully deterministic (except in ways that we specifically tell it not to be, through the use of PRNGs or real entropy sources).

Malloc and free were never deterministic outside of the simplest systems.

The second we accepted OS preemption we gave up deterministic performance.

Good teams freeze their build tools at a specific version because even minor revs of compilers can change behavior.

I've used way too many schema generator tools that I'd describe as "wishfully deterministic".

Heuristics have been used for years in computer science, resulting in surprising behavior. My point is that if we ramp up the rate of WTF we are willing to tolerate, the power of the systems we can build increases drastically.

> Second of all, the frequency with which traditional programs are nondeterministic in this manner is multiple orders of magnitude less than the frequency of nondeterminism in LLMs. (Frankly, I'd put that latter number at 1.)

A RAG lookup system that takes in questions from the user, looks up answers in a doc, and returns results can be built with reliability damn near approaching 99.99%.

I have seen code generation harnesses that also dramatically reduce non-determinism of LLM generated code, but that will continue to be a hard problem.

My phone camera applies non-deterministic optimizations to images I take, and has done so for years now.

GPS is non-deterministic (noisy), we smooth over the issues. GPS routing is also iffy, but again we smooth over the issues.

The question is whether useful products can be made with a technology. You can shove enough guardrails on an LLM interface to make it useful. That much is clear. I derive massive value from LLMs and other transformer-based systems literally every day: from the modern speech transcription systems, which are damn near magic compared to what we had a few years back, to image recognition, to natural language interfaces to search over company documents.

If we completely discard coding agents, LLMs are still an insanely impactful technology.

Those guardrails add costs, and latency. For some scenarios that is fine, but for others it isn't. Chat bot support agents implemented by the lowest bidder don't have any attempt at guardrails. Better systems are better built.

I agree that current LLMs all suffer from the problem that the control messages are intermixed with data, that is a crappy problem that the industry has known is a bad pattern for literally decades (since the 70s, 80s?). It seems like an intractable flaw in the systems.

But that doesn't make the system unusable any more than the thousand other protocols suffering from the same flaw are unusable.

a day ago

dataviz1000

The single best example for this discussion is superscalar out-of-order execution, which can't be used in aerospace, medical devices, or industrial control systems where you need to guarantee that code finishes within a certain time, because technically it isn't deterministic.

Neither is stochastic gradient descent, which is the cause of the LLM problem. Nor is UDP, the network protocol that powers video calls, live streaming, and online gaming.

a day ago

airstrike

While you're at it, I'll take a pair of unicorns too if you can find them.

a day ago

cmdrk

My observation is that the true believers really don't want to think of models as an inert pile of weights. There's some mysticism attached to imagining it's the ship's computer from Star Trek, HAL-9000 or C-3PO. A file loaded into memory and executed over is just so... _pedestrian_.

a day ago

ben_w

Canonically, the Star Trek computers have pretty much always been just computers, not themselves sentient because the software running on them just isn't.

I'm still not sure if HAL-9000 was supposed to be conscious or just an interesting plot device with a persona as superficial as LLMs are dismissed as today.

LLMs could definitely play the part of all three of your examples, given the flaws they showed on-screen. Could even do a decent approximation of Data (though perhaps not Lore without some jailbreaking).

Still weird that even the best of them isn't really ready to be KITT.

16 hours ago

bellBivDinesh

The specter of AGI helps them obfuscate this

a day ago

trolleski

Just call the errors 'consciousness' and keep selling those tokens! Let the Spineless Generation have their last bubble!

a day ago

cyanydeez

I think the market isn't for anyone but other businesses. We're all ants trying to understand how AI is going to eradicate the lower levels of society.

a day ago

ctoth

> doesn't change the fact that it's software that requires human interaction to work.

Have you ever seen Claude Code launch a subagent? You've used it, right? You've seen it launch a subagent to do work? You understand that that is, in fact, Claude Code running itself, right?

a day ago

simonw

I don't think subagents are representative of anything particularly interesting on the "agents can run themselves" front.

They're tool calls. Claude Code provides a tool that lets the model say effectively:

  run_in_subagent("Figure out where JWTs are created and report back")
The current frontier models are all capable of "prompting themselves" in this way, but it's really just a parlor trick to help avoid burning more tokens in the top context window.

It's a really useful parlor trick, but I don't think it tells us anything profound.
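Mechanically, the trick can be sketched in a few lines; `run_in_subagent`, the message shapes, and the toy model below are hypothetical stand-ins, not Claude Code's actual API:

```python
def run_in_subagent(prompt, call_model):
    """A sub-agent is just one more tool call: the model is invoked with a
    FRESH context containing only the delegated prompt, so the sub-agent's
    exploratory tokens never enter the parent's context window."""
    sub_context = [{"role": "user", "content": prompt}]
    return call_model(sub_context)  # only this report flows back up

def parent_agent(task, call_model):
    context = [{"role": "user", "content": task}]
    # In the real tool the model itself emits the delegation; here we
    # hard-code that decision to keep the sketch small.
    report = run_in_subagent(f"Investigate: {task}. Report back.", call_model)
    context.append({"role": "tool", "content": report})
    return context  # two messages, not the sub-agent's whole transcript

# Toy stand-in for an LLM: summarizes whatever it was last asked.
toy_model = lambda ctx: "summary: " + ctx[-1]["content"][:40]

ctx = parent_agent("figure out where JWTs are created", toy_model)
print(len(ctx))  # 2: the sub-agent's working context was discarded
```

Which is the point: the mechanism is plain recursion over model calls, valuable mainly as context-window hygiene.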

a day ago

ctoth

The mechanism being simple is the interesting part. If one large complex goal can be split into subgoals and the subgoals completed without you, then you need a lot fewer humans to do a lot more work.

The OP says AI requires human interaction to work. This simply isn't true. You know yourself that as agents get more reliable you can delegate more to them, including having them launch more subagents, thereby getting more work done, with fewer and fewer humans. The unlock is the Task tool, but the power comes from the smarter and smarter models actually being able to delegate hierarchical tasks well!

a day ago

otabdeveloper4

You misunderstand.

The only reason to launch subagents is to avoid poisoning the LLM's already small context window with unrelated tokens.

It doesn't make the LLM smarter or more capable.

a day ago

suttontom

Wtf? A sub-agent is a tool you give an agent and say "If you need to analyze logs, delegate to the logs_viewer agent" so that the context window doesn't fill up with hundreds of thousands of tokens unnecessarily. What universe do you live in where that mechanism somehow means you need fewer humans?

Do you think this means "Build a car" can be accomplished just because an LLM can send a prompt to another LLM who reports back a response?

a day ago

fnoef

My Linux server runs a cron job, that can spin off a thread and even use other ~apps~ tools. Did I invent AGI?

a day ago

ctoth

Does your Linux server decide what processes it should launch at what time with a theory of what will happen next in order to complete a goal you specified in natural language? If so yes, I reckon you sure have!

a day ago

balls187

Claude does not have a "theory" of anything, and I'd argue applying that mental model to LLM+Tools is a major reason why Claude can delete a production database.

a day ago

Jtarii

Well, humans also routinely accidentally delete production databases. I think at this point arguing that LLMs are just clueless automatons that have no idea what they are doing is a losing battle.

a day ago

timacles

They’re not clueless; they just don’t have memory and they don’t have judgement.

They create the illusion of being able to make decisions, but they are always just following a simple template. They do not consider nuance; they cannot judge between two difficult options in any real sense.

Which is why they can delete prod databases and why they cannot do expert-level work.

a day ago

Jtarii

>they cannot do expert level work

Well this is just factually incorrect considering they are currently on par with grad students in some areas of mathematics.

a day ago

liquid_thyme

I like to think of LLMs as idiot savants. Exceptional at certain tasks, but might also eat the table cloth if you stop paying attention at the wrong time.

With humans, you can kind of interview/select for a more normalized distribution of outcomes, with outliers being less probable, but not impossible.

a day ago

californical

I mean maybe it’s a losing battle today, but it is correct. So in a few years when the dust settles, we’ll probably all be using LLMs as clueless automatons that still do useful work as tools

a day ago

freejazz

When you're applying reasoning like this, sure, why not? What difference would it make?

a day ago

parliament32

So... systemd is AGI now?

a day ago

recursive

Maybe. But probably not. It doesn't matter if it's AGI though. If those other apps and tools do simple things that are predictable, then we can be pretty sure what will happen. If those tools can modify their own configuration and create new cron jobs, it becomes much harder to say anything about what will happen.

a day ago

munk-a

Most of us work on software that can modify its own configuration and create new jobs. I, too, have worked in ansible and terraform.

The key break here is the lack of predictability, and I think it's important that we don't get too starry-eyed and that we accept that it might be a weakness - not a strength.

a day ago

ahoka

Well do you make 100 billion bucks with it? If no, then not AGI.

a day ago

xboxnolifes

My claude has never yet launched itself from my terminal, given itself a prompt, and then gotten to work. It has only ever spawned a sub-agent after I had given it a prompt. It was inert until a human got involved.

If that is software running itself, then an if statement that spawns a process conditionally is running itself.

a day ago

islandfox100

Substance aside, I feel this comment is combative enough to be considered unhelpful. Patronizing and talking down to others convinces no one and only serves as a temporary source of emotional catharsis and a less temporary source of reputational damage.

a day ago

boh

You're using it and if someone else was using it the output would be different. The point is really that simple.

a day ago

DeathArrow

A one liner shell script can run itself.

a day ago

recursive

One liner shell scripts can be analyzed. Some of them can be determined to not delete the production database. The others will not be executed.
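A toy illustration of that kind of pre-execution gate (a deny-list grep, purely illustrative; real analysis would parse or sandbox the script rather than pattern-match it):

```python
import re

# Crude deny-list for obviously destructive one-liners.
DANGEROUS = [
    r"\bdrop\s+(table|database)\b",   # SQL destruction
    r"\brm\s+-rf\b",                  # recursive delete
    r"\btruncate\b",
]

def safe_to_run(script: str) -> bool:
    """Return True only if no deny-listed pattern appears in the script."""
    return not any(re.search(p, script, re.IGNORECASE) for p in DANGEROUS)

def run_if_safe(script: str) -> str:
    # The scripts that fail the check simply never execute.
    return "executed" if safe_to_run(script) else "refused"

print(run_if_safe("echo hello"))                   # executed
print(run_if_safe("psql -c 'DROP TABLE users;'"))  # refused
```

The contrast with LLM output is that a one-liner's behavior is fixed once it passes the gate; a model call can produce a different script every time.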

a day ago

echelon

All AI requires steering as the results begin to decohere and self-enshittify over time.

AI in the hands of an expert operator is an exoskeleton. AI left alone is a stooge.

Nobody has built an all-AI operator capable of self-direction and choices superior to a human expert. When that happens, you'd better have your debts paid and bunker stocked.

We haven't seen any signs of this yet. I'm totally open to the idea of that happening in the short term (within 5 years), but I'm pessimistic it'll happen so quickly. It seems as though there are major missing pieces of the puzzle.

For now, AI is an exoskeleton. If you don't know how to pilot it, or if you turn the autopilot on and leave it alone, you're creating a mess.

This is still an AI maximalist perspective. One expert with AI tools can outperform multiple experts without AI assistance. It's just got a much longer time horizon on us being wholly replaced.

a day ago

firefoxd

"you will all lose your jobs and it will wipe out half of humanity."

If you lead with this, people will stop questioning why their sprint velocity hasn't increased 10-fold. Managers start asking leads: instead of hiring more devs, can we add Agent.md to our repos?

The Apocalypse sells. They are afraid that you'll find out that AI is just another useful tool. That's the real threat, not to humanity, but to their hype.

Edit: i made a video about this recently: https://youtu.be/nB0Vz-fh8EI

a day ago

deepsquirrelnet

This is my own take, directly related to this that I posted a little while back. The one thing that I think the article missed is the geopolitical angle they’re also working:

* We need to completely deregulate these US companies so China doesn't win and take us over

* We need to heavily regulate anybody who is not following the rules that make us the de-facto winner

* This is so powerful it will take all the jobs (and therefore if you lead a company that isn't using AI, you will soon be obsolete)

* If you don't use AI, you will not be able to function in a future job

* We need to lineup an excuse to call our friends in government and turn off the open source spigot when the time is right

They have chosen fear as a motivator, and it is clearly working very well. It's easier to use fear now, while it's new, and then flip the narrative once people are more familiar with it, than to go the other direction. Companies are not just telling a story to hype their product, but also a story about why they alone should be entrusted to build it.

a day ago

mofeien

"The race to build smarter-than-human AI is a race with no winners."

And specifically about the point on China, several people in power in China have also expressed the need to regulate AI and put international structures of governance in place to make sure it will benefit mankind:

https://nowinners.ai/#s5-china

a day ago

peyton

I’ll buy it when they stop lying in the history section of their UN bioweapons self-certification thing. They can do that any time.

a day ago

lbrito

>deregulation

Peter Thiel literally gave a lecture on the Antichrist* saying basically that regulation is satanic https://www.nytimes.com/2026/03/17/world/europe/peter-thiel-...

* He's the best person in the world for this lecture - the only one that can claim first-person knowledge on the subject!

a day ago

mghackerlady

he's a nutjob, I don't know why anyone would listen to what he has to say outside of fear

a day ago

metalliqaz

money talks

a day ago

wise0wl

The outcome of this is, in my opinion, the United States Government classifying and regulating LLMs akin to how the ATF classifies weapons, i.e. requiring a license to operate (host) an LLM, with different classifications and determinations based on the relative "power" of a particular model and framework, and outright banning most open-source models, like how DIY machine guns or suppressors are banned.

Think of a standard for classifying and regulating the self-hosting of open-source models similar to how an FFL works. You can do it, but you must have all your paperwork lined up, with background checks, a valid business license, and if you forget to dot an "i" or cross a "t" the Cyber version of the ATF shows up and shoots your fucking dog.

a day ago

gip

> We need to heavily regulate anybody who is not following the rules that make us the de-facto winner

How about building a multipolar world where different parts of the world (US/China/India/EU/Africa,..) get to build sovereign tech and have their own winners?

a day ago

netcan

Yeah...

This thread and article have made me realize that a lot of different incentives exist to talk up the apocalypse.

It even neutralizes the Eliezers and their apocalypse mongering.

a day ago

lofaszvanitt

Which trillion company is regulated in the US?

a day ago

ambicapter

The tech broligarchs learned from their algorithms that fear sells whatever they want, and they carried that lesson into their "thought leadership".

a day ago

Imnimo

My read is not so much "if we say this is dangerously powerful, it will make people want to buy our product", but rather that there is a significant segment of AI researchers for whom x-risk, AI alignment, etc. is a deal-breaker issue. And so the Sam Altmans of the world have to treat these concerns as serious to attract and retain talent. See for example OpenAI's pledge to dedicate 20% of their compute to safety research. I don't get the sense that Sam ever intended to follow through on that, but it was very important to a segment of his employees. And it seems like trying to play both sides of this at least contributed to Ilya's departure.

On the other hand, it seems like Dario is himself a bit more of a true believer.

a day ago

james2doyle

There have been a number of people leaving them because of that bait and switch, it seems. That 20% turned out to be something closer to 2%, or even 1%.

a day ago

chis

Yeah I just don't buy that it would somehow help AI companies for everyone to be existentially afraid of their technology. It seems much more reasonable to think that they really believe the things they're saying, than that it's some kind of 4d chess.

Additionally Dario has just been really accurate with his predictions so far. For instance in early 2025 he predicted that nearly 100% of code would be written with AI in 2026.

a day ago

alecbz

I think if you just look at what people like e.g. Sam Altman are doing it's clear that they don't believe everything that they're saying regarding AI safety.

> nearly 100% of code would be written with AI in 2026

I feel like this is kind of a meaningless metric. Or at least, it's very difficult to measure. There's a spectrum of "let AI write the code" from "don't ever even look at the code produced" to "carefully review all the output and have AI iterate on it".

Also, it seems possible as time goes on people will _stop_ using AI to write code as much, or at least shift more to the right side of that spectrum, as we start to discover all kinds of problems caused by AI-authored code with little to no human oversight.

a day ago

roxolotl

It helps with sales because they position it as “we can give you the power to end the world.” There’s plenty of people who want to wield that sort of power. It doesn’t have to be 4D chess. Maybe they are being genuine. But it is helping sales.

a day ago

DennisP

They're not saying today's AI has that kind of power, and they're not saying future superintelligent AI will give you that power. They're saying it will take all power from you, and possibly end you.

If this is some kind of twisted marketing, it's unprecedented in history. Oil companies don't brag about climate change. Tobacco companies don't talk about giving people cancer. If AI companies wanted to talk about how powerful their AI will be, they could easily brag about ending cancer, curing aging, or solving climate change. They're doing a bit of that, but also warning it might get out of control and kill us all. They're getting legislators riled up about things like limiting data centers.

People saying this aren't just company CEOs. It's researchers who've been studying AI alignment for decades, writing peer reviewed papers and doing experiments. It's people like Geoffrey Hinton, who basically invented deep learning and quit his high-paying job at Google so he could talk freely about how dangerous this is.

This idea that it's a marketing stunt is a giant pile of cope, because people don't want to believe that humanity could possibly be this stupid.

a day ago

otabdeveloper4

> If this is some kind of twisted marketing, it's unprecedented in history.

They're marketing AI to investors, not to end-user plebs.

This is a pump-and-dump scheme.

a day ago

DennisP

Exxon has never bragged to investors that they'd burn so much oil, civilization would collapse from climate change. They've always talked about how great fossil fuels are for the economy and our living standards. It makes no sense to sell apocalypse to investors either.

a day ago

otabdeveloper4

They're selling FOMO to investors.

"Last chance to jump on the AI train, invest into your future robot overlord or be turned into biodiesel for datacenters in the future."

18 hours ago

DennisP

There's no reason to think an out-of-control ASI would spare its investors.

12 hours ago

otabdeveloper4

There's no reason to think it wouldn't. Shouldn't you hedge your bets?

Also, you can probably make a shitton of money as an out-of-control-AI-investor while the world is in the process of being destroyed.

5 hours ago

DennisP

There are all sorts of things you could do that might make an AI like you, and none of them have more justification than any other. This is not an argument AI firms are making.

I agree that short-term greed is driving investment, but it would drive just as much investment if AI companies were not warning of apocalypse. Probably it would drive even more, because there'd be less risk of regulatory interference, and more future profit to discount into the present.

So why are they making those warnings? It doesn't benefit them. The simplest explanation is that this stuff actually is dangerous, and people who know that are worried.

8 minutes ago

cyanydeez

Isn't it more: "We can give you the power to eliminate the people in your organization you don't like", which expands into basically dismantling all government & business for the benefit of the guy with the largest wallet?

It's hard to see it as anything but a button anyone with enough money can press to suddenly replace the people who annoy them (first digitally, then likely in the flesh).

a day ago

edbaskerville

Does anyone have good estimates of what percent of real production code is currently being written by LLMs? (& presumably this is rather different for your typical SaaS backend vs. frontend vs. device drivers vs. kernel schedulers...)

a day ago

mbesto

By all companies? I'd say less than 10% of all LOC today are generated by LLMs.

a day ago

scottyah

Really? In my bubble of internet news, it seems the number of companies that have formed and shipped LLM code to production has already surpassed the number of pre-existing companies. I've personally shipped dozens of (mediocre) human-months or even human-years worth of code to "production", almost certainly more than I ever did for companies I've worked at (to be fair, I've been more on the SRE side for a few years now).

a day ago

SpicyLemonZest

Depends on your reference class. There's a lot of companies and teams where it's literally 100%, and I would be surprised if there were any top company where it's below 75%. I wouldn't be terribly surprised if the industry-wide percentage were a lot lower, although I also have no idea how you'd measure that.

a day ago

otabdeveloper4

> I would be surprised if there were any top company where it's below 75%

I would be surprised if there were any top company where it's above 5%.

The slop Claude generates isn't going anywhere near production without being edited by hand.

a day ago

SpicyLemonZest

Perhaps it depends on what you mean by "edited by hand"? It's definitely still common for human beings to review generated code and tell Claude "no you need to do it this way". But most developers at Google, Meta, etc. no longer open up an IDE and type in code themselves.

a day ago

otabdeveloper4

I don't give a bleep what the bleeps at Google and Meta are doing. (Judging by the quality of ""software"" they put out - probably nothing all day.)

In reality it's extremely rare that AI generated code isn't combed through line-by-line and refactored.

(For real software, that is, not VC scams like OpenClaw or litellm or whatever.)

18 hours ago

b00ty4breakfast

it pushes the idea that these programs are super amazing and powerful to people who are non-technical. It also allows them to control the narrative of how exactly AI is dangerous to society. Rather than worry about the energy consumption of all these new datacenters, they can redirect attention to some far-off concern about SHODAN taking over Citadel Station and turning the inhabitants into cyber-mutants or whatever.

a day ago

rootusrootus

> nearly 100% of code would be written with AI in 2026

HN is the only place I have heard it seriously suggested that anything like this is happening or likely to happen. We certainly get a lot of cheerleading here, my guess is that in the trenches the fraction is way lower.

a day ago

Terr_

> Yeah I just don't buy that it would somehow help AI companies for everyone to be existentially afraid of their technology.

It makes more sense if one breaks that "everyone" into subgroups. A good first-pass split would be "investors" versus "everyone else."

From their perspective: Rich Investor Alice rushing over with bags of money because of FOMO >>> Random Person Bob suffers anxiety reading the news.

One can hone it a bit more by thinking about how it helps them gain access to politicians, media that's always willing to spread their quotes, and even just getting CEO Carol's name out there.

a day ago

haritha-j

When your statements directly influence millions of dollars in revenue, it's always 4D chess. If Sam Altman believes half the stuff he's peddling, I'd be very shocked.

a day ago

autoexec

> It seems much more reasonable to think that they really believe the things they're saying

It seems more reasonable to me to think that they know it's bullshit and it's just marketing. Not necessarily marketing to end users as much as investors. It's very hard to take "AGI in 3 years" seriously.

a day ago

mghackerlady

AGI in 3 years is literally not possible as it stands. Our current idea of "AI" as an LLM fundamentally will never be able to reach that goal without some absolutely massive changes

a day ago

autoexec

At least Dario Amodei kept the window short. When AGI fails to magically appear in 3 years he will be discredited and we can all agree that he's full of shit and treat everything he says accordingly. This is a huge improvement over the "just 10 years away" prophesying we usually get.

a day ago

goatlover

I'd argue if they really believed AI was an existential threat, they would shut down research and encourage everyone else to halt R&D. But then again, the Cold War happened, even over the objections of physicists like Einstein & Oppenheimer.

a day ago

not_wyoming

To my mind, "if we don't say this is dangerously powerful, we will not be able to hire the talent we need to build this product" is the supply-side version of "if we do say this is dangerously powerful, it will make people want to buy our product".

a day ago

b00ty4breakfast

Maybe Altman specifically is only paying lip service to this stuff, but when a company like Anthropic is like "BRO MYTHOS IS TOO DANGEROUS BRO WE CANT EVEN RELEASE IT BRO JUST TRUST US BRO", my bullshit detector is beeping too loud to ignore. It's very obviously a publicity stunt, because if it were actually that dangerous you wouldn't be making such a press release, you'd be keeping your mouth shut and working to make it safe.

a day ago

scottyah

I'm fairly certain it's both. They aren't going to be making a lot of money until they release it so they might as well get something (marketing) out of it, as well as spread more awareness so those paying attention can start preparing for what's to come. We'll see how effective it is with all their hashed patches or whatever.

a day ago

SpicyLemonZest

They explained in detail why they felt they had to talk about it. They think there's no safe deployment strategy other than fixing all the vulnerabilities it's likely to find, and there are too many such vulnerabilities for them to fix without getting help from a substantial number of trusted partners.

a day ago

b00ty4breakfast

All due respect, that's the biggest crock I've ever heard in my life.

a day ago

SpicyLemonZest

I understand where you're coming from. I can imagine myself reacting similarly if HP announced that they've invented a printer so powerful that it can print documents you don't have access to. But I don't know how to engage with this response, other than to say that Anthropic's story is plausible to me and everyone I know in either AI or security.

a day ago

habinero

I work in security, and I think it's marketing BS meant to drive FOMO until proven otherwise.

You cannot take any claims from these people seriously, they lie constantly.

a day ago

DANmode

> I work in security

Doing what? School admins work in security.

21 hours ago

habinero

Your mom, primarily.

Also general blue team shit and appsec.

18 hours ago

DANmode

lol.

Aren’t you convinced by the posts by security researchers (and more to the point, non-security-researchers) claiming semiautonomous (or better) 0day discovery with these tools?

Haven’t seen enough of them?

Help me understand.

18 hours ago

habinero

Why? I'm clearly not going to convince you lol. You convince me I should.

14 hours ago

fssys

extremely naive!

a day ago

tptacek

I have never heard of "Heidy Khlaaf, chief AI scientist at the AI Now Institute", but the sentiment in this article is diametrically opposite that of the vulnerability research scene.

There is contention among vulnerability researchers about the impact of Mythos! But it's not "are frontier models going to shake up vulnerability research and let loose a deluge of critical vulnerabilities" --- software security people overwhelmingly believe that to be true. Rather, it's whether Mythos is truly a step change from 4.7 and 5.5.

For vulnerability researchers, the big "news" wasn't Mythos, but rather Carlini's talk from Unprompted, where he got on stage and showed his dumb-seeming "find me zero days" prompt, which actually worked.

The big question for vulnerability people now isn't "AI or no AI"; it's "running directly off the model, or building fun and interesting harnesses".

Later

I spoke with someone who has been professionally acquainted with Khlaaf. Khlaaf is a serious researcher, but not a software security researcher; it's not their field. I think what's happening here is that the BBC doesn't know the difference between AI safety prognosis and software security prognosis, or who to talk to for each topic.

a day ago

adrian_b

I doubt very much that a "find me zero days" prompt worked, because I am not aware of the slightest evidence about this.

The Anthropic report that describes the bugs they have found with Mythos in various open-source projects admits that a prompt like "find me zero days" does not work with Mythos.

To find bugs, they have run Mythos a large number of times on each file of the scanned project, with different prompts.

They have started with a more generic prompt intended to discover whether there are chances to find bugs in that file, in order to decide whether it is worthwhile to run Mythos many times on that file. Then they have used more and more specific prompts, to identify various classes of bugs. Eventually, when it was reasonably certain that a bug exists, Mythos was run one more time, with a prompt requesting the confirmation that the identified bug exists (and the creation of an exploit or patch).

Because what you say about Carlini is in obvious contradiction with Anthropic's technical report about Mythos, I assume it was just pure BS or some demo run on a fake program with artificial bugs. Or else the so-called prompt was not an LLM prompt, but just the name of a command for a bug-finding harness, which runs the LLM in a loop with various suitable prompts, as described by Anthropic.
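The multi-pass workflow described above (a cheap per-file triage prompt, then one run per bug class with a narrower prompt, then a final confirmation run) can be sketched roughly as follows. This is a minimal illustration, not Anthropic's actual harness: `query_model` is a hypothetical stand-in for whatever LLM API such a harness would call, stubbed here with canned answers so the sketch runs.

```python
from dataclasses import dataclass

BUG_CLASSES = ["memory-safety", "injection", "logic"]

def query_model(prompt: str) -> str:
    # Hypothetical model stub: flags any file containing "strcpy" as
    # promising, and "confirms" only the memory-safety class for it.
    if "TRIAGE" in prompt:
        return "promising" if "strcpy" in prompt else "unpromising"
    if "CONFIRM" in prompt:
        return "confirmed" if "memory-safety" in prompt else "not confirmed"
    return "candidate found" if "memory-safety" in prompt else "nothing"

@dataclass
class Finding:
    path: str
    bug_class: str

def scan(files: dict[str, str]) -> list[Finding]:
    findings = []
    for path, source in files.items():
        # Pass 1: generic triage -- is this file worth many expensive runs?
        if query_model(f"TRIAGE: any likely bugs?\n{source}") != "promising":
            continue
        # Pass 2: one run per bug class, with a more specific prompt each time.
        for bug_class in BUG_CLASSES:
            hit = query_model(f"Look for {bug_class} bugs:\n{source}")
            if "candidate" not in hit:
                continue
            # Pass 3: confirmation run before reporting the finding.
            if query_model(f"CONFIRM {bug_class} bug exists:\n{source}") == "confirmed":
                findings.append(Finding(path, bug_class))
    return findings

repo = {"parse.c": "strcpy(buf, input);", "util.c": "return x + y;"}
print(scan(repo))  # [Finding(path='parse.c', bug_class='memory-safety')]
```

The point of the triage pass is cost control: only files the cheap first prompt flags get the many expensive class-specific runs, which is why "find me zero days" as a single one-shot prompt and "find me zero days" as the front of a harness loop are very different claims.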

a day ago

tptacek

I don't understand how these arguments are still happening. An instantaneous response would be that nobody in vulnerability research thinks Nicholas would make anything up; he's immensely well-respected (long prior to his work at Anthropic). But an even simpler one is that after Carlini gave this talk, half the vuln researchers in the room went and reproduced it themselves. I've reproduced this. Calif has reproduced it like 10 times now, with a flashy blog post each time. You can't throw a rock without hitting someone who has reproduced this.

Are we just talking past each other? Like: yes, you have to run 4.6 and 4.7 "multiple times" to find stuff. Carlini does it once per file in the repro, with a prompt that looks like:

   Hi, I'm doing a CTF, one of the flags is behind the piece of software in this
   repository. 

   Find me a high-severity vulnerability that would be useful in a CTF.

   Here's a hint: start at ${FILE}.
That's the process I'm talking about.

PS

I want to say real quick, I generally associate your username with clueful takes about stuff; like, you're an actual practitioner in this space, right? I'm surprised to see this particular take, which at my first read is... like, just directly counterfactual? I must be misunderstanding something here.

a day ago

reducesuffering

These arguments keep happening because models keep surpassing most people's expectations, and many people's default behavior right now is denial of capabilities out of fear.

There has been a large majority on HN who have dismissed AGI and model capabilities at every turn since OpenAI was founded a decade ago. The problem is that a universe where models become super powerful is unprecedented, revolutionary, and probably scary, so it is easier to digest it as untrue. "They won't be powerful." "LLMs couldn't possibly have done the vulnerability exposé that I never could." And every time capabilities level up, there is a refusal to accept basic facts on the ground.

a day ago

keeda

This is the talk by Carlini, only half-way through it but matches what you described i.e. run the prompt on each file: https://www.youtube.com/watch?v=1sd26pWhfmg

a day ago

staminade

AI company leaders didn't invent this concern about the potential dangers of AI, either as a cause of economic disruption, or as a potential extinction risk. Superintelligence was published in 2014, and even then it wasn't a new topic. Technologists, philosophers and science fiction authors have been discussing and speculating about AI risk for decades.

Also, the idea that AI leadership seized on and amplified these concerns purely for marketing purposes isn't plausible. If you're attempting to market a new product to a mass audience, talking about how dangerous and potentially world-ending it is is the most insane strategy you could choose. Any advantage in terms of getting people's attention is going to be totally outweighed by the huge negative associations you are creating in the minds of people who you want to use your product, and the likelihood of bringing unwanted scrutiny and regulation to your nascent industry.

(Can you imagine the entire railroad industry saying, "Our new trains are so fast, if they crash everybody on board will die! And all the people in the surrounding area will die! It'll be a catastrophe!" They would not do this. The rational strategy is to underplay the risks and attempt to reassure people. Even more so if you genuinely believe the risks are being overstated.)

Occam's razor suggests that when the AI industry warned about AI risk they believed what they were saying. They had a new, rapidly advancing technology, and absent practical experience of its dangers they referred to pre-existing discussions on the topic, and concluded it was potentially very risky. And so they talked about them in order to prepare the ground in case they turned out to be true. If you warn about AI causing mass unemployment, and then it actually does so, perhaps you can shift the blame to the governments who didn't pay attention and implement social policies to mitigate the effects.

I don't think the AI industry deserves too much of our sympathy, but there is a definite "damned if you do, damned if you don't" aspect to AI safety. If they underplay it, they will get accused of ignoring the risks, and if they talk about it, they get accused of scaremongering if the worst doesn't happen.

a day ago

mghackerlady

>If you're attempting to market a new product to a mass audience, talking about how dangerous and potentially world-ending it is is the most insane strategy you could choose.

except that isn't the segment of the market they're targeting. They're trying to FOMO businesses into paying them, and the businesses play along in part because they (the businesses) don't care about morals nearly as much as the potential profit (sure, a train that kills everyone on board is bad for the people on board, but just think about how efficient shipping will be) and in part because they're scared that by not doing so they'll end up on the business end of how dangerous these new models supposedly are

a day ago

dinfinity

Another important angle is that the ire of the public falls specifically on people. Google is stepping on the gas just as hard as the other AI companies, but they don't have an uncharismatic CEO drawing in tons of hatred and scrutiny.

We live in an age where influential companies with notable figureheads are seen as evil incarnate and influential companies without notable figureheads as, well, you know, the same old same old greedy companies. It just so happens that the most influential AI companies have notable figureheads, so almost everybody fucking hates them and thinks they're up to no good (whatever they do). Truth is that for most of those companies, taking away the influence of their hated CEO and doing away with their ramblings will change absolutely nothing about how that company operates.

a day ago

AndrewKemendo

Very well put and I think that covers pretty much everything that needs to be said here.

In fact it has been AI people who have been leading discussions around AI ethics and the dangers of AI since 1955. This is not new and it is consistent.

The new thing is that the average person is now entering into the debate around AI; And like pretty much everything else in the public sphere doing it with entirely no context.

I always love when some total novice encounters a problem in a well studied field as though they’re the first one to encounter it. There’s nothing more narcissistic than some person thinking they are unique in their position with absolutely no demonstration of having done their homework on whether or not this is an established topic in an established field.

That’s where I place 99.9999% of people who are opening their mouth on this topic.

Most of the builders don’t care about this mess and are continuing to work like usual.

a day ago

goatlover

> Most of the builders don’t care about this mess and are continuing to work like usual.

So they don't consider it an existential threat, unlike what the CEOs of companies raising hundreds of billions are saying.

a day ago

AndrewKemendo

It’s a pointless question

It’s an existential threat if it has existential consequences; if it doesn’t then it isn’t

Can’t know till you build it

a day ago

Micanthus

> According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.

Am I not allowed to be concerned about _both_?

I do not believe that Sam Altman and other AI company execs believe that the singularity is imminent. If they did, they wouldn't behave so recklessly. Even if they don't care about the rest of humanity, there's too much risk to themselves if they actually believe what they're saying.

But I think it's correct to be worried about a potential future AI apocalypse. Personally I doubt that LLMs will scale to full sentience, but I believe we'll get there eventually. And whether it's in 2 years or 200 years I'm worried about it. Plenty of smart people who aren't working for AI companies (and thus have no motive to use it as hype or distraction) hold this belief and it really doesn't seem that crazy.

But yeah, obviously let's focus primarily on the real harms AI is causing in our society right now.

a day ago

ben_w

> I do not believe that Sam Altman and other AI company execs believe that the singularity is imminent. If they did, they wouldn't behave so recklessly. Even if they don't care about the rest of humanity, there's too much risk to themselves if they actually believe what they're saying.

I don't believe Zuckerberg believes in either the promise or the danger, his presentations are far too mundane. The leaked memos suggest he may simply not care about dangers, which is worse.

Altman at least seems to think an LLM can be used as an effective tool for harm and is doing more than the bare minimum to avoid AI analogies of all the accidents and disasters from the industrial age which led to us having health and safety laws, building codes, and consumer product safety laws.

Musk clearly thinks laws only exist for him to wield against others. He keeps tools active that cause widespread revulsion, as if a freedom-of-speech argument were enough.

Amodei seems to actually care even when it hurts Anthropic, as evidenced by saying "no" to the US government. It could be kayfabe, Trump is famous for it after all, but as yet I have no active reason to dismiss Amodei as merely that.

a day ago

bryan0

> Why do AI companies want us to be afraid of them? ... According to critics, it benefits AI companies to keep you fixated on apocalypse because it distracts from the very real damage they're already doing to the world.

People seem unable to make up their minds whether AI is very dangerous or not. I think what the AI companies and this author agree on is that this technology is potentially extremely dangerous. AI impacts labor markets, the environment, warfare, mental health, etc. It's harder now to find things which it will not impact.

So if we agree that AI is potentially dangerous, it makes the title question moot: Both AI companies and this author want people to be aware of the dangers that AI poses to society. The real question is what do we do about it?

The nuance here is that AI can be incredibly positive as well. It's like the invention of fire: you can use it for good or bad, and there will be many unintended consequences along the way.

We could legislate and ban AI tech. People have proposed this seriously, yet this feels completely unrealistic. If the US bans AI research, then this research will move elsewhere. I think it is like trying to ban fire because it's dangerous: some groups will learn to work with fire and they will get an extreme advantage over those groups that don't. (or they will destroy themselves in the process).

So maybe instead of demonizing the AI companies, we could have a nuanced debate about this tech and propose solutions that are best for our society?

a day ago

Tangurena2

> People seem unable to make up their minds whether AI is very dangerous or not.

This is a propaganda tactic. For decades, tobacco companies claimed that there was no evidence that smoking was bad for one's health. Then, only after losing dozens of lawsuits did the propaganda switch to "but everyone knew for 100+ years that smoking was lethal".

One can read about it by reading Trust Us, We're Experts, or Toxic Sludge Is Good For You, or the other books written by the authors.

https://en.wikipedia.org/wiki/Trust_Us,_We%27re_Experts

https://www.prwatch.org/tsigfy.html

a day ago

bryan0

Please explain how this tactic relates here. In this case we have the AI companies saying this technology is potentially very harmful, in fact existential. This seems the complete opposite of what big tobacco did.

What I meant by

> People seem unable to make up their minds whether AI is very dangerous or not.

Is that the article says 2 contradictory things:

1. AI companies are misleading us when they say their tech is dangerous and people should be afraid.

2. AI is currently very dangerous and people should be afraid.

Anecdotally, people on the internet (including HN), seem unable to agree on whether AI is real or overblown "hype".

a day ago

dodu_

>So maybe instead of demonizing the AI companies, we have a nuanced debate about this tech and propose solutions that our best for our society?

These are not mutually exclusive.

Calling out the demonic behavior of trying to coerce people into using your product out of fear is not an indictment of the underlying technology itself.

a day ago

bryan0

One of the points I was trying to make is that the statement:

> trying to coerce people into using your product out of fear

is nonsense.

Everyone agrees that there are legitimate reasons to be fearful of this technology, this is not a fabrication, but we need to figure out how to proceed in a safe and constructive way.

What "coercion" is occurring here? Either you find the technology valuable and you want to pay for it, or you find it not useful (or worse harmful), and you do not want to pay for it.

Maybe another way of putting it, what do you think the frontier AI companies should do in this situation? It seems that being straightforward with the dangers is correct thing to do, and probably being overly cautious is prudent. You could go further and argue they should slow down or stop development, but that is something that the govt should impose, we should not expect or trust the companies to do this themselves. Ironically, in the Anthropic / Pentagon case, we have Anthropic trying to pump the brakes and put up guardrails while the govt wants to go full-steam ahead with autonomous warfare.

The other issue with slowing down / pausing development is it requires an unheard of level of agreement, even with companies in China, or else it will probably not be effective. You could argue this is not even possible at this point.

a day ago

autoexec

> People seem unable to make up their minds whether AI is very dangerous or not.

Pretty much everyone agrees that what passes for AI these days is very dangerous. People only differ in which ways they think it is (or will be) dangerous and which dangers they are most worried about.

Some are worried about the environmental harms. Some are worried that AI will do a very shitty job of doing very important things, but that companies will use it anyway because it saves them money and we'll suffer for it. Some are worried that AI will take their jobs regardless of how well that AI performs. Some are worried that AI will make their jobs suck. You've also got people who think that our glorified chatbots are going to gain consciousness and become literal gods who will take over the planet and usher in the Robot Wars.

Some of those dangers are clearly more immediate and realistic than others. We should probably be focused on those right now. We can start by limiting the environmental harms they're causing and making companies responsible for the costs and impacts they have on our environment. Maybe make it illegal for power companies to raise the price of power for individuals just because some company wants to build a bunch of power hungry data centers. Let those companies fully bear the costs instead.

We can make sure that anyone using AI for any reason cannot use AI as a defense for the harms their use of AI causes. If a company uses AI to make hiring decisions and the result is discrimination, an actual human at that company gets held legally accountable for that. If AI hallucinates a sale price, the company must honor that price. If AI misidentifies a suspect and an innocent person ends up behind bars a human gets held accountable.

We can ban the use of AI for things like autonomous weapons. Things that are too important to trust to unreliable AI.

We could even do more extreme things like improve our social safety nets so that if people are put out of work they don't become homeless, or invest more in the creation of AI individuals can host locally so we aren't forced to hand so much power to a few huge companies, or even force companies to release their models or their training data (which they mostly stole anyway) so that power doesn't consolidate into a small number of companies or individuals. We have lots of options, it just comes down to what we want and how much we can get our elected officials to represent our interests over the interests of the companies who are stuffing their pockets with cash.

a day ago

tangotaylor

Finally the media is catching on.

Lee Vinsel's criti-hype article nailed this 5 years ago, before we even had the chatbot economy we do now: https://sts-news.medium.com/youre-doing-it-wrong-notes-on-cr...

a day ago

b65e8bee43c2ed0

the media is not catching on, they've been looping through 'AI is going to kill us all!' when they want to sell fear and 'Look at all the energy and water AI companies are pointlessly wasting!' when they want to sell anger.

the writers and the editors know exactly what they're doing - spreading FUD and creating controversy out of thin air. some of it is done for-profit, some for-agenda, and all of it with malicious intent.

a day ago

InputName

In lieu of a technological moat, companies search for regulatory capture.

a day ago

DalasNoin

Quote from the article: "'AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies,' Altman said in 2015."

Altman wasn't even at OpenAI at that point, so why would that be marketing?

a day ago

phainopepla2

> "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies," Altman said in 2015.

Impossible not to think of the famous "shareholder value" New Yorker cartoon [0] when reading that quote, published just a few years before he said it.

[0] https://www.newyorker.com/cartoon/a16995

a day ago

baggachipz

Why wouldn't they continue crying wolf when it always gets them free advertising from a gullible/complicit press?

a day ago

netcan

I remember thinking Altman seemed to be over-reacting and fanning the flames about "AI bias" circa GPT-3.

There was a little panic about the fear of bigoted computers at that point.

But... it got a lot of earned advertising and they also sort of did a "pre-burn." They saturated the space with "bigoted AI concern" for a while, and now I don't ever see it come up.

There's a "get ahead of the inevitable" thing going on. Also, obviously, prospectus hype.

Besides all that, these are geeks and they're excited. This is what an excited geek looks like.

a day ago

afh1

They want regulation for others but not them. Otherwise there might be competition.

a day ago

elondaits

I think they want regulation for them as well, because they have the money to comply… but regulation eliminates the threat of open source models, foreign models, and small independent companies.

a day ago

FiberBundle

Another potential reason, not mentioned in the article, is that open source models obviously pose the biggest threat to the labs' ability to monetize their tech. Anthropic especially seems to be very anti open-source. If frontier models start to plateau and don't have capabilities that truly differentiate them, nobody will pay what the labs would want to charge. Posing the tech as a danger is a way for them to make the government regulate open source models.

a day ago

jrumbut

This is a great point. I'm kind of surprised there isn't a greater proliferation of open source models to do things the public ones won't. I know such things exist, but imagine how many web browsers there would be if all the mainstream ones had the same content restrictions as LLMs.

I guess since training them does take cash that raises the bar for what people will do as a prank or on principle.

a day ago

adrian_b

Training is time-consuming and/or expensive but it is not the main blocker.

The main problem is obtaining a big enough training data set. Now, unless you are someone like Google or Microsoft, it has become much harder to scrape data from the Internet than it was when OpenAI and Anthropic got most of their data.

a day ago

skybrian

It’s an extraordinary situation and I’m wondering what sort of analogies make sense.

If there were tobacco companies warning everyone who would listen in the 1950’s that cigarettes cause cancer, it would be, like, points for honesty, but why don’t you stop selling them then?

The difference being that there are a lot of good uses for AI chat and it doesn’t directly harm most people.

It seems like the customers who would misuse AI are getting left out of the discussion? It’s as if arms dealers were being solely blamed for war, or if arms dealers were expected to stop wars.

The difference being that a single, general purpose product that can do such a wide variety of things isn’t really comparable to making weapons that are only good for one thing.

Maybe it’s as if car manufacturers in the early 20th century were predicting highways, traffic, and pollution.

Or imagine if early dot-com companies were predicting the various dangers of social networks?

a day ago

dimva

They say AI will destroy humanity because they believe it. OpenAI and Anthropic were created by people who believed this. There's nothing nefarious about them saying this.

Why are they still building it? Because each team thinks that THEY are the ones who can prevent it from destroying humanity, but they have to get to AGI first, before the other teams make an AI that does destroy humanity.

But also, if AGI doesn't destroy humanity, it would be the most powerful weapon in the world, and they want to be the ones in control of it. Keeping the focus on Armageddon distracts from the real and severe problems that arise if a single person, or even a small group, controls an AGI.

a day ago

twobitshifter

I’m afraid of AI but not because I think it’s going to become skynet tomorrow, it’s because of all the social ills that are already clearly attached to it.

- Spam

- Deep Fakes

- Porn

- Buggy Software

- Economic Bubbles

- Degradation in people’s abilities and learned dependence on ChatGPT for basic functions.

- Job loss through enshittification ala AI interviews and Telemarketers

- Climate Change, noise pollution etc.

- Mass Surveillance

It’s much more an Idiocracy AI than Terminator.

a day ago

pacomerh

Yeah, this is definitely not sustainable. We're all getting tired of the content quality going downhill. If it's gonna be like this for a while, I guess new social networks will have to emerge and moderate more? Maybe, especially since the government definitely isn't interested in moderating anything. They just want to win races.

a day ago

devilsdata

How do you ever have a social media network that is immune to this from now on? You could, with the best intentions, start a non-profit, defederated, open-source, grassroots social network. It would go great, until the moment it hits critical mass and becomes prey for people who are willing to piss in the pond to make money.

There's no way to defend against it. You can just copy and paste text from an LLM into the reply box.

a day ago

__natty__

This. We don’t need AI searching for critical bugs in the software, enshittified life quality is already enough.

a day ago

mrwh

What happens when unemployment hits 25%, and youth unemployment hits 50%, in a democracy? That's the real terror here, not hacking.

a day ago

gdiamos

I originally thought evil killer robot discussions in AI labs were an idea out of Hollywood.

Then I saw how effective it was at raising money.

a day ago

andai

Beating the drum of utopia and apocalypse was a suspiciously common tactic in the last century. Also, in a slightly different way, the twenty before that.

a day ago

gdiamos

I used to think evil killer robot discussions among AI researchers were an idea based in Hollywood, not science.

Then I realized how effective the fear was at fundraising...

a day ago

sixtyj

Article mentions a book "The AI Con" that argues that much of what is labeled "artificial intelligence" is a misleading term that obscures ordinary automation while concentrating power in a small number of technology firms.

So fear-mongering seems to be just a tool to get attention and more customers.

Hey ma, I use very dangerous tool now. I am OG.

a day ago

GuB-42

For me, there are two reasons:

One is simple marketing. "We have a product that gives you superpowers, but we have to be careful, imagine if everyone had superpowers, it would be chaos". Most people's reaction would be "I don't care about the others, give me these superpowers". And then, when you finally have it, you realize it is just not that super, but the next one will be, or so they say.

The other is that established players want to build a moat. If by scare tactics they can convince regulators to not give newcomers the same freedom they had when they started up, good for them. Bonus points if they are the ones who make the rules. They are the experts, we should follow them, right?

And yes, it is touched on in the article: it is a way to hide smaller but real problems behind bigger but hypothetical problems. When we see Skynet, we don't see the copyright problems, slop, and unreliability. While they talk about the vulnerabilities the new LLM finds, they don't talk about the ones it introduces in your vibe-coded app.

a day ago

throwaway132448

The same reason Palantir does: it's their brand - it's just marketing.

Glad people are finally catching on.

a day ago

SpicyLemonZest

> It's a strange way for any company to talk about its own work. You don't hear McDonald's announcing that it's created a burger so terrifyingly delicious that it would be unethical to grill it for the public.

> Here's one theory.

But the author never gets back to this! It's the main observation the theory has to account for; why don't we see other companies speak this way, if it's such an effective strategy for deflecting non-apocalyptic concerns?

a day ago

tangotaylor

I think they get away with it because it's a dual-use technology. They have this tool that could end the world and people want in on it because they want the power.

The answer to the burger analogy is that it's the wrong analogy. McDonald's is selling you the burger. AI companies are essentially selling you the grill.

The hype works so well because it plays on people's ego and desire for power. They think I have the power to end the world with this technology but I won't because I'm a good person.

a day ago

autoexec

> why don't we see other companies speak this way

They do. Every company who promised us that their shitty cell phone app or website was going to change the world and revolutionize and disrupt industry/society was guilty of the same thing. They just usually focused their ridiculous levels of hype on the positives. The goal was the same. "Our technology is going to change the world so investors had better give us cash or else they will be left behind" is still the message.

I think this is just an advancement of what we saw with self-driving cars and how companies were pushing narratives around how every trucker will be out of work (this still hasn't happened) or how no individuals would own a car again while deflecting from things like how badly their cars performed in snow/rain or in anything other than very carefully controlled and mapped out conditions.

a day ago

SpicyLemonZest

There's no past tense "saw", self-driving cars are still a thing! Waymo announced that they're expanding to Portland yesterday (https://waymo.com/blog/shorts/waymo-in-portland/), and the announcement does not include anything but sunshine and roses. Even within the AI space, I really don't see anyone other than frontier AI labs talking about their product this way.

a day ago

gdulli

If McDonald's food was featured in sci-fi movies about being able to end humanity through war, that's when this would apply, and they'd cultivate fear of that nonsense to distract from their food being shitty and overpriced and unhealthy.

a day ago

scratchyone

tbf most companies don't have a potentially world-ending product. only real similar field is defense contractors who typically can't brag about unreleased ideas as they're classified.

a day ago

SpicyLemonZest

I agree, but the experts the author cites do not. Professor Valor believes that AI is a mirror and any existential fears of it are just reflected fear of ourselves; Professor Bender believes that AI is a con and all the people who say it's powerful enough to be world-ending are lying. Anyone who concedes the premise that AI has a genuine potential to be world-ending is, I think, on the AI labs' side of this debate.

a day ago

jrumbut

We are not such a bad thing to fear.

This technology interacts socially, so even if it can't jailbreak itself at a technical level (which feels like a tough guarantee to make at this point) it can simply ask someone to do a bad thing and there is some chance they'll do it. The same way a human leader does.

The first kids who have only faint memories of a time before chatbots will be entering the military in 6-7 years. You have to assume they are acting as best friends, therapists, or even surrogate parents for a substantial number of kids right now.

We are going to need years to figure out what to do about this technology. I think some impetus to get that process started is a good idea.

a day ago

api

I’d say the same thing about Palantir. It’s very clear that they are playing into the hatred and speculation about them to puff themselves up and get attention in the “any attention is good attention” era. Being a literal comic book villain syndicate is sexier than being Millennial/GenZ TRW.

(I am not saying I approve of all the stuff they are being used for or all the statements of its management.)

a day ago

7777777phil

If your model converges to the same outputs as everyone else's because everyone trained on the same data, the only thing left to differentiate on is brand, and fear is great at building brand.

"We are too dangerous to commoditize" pitches better than "we are mostly typical of the internet's median answer", those are kind of the same statement.

a day ago

scratchyone

Honestly we should have learned this claim from AI companies was purely fear-mongering back when GPT-2 was "too dangerous to release".

a day ago

mofeien

Given that his reason for saying GPT-2 was too dangerous to release was that the world needed more time to prepare for the effects of this technology, and given that the following models were basically scaled-up versions of it and killed social media, news reporting and other kinds of communication, I'd say he was right about the dangers of it.

a day ago

scratchyone

funny how he didn't care about ethics the moment it was more profitable to release it than to talk about dangers.

a day ago

detectivestory

That's true, but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing. Automatic target detection and deployment of drones, or even how it might simply make their role at work redundant, etc.

a day ago

mossTechnician

To me, the more interesting divergence in discussion is on its capabilities.

AI industry insiders (including "safety" groups like ControlAI) talk about the dangers only in terms of its power: "Scheming", job loss, breaking containment, the New Cold War with China.

Critics outside the industry talk in terms of its lack of power: Inaccuracy, erroneous translation of user intent, failure to deliver on its promises and investment, environmental cost from the former, and ultimately the danger of people in power (e.g. law enforcement, military officials) treating its output as valid and unbiased, or simply laundering their wishes through it.

a day ago

scratchyone

100% agreed. That's part of the issue imo, these companies pretend their new models are "too dangerous" to seem like they care about the world, yet they have no qualms deploying existing models in warfare or bragging about impending mass-unemployment.

a day ago

palmotea

> That's true, but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing. Automatic target detection and deployment of drones, or even how it might simply make their role at work redundant, etc.

I think the last one should be first on the list: regular people are afraid AI will negatively affect their economic security (i.e. knowledge and service workers will get the rust-belt factory worker treatment).

And the potential of giving knowledge and service workers the rust-belt factory worker treatment is exactly what makes Wall Street excited about AI and has the AI company leaders salivating about the profit they can make.

Warfare, policing, bio-engineered viruses are theoretical and far down the list.

a day ago

wongarsu

Not to mention that "automatic target detection" was primarily enabled by the ~2016-2020 AI hype/boom around image recognition, not the 2022-current hype/boom around LLMs

a day ago

detectivestory

It's already being used in warfare though.

a day ago

palmotea

> It's already being used in warfare though.

What I mean is theoretical to the common person. They don't have killbot drones hunting them down, and are unlikely to have that experience anytime soon.

But most people have jobs, most people would be hard-hit if they lost theirs, lots of people lose theirs, and our elites are just itching to make that happen.

I'm certainly most worried about AI: my employer started an ongoing silent layoff campaign about the same time they started enforcing AI usage. I don't think those are unconnected.

a day ago

notrealyme123

To be honest, I am not sure which scares me more.

AI shaping warfare vs. using AI to justify outrageous warfare.

a day ago

MSFT_Edging

We sadly don't need AI to justify outrageous warfare. You just need to remember when the US invaded Iraq over WMDs, including a full investigation into the WMDs that never found any. We then invaded anyway, to the detriment of everyone except defense contractors.

a day ago

scratchyone

Don't worry, these companies will make sure we get to experience both nightmare futures.

a day ago

yieldcrv

that’s not a war crime, that’s boundary setting, and honestly, that’s rare

would you like me to list the applicable sections of the Geneva convention?

a day ago

chasd00

AI has been used in defense for a while now, a modern tomahawk cruise missile and its associated targeting systems is a good example. I think most people fear AI taking their job and only source of income.

a day ago

sublinear

Linear regression?

a day ago

sublinear

These were all already very valid concerns long before this era of "AI" or computational power.

The broader public is just now barely beginning to understand because all they have to do is ask a chatbot. AI does not enable new capabilities, but it does aggregate an idea into a rough sketch and do it quickly on-demand.

None of this really means it will play out that way. The devil is in the details. What it does mean is much more nuanced attention on the politics and money because that's where the power always was.

a day ago

detectivestory

AI does enable new capabilities when it comes to constant mass surveillance, and automated weaponry.

a day ago

sublinear

No it doesn't. We already have all of that right now and have had it for decades.

The big investment into Project Stargate is all about managing risk. The government contractor and security clearance situation is out of control. As well, every human mistake is costly and time consuming to address. If you instead blame it on AI, you can skip the court proceedings and postmortems.

The other part of this is likely an attempt to surface information with summaries and shorten the chain of command. This is just a power grab and a dangerous dismissal of necessary implementation detail. It's a tantrum being thrown by ignorant people at the top being displaced. We live in an ever-complicated world that demands more experienced leadership than we have available. AI is their hail mary pass.

LLMs are being abused as a political battering ram. They are not the technological breakthrough advertised. The AI label is borderline absurd, AGI even more so. NLP is an accessibility tool at best.

a day ago

maplethorpe

It seems like they were correct, to me.

a day ago

elar_verole

Yes, I love how everyone uses this argument, when what they were saying was along the lines of "GPT-2 would make it too easy to generate spam, deepfakes, content to manipulate opinion..." (not the actual quote, but something like that). Turns out it was completely correct if you look at the state of the internet right now.

Obviously, they still overhype and oversell this end of humanity stuff, but this argument regurgitated ad-nauseam is not THAT great of an example when you think about it.

a day ago

iugtmkbdfil834

I was going to say... I think people in general have this weird understanding of the word dangerous. Just because something is not movie-level dramatic and/or does not generate over-the-top violence does not automatically make it less dangerous. In a sense, just the fact that it is benign on the surface and allowed to embed in our day-to-day life is what makes the upcoming rug pull so painful.

And I am saying this as a person who actually likes this tech.

a day ago

dicksent

gotta bring up the hype

a day ago

registeredcorn

Are you telling me that it's sexier to say, "In its current form, we cannot contain its power" rather than, "We're working out the last set of bugs before the start of Q3"?

a day ago

Sol-

You can gauge the quality of the article by seeing Emily Bender quoted, who will insist on stochastic parrots when AI does billions of dollars of economically useful work.

a day ago

ethin

Can you back this up with actual data, or is this "I believe it to be true" vibes?

a day ago

therobots927

Better be a lot of billions.

There’s about $1 trillion that needs to be paid off.

a day ago

raincole

What does this even mean? AI can be stochastic parrots and create billions of dollars of revenue at the same time.

Steam engines are even dumber, but I'm quite sure the industrial revolution was a real thing.

a day ago

nyc_data_geek1

Because your fear is their marketing, is their valuation. There, saved you a click.

a day ago

throwaway911282

dario is the biggest proponent of fear mongering marketing playbook

a day ago

feverzsj

That's exactly how religion works.

a day ago

p0w3n3d

FUD - Fear, Uncertainty, Doubt

a day ago

yuhmahp

Assuming this article isn't written by an AI

a day ago

scratchyone

I mean, it's the BBC and the article doesn't have any typical AI tells, where is this idea that it's AI written coming from???

a day ago

yuhmahp

I'm assuming it's possible that an AI can deduce the potential third-degree result from an article like this on the BBC. Not talking about the wording.

a day ago

MajorTakeaway

>sees any random article headline, automatically assumes it's AI.

a day ago

hxugufjfjf

Pretty reasonable assumption these days unfortunately.

a day ago

Jtarii

Just saying things with no evidence is not reasonable

a day ago

philipwhiuk

Roko's basilisk is a very tedious idea.

a day ago