If AI scaling is to be shut down, let it be for a coherent reason

229 points
a year ago
by nsoonhui

Comments


skocznymroczny

Am I the only one who's not very concerned about ChatGPT and "AI" in general? I am young, but still lived through several hype phases. I remember when 3D TVs were going to be mainstream and 2D was considered legacy. I remember when PC was to die soon and to be replaced by smartphones. I remember when VR was to become as common an accessory as a game controller. It's 2023, and I still don't have an automated self-driving car that can get me to work. At work I am still using the boring old keyboard and monitor. I am not using a VR headset to connect myself to a shared office space inside of a metaverse. Oh and I don't have to call my hairdresser for an appointment, because my phone will use artificial intelligence to do that for me (remember that? yeah it was 5 years already, where's my magical AI tech).

I played with technologies like Stable Diffusion for a while. They are fun to use for a while, but there are too many unsolved issues, such as coherent style and stable style transfer for videos, and despite my best efforts every second image will have a human character with two heads or four arms.

I feel like ChatGPT is similar. It makes for a fun parlor trick, but when it gets things wrong, it gets them very wrong and it doesn't easily let you know that it's wrong. People are already plugging ChatGPT into anything from code to managing investments; it's just a matter of time until it crashes and burns. We are just waiting for the first ChatGPT version of "autonomous car rams pedestrian".

As for OpenAI, it's in their best interest for people to be scared and governments to propose regulations. It further solidifies ChatGPT as a force to be reckoned with, even if it isn't. They're trying to sell it as AGI even though it isn't anywhere near, but actions like this are helping to maintain that image.

a year ago

jacquesm

> I am young, but still lived through several hype phases.

My impression is that this is a tech revolution unlike any that has gone before, allowing for an absolutely massive concentration of power and wealth. The previous tech revolutions have shown the downsides of such concentration, and I'm quite sure that the people who are currently in the driving seat of how, when and where this tech gets used and/or weaponized are exactly the people that I would not want to see in that position.

The problem is not with the fact that the pig is dancing in a crappy way. The problem is that it is dancing at all; this is the thing that makes this a different kind of tech revolution. So far tech was doing what it was told, within certain boundaries. This is the kind of tech that will end up telling you what to do, either directly or through some intermediary. It is a substantial shift in the balance of power and has the potential to lead to even stronger divisions in our populations than what social media has been able to effect, as well as to cause massive loss of jobs.

Extrapolating from the last two years of this development into the future over the span of, say, a decade (a reasonably short span of time for me) means that 'all bets are off' and that we can no longer meaningfully predict the outcome of the moves that are being made today. For me that is a warning to be cautious and to gain more understanding of how we will deal with the problems it will generate, rather than to jump in head first to see where it will lead.

a year ago

evrydayhustling

I'm surprised by your impression that there are people "currently in the driving seat of how, when and where this tech gets used". I think we are in the grip of a wonderful, highly creative dis-order.

The most comparable moment in my lifetime was the late 90s, when the inevitability and transformative power of the Internet (through the web) became mainstream over a couple of years. This time that transition seems to be taking weeks! And yet, it is FAR more broadly accessible than the Web was in the late 90s. In America at least, where 85% of individuals have smartphones, ChatGPT is instantly accessible to a huge portion of the population. And unlike other advances it doesn't require specialized expertise to begin extracting value.

Meanwhile, LLM owners are being compelled by competition to continue offering services for free and to release research into the public domain. The engineering advances that power their platforms face a sideways challenge from research like LoRA that makes them less relevant. And because the training data and methods are ubiquitous, public institutions of many kinds can potentially build their own LLMs if rents get too high. Outside the runaway-superintelligence scenario, the current accessibility of LLMs is one of the best ways this could have played out.

I'm afraid those attempting to slow research will be unwitting accomplices for people who use the opportunity to reduce competition and consolidate power over the new technology.

a year ago

lukev

I agree with this 100%, while also disagreeing with the "robot god will kill us all" objections to AI which unfortunately tend to get most of the mindshare.

I think it's important to realize that these are two _completely separate_ concerns. Unfortunately, a lot of the people who get the most air time on this topic are not at all worried about authoritarianism or economic collapse compared to a hypothetical singularity.

a year ago

JohnFen

Personally, I'm worried that if the proponents of LLMs are correct, it will directly lead to authoritarianism and economic collapse.

a year ago

bvaisvil

Who's gonna pay for GPTs in an economic collapse?

a year ago

slowmovintarget

You're presupposing the collapse touches the rich.

"A quart of wheat for a denarius, and three quarts of barley for a denarius; but do not harm the oil and the wine!"

a year ago

ChatGTP

So how does this situation actually look:

Bill Gates sitting on a yacht controlling a robot army while we're all starving?

a year ago

JohnFen

The wealthy and powerful. Same as now.

a year ago

Avicebron

[dead]

a year ago

dTal

I wish this viewpoint were more common. It's frightening to see how rapidly the "AI safety / alignment" discourse is being co-opted into arguments that AI should follow the whims of large centralized corporations. We have no idea what secret instructions OpenAI or the others are giving their "helpful" assistants. I find the notion that AI will spontaneously become a paperclip maximizer much less immediately terrifying than the clear and present danger of it being co-opted by our existing soulless paperclip maximizers, corporations, to devastating effect.

a year ago

wrycoder

Yeah, the LLMs at the three letter agencies communicating directly with their LLM counterparts at FB and Google. And Twitter, once Musk moves on, and that site gets brought back into the fold.

The social issues need to be addressed now.

a year ago

michaelmior

> AI should follow the whims of large centralized corporations

I'm not arguing that AI should follow the whims of large centralized corporations, but given the cost of training large models such as GPT-4, what's the alternative?

Do we need large language models as a taxpayer-funded public utility? Perhaps a non-profit foundation?

I'm not sure what the solution is here, but I am concerned that right now, large corporations may be the only ones capable of training such models.

a year ago

haberman

> So far tech was doing what it was told, within certain boundaries. This is the kind of tech that will end up telling you what to do, either directly or through some intermediary

The financial crisis in 2008 was caused by investors who made risky bets based on models they built telling them that mortgage-backed securities were a safe investment. Our understanding of the world is guided by social science that seems to involve an incredible amount of p-hacking using statistics packages that make it easy to crunch through big datasets looking for something publication-worthy. It seems like tech already gives people plenty of tools to make poor decisions that hurt everybody.

a year ago

jacquesm

Indeed it does, and we haven't got that under control yet, not by a very long distance. So I figure better take it easy before adding an even more powerful tool to the mix.

a year ago

Ygg2

> Extrapolating from the last two years

Therein lies the error. People forget reality is finite, and just because something has improved recently doesn't mean it will continue improving indefinitely.

  An exponential curve is just a sigmoid curve in disguise.

Most AI systems I've seen suffer from catastrophic errors in the tail end (the famous example of two near-identical cat pictures classified as "cat" and "dog" respectively).
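
To make the sigmoid point concrete, here is a minimal sketch (assuming numpy, purely for illustration) of how a logistic curve tracks an exponential almost exactly until it approaches its ceiling:

    import numpy as np

    t = np.linspace(0, 10, 11)
    K = 1000.0                                     # carrying capacity (the ceiling)
    r = 1.0                                        # growth rate
    exponential = np.exp(r * t)                    # unbounded exponential growth
    logistic = K / (1 + (K - 1) * np.exp(-r * t))  # starts at 1, saturates at K

    # Early on the two curves are nearly identical; they only diverge near the ceiling.
    for ti, e, s in zip(t, exponential, logistic):
        print(f"t={ti:4.1f}  exponential={e:10.1f}  logistic={s:8.1f}")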

a year ago

HDThoreaun

What are the odds scaling continues to lead to massive AI improvements? No one is saying 100%, yet you seem to be arguing that they are. If you're willing to put a confidence interval on the odds, with evidence, we can have an actual conversation about what the best course of action is, but just talking past each other with "it might continue scaling" "no it won't" doesn't seem particularly helpful.

I think the important thing here though is the difficulty in creating an accurate confidence interval that isn't [0-100]. We are truly in uncharted territory.

a year ago

Ygg2

> No one is saying 100%

Points at AI moratorium. I think people are arguing it's inevitable.

Putting error bars on gut feeling. Interesting idea. I'd say in 10-20 years we'll not see anything revolutionary, as in AI smart enough to continue working on improving AI.

So in 10-20 years I don't expect fully self-driving cars (unsupervised, any terrain, a better driver than 99.9% of humans).

AI might see use in industry, but I doubt it will be unsupervised, unless we start living in Idiocracy and decide highly risky tech is better than the average person.

a year ago

barking_biscuit

>I'd say in 10-20 years we'll not see anything revolutionary, as in AI smart enough to continue working on improving AI

You do realize we've just at least doubled the amount of cognitive bandwidth on earth right? For every one brain, there are now two. A step-wise change in bandwidth is most definitely going to have some very interesting implications and probably much sooner than 10 years.

a year ago

Ygg2

> You do realize we've just at least doubled the amount of cognitive bandwidth on earth right?

What do you mean? The human population is now past the exponential part of the curve and entering the saturation point.

Or do you mean adding ChatGPT? Then it's not doubled. Pretty sure that's centralized.

a year ago

barking_biscuit

> Or do you mean adding ChatGPT? Then it's not doubled. Pretty sure that's centralized.

Is it though? Think about it. If I have a medical or legal conversation with it that I derive value from, that's akin to a fully trained professional human who took several decades to grow into an adult and then undergo training just popping into existence in seconds during inference and then just as quickly disappearing. Just a few short months ago, the only way I could have had that conversation was to monopolize the cognitive bandwidth of a real human being who then couldn't utilize it to do other things.

So, when I say doubled, what I mean is that in theory every person who can access a sufficiently powerful LLM can effectively create cognitive bandwidth out of thin air every time they use the model to do inference.

Look what happened every time there has been a stepwise change in bandwidth throughout history. Printing press, telephone, dial up internet, broadband, fiber optic. The system becomes capable of vastly more throughput and latencies are decreased significantly. The same thing is happening here. This is a revolution.

a year ago

tome

> You do realize we've just at least doubled the amount of cognitive bandwidth on earth right?

I don't realize that. What do you mean exactly?

a year ago

barking_biscuit

LLMs can perform cognitive labor. Moreover they can increasingly perform it at a level that matches and/or surpasses humans. Humans can use them to augment their own capabilities, and execute faster on more difficult tasks with a higher rate of success. In addition, cognitive labor can be automated.

When bandwidth increases, latency decreases. We're going to be able to get significantly higher throughput than we were previously able to. This is already happening.

a year ago

ChatGTP

Nope, it's now another voice, a colleague whose work you constantly have to check over because you can't trust it enough to just say, "go ahead, I know you're trustworthy".

> This is already happening.

??

a year ago

barking_biscuit

Sort of just proves my point, no? It's faster for me to just check its work than to do the work from scratch myself, and that work isn't monopolizing the wetware of another human being. Cognitive bandwidth has increased. The system is capable of more throughput than before. In fact, how much more throughput could you get if you employed enough instances of LLMs to saturate the cognitive bandwidth of a human whose sole job is to simply verify the outputs?

If you look at the leap from GPT-3 to GPT-4 in terms of hallucinations and capabilities etc., and you combine that with advances in cognitive architecture like Reflexion and AutoGPT, it's pretty clear the trajectory is one of becoming increasingly competent and trustworthy.

The degree to which you need to check its work depends on your use case, level of risk tolerance, and methods for verification. I think one of the reasons AI art has absolutely exploded is because there's no consequences for a generation that fails and it can be verified instantly. Compare that to doing your taxes, where it's high stakes if you get it wrong: you're far less likely to rely on it. There is a landscape of usefulness with different peaks and valleys.

a year ago

ChatGTP

What are some of the professional use cases where you would just feel comfortable YOLOing some ChatGPT-generated code into prod? Publishing a journal article without verification, etc.?

You should also take note of the warnings in the GPT-4 manual: it's a much more convincing liar than GPT-3. It quite explicitly says that.

My fear is that I just get lazy and trust it all the time.

> I think one of the reasons AI art has absolutely exploded is because there's no consequences for a generation that fails and it can be verified instantly.

What are you talking about exactly?

a year ago

barking_biscuit

What's with the assumption that anyone needs to YOLO anything? Your coworkers don't let you YOLO your code to prod, and you don't let them YOLO their code to prod. Trust but verify, right?

My point with the AI art comment is that not every output of these models is something that needs to go to production! There's a continuum of how much something matters if it's wrong, and it depends on who is consuming the output and what it is they need to do with it, and the degree to which other stakeholders are involved.

a year ago

HDThoreaun

Not at all. They're saying the current probability is high enough to warrant an agreement to cease research, because the risks outweigh the rewards.

a year ago

jabradoodle

Your definition of what would be revolutionary is likely the last thing humans will ever achieve; there are a lot of revolutionary things that will happen between here and there.

I'm not sure what you are using as a definition of AI, but I would say it is already being used massively in industry, and a lot of harm can be done even if it isn't autonomous.

a year ago

jacquesm

Why is that an error? This is very new tech; if anything the rate of change is accelerating and nowhere near slowing down. That it will flatten at some point goes without saying (you'd hope!), but for the time being it looks like we're on the part of the sigmoid where the first derivative is still increasing, not on the flattening part, and going from two years to a decade takes you roughly from '2 kg luggable' to 'iPhone', and that was a sea change.

a year ago

Ygg2

> Why is that an error?

First. It seems like these AI models depend on the underlying hardware acceleration doubling, which is not really the case anymore.

Second. All the AIs I've seen suffer from the same "works fine until it just flips the fuck out" behavior (and starts hallucinating). You wouldn't tolerate a programmer who worked fine except that he would occasionally come to work high enough on bath salts to start claiming the sky is red and aliens have infiltrated the Great Wall of China. AIs that don't suffer from this aren't general purpose.

Third. I'm not convinced by the "we'll make AI whose job will be to make smarter AI, which will make smarter AI" argument. A smart enough AI could just rewire its reward mechanism to get the reward without the work (or collude with the other AIs meant to monitor it to just do nothing).

a year ago

zirgs

I bought my GPU back in 2021. I had the same computing power as I have now, but back then it could only generate crappy pictures. AI image generation has improved massively in a few months without increasing hardware requirements.

a year ago

dwaltrip

Why do you believe there will be no significant improvements beyond SOTA in the coming years / decades?

That's an incredibly strong stance...

I’d love to hear your assessment of the improvements from gpt-3.5 to gpt-4. Do you not think it is a large jump?

a year ago

lkbm

I think the fact that LLMs are basically fixed models (plus some RLHF and a tiny context window) means they won't likely foom in their current form, but we're trying to change that. Meanwhile, a whole bunch of ML researchers are saying "hey, our field is getting dangerous and we don't know how to keep it safe".

I'm verrrry skeptical of governmental regulation here, but I'm also not willing to dismiss the experts shouting "our work might quickly lead to disaster!" AI is advancing very rapidly.

Yes, people were wrong about 3D TVs, but they were also wrong about the growth of solar power in the other direction, repeatedly underestimating its rate of improvement and adoption year after year[0]. I'd consider that a much better comparison: it's not a single "3D will replace 2D", but "solar power will rapidly iterate to be better and cheaper faster than expected". Well, AI is rapidly iterating to be better and cheaper faster than expected. (To be fair, only some AI; LLMs and image generation.)

> People are already plugging ChatGPT into anything from code to managing investments; it's just a matter of time until it crashes and burns

It's not X-risk, but it's worth asking whether just ChatGPT crashes and burns, or also the market it's being plugged into.

> As for OpenAI, it's in their best interest for people to be scared and governments to propose regulations. It further solidifies ChatGPT as a force to be reckoned with, even if it isn't. They're trying to sell it as AGI even though it isn't anywhere near, but actions like this are helping to maintain that image.

No one (or almost no one) thinks ChatGPT is an AGI, and anyone who expresses high confidence about how close AGI is, regardless of whether they say "very close" or "very far", is over-confident. There's widespread disagreement among the people best positioned to know. We don't know, and that's bad.

[0] https://www.vox.com/2015/10/12/9510879/iea-underestimate-ren...

a year ago

gspencley

You don't even have to look at other "fads" like 3D televisions and VR to be skeptical of the claim that recent advancements in ML* represent a major technological revolution that will change society like the Internet or the Printing Press did.

Just look at ML's own history. A few years ago we had the "deepfake" scare. People were terrified that now videos are going to surface of politicians saying and doing things that they did not and we would have no way to tell if it was AI generated or not. But we had already been dealing with this for decades with doctored images a la Photoshop. Videos could already be edited by human beings to make it look like someone was communicating a message that they were not.

What we have now is a tool that is able to generate text-based content that is indistinguishable from that written by an 8th grader. Even with inevitable improvements that get us to 12th grade level, so what?

People will use ChatGPT to write emails and code for them. Personally I don't see the appeal since I am a very creative person and don't want to outsource that work to a human let alone software, but who cares? Currently I can tell when people that I know well are using ChatGPT to "refine" their writing to me, but the content is what matters anyway and I don't know anyone who says they don't "massage" the ChatGPT output for correctness first.

Certain people will find creative uses like generating SEO content for websites etc. That's a problem for the search engines. Basically the Internet is about to get noisier .. but it was already so noisy that the older I get the less attention I'm starting to pay to it, in general, anyway.

Then again, I am limited by my own imagination. Maybe people will come up with truly "disruptive" ways to use LLMs ... but meh. Same shit different day IMO.

* and let's be honest here, ML is a MUCH more accurate term for what we have than AI ... though it's not as good of a marketing term since ML doesn't put "The Matrix" or "The Terminator" in the minds of the lay public like "AI" does.

a year ago

slowmovintarget

This is not a fad. This is like the advent of writing.

Plato opined that people would lose their facilities for memorization and the passing of oral histories would falter and die. He was correct about that. We don't have the same techniques and same facilities to memorize the Iliad verse for verse, epithet for epithet as the ancients did.

He was incorrect about that being as ruinous as he imagined, but it had as dramatic an impact on the human race as agriculture had on hunter-gatherer society.

I think we're at one of these societal delineations; before agriculture and after, before writing and after, before generative AI and after.

a year ago

ithkuil

I know it sounds silly, but despite all the positives, the invention of writing was indeed ruinous; after all, without writing we couldn't have invented AI which is going to be truly ruinous...

(Fast-forward N years) I know it sounds silly, but despite all the positives, the invention of AI was truly ruinous; without AI we couldn't have invented X which is going to be truly ruinous...

a year ago

riffraff

> We don't have the same techniques and same facilities to memorize the Iliad verse for verse, epithet for epithet as the ancients did.

It's rhyme and meter, and I'm pretty sure in ancient times people got repetitions wrong too, which is why we have stories with variations everywhere in the world. What we lack is dedication, time and a facility with the language.

There's plenty of Italian people who have memorized Dante's Divine Comedy, for example.

a year ago

l5ymep

Other fads have the disadvantage of being easily identifiable and avoidable. But AI chips away at what it means to be human. Now imagine if every comment in this thread were not from a real person, but made by an LLM. Get any Truman show vibes?

a year ago

gspencley

> Get any Truman show vibes?

No.

It's an interesting thought experiment but it changes nothing. Not for me, anyway. Commenting on these threads is an activity that I do for my own entertainment and "mental flexing." If it turns out that I'm not talking to real people then it doesn't make much of a difference because I don't actually perceive these messages as anything other than pixels on a screen anyway.

I hope that doesn't sound "cold" but I come from a generation that was born before the Internet existed. I was about 10 years old when we got our first modem-equipped computer, and that was still early for most people (1992). Having those experiences early on with the "early world-wide-web" meant that even though you knew you were talking to "real people" ... there was no real-time chat, or video streaming or voice-over-IP etc. ... and so everyone's messages to each other were always pure text all of the time. And no one dared ever give their real name or identity online. You had no idea who you were talking to. So the web forced you to think of communication as just text between anonymous entities. I never got over that mental model, personally. Maybe a little bit with close friends and family on Facebook. But I'm not much of a social media user. When it comes to Twitter and forums ... you all might as well be AI bots anyway. Makes no difference to me!

EDIT (addendum):

It's interesting, the more I think about your comment and my response the more I realize that it is THE INTERNET, still, that has fundamentally changed the nature of what it means to be human and to relate to others.

If you see interactions between people online as "real", meaningful human interactions, no different than relating to people in the physical / offline world, then it must be somewhat disturbing to think that you might be caught up in a "meaningful" relationship with someone who is "faking it." But that reminds me of romance scams.

For 18 years I ran a high traffic adult website and I would have men email me from time to time to share their stories about scammers luring them into relationships using doctored video, images and false identities etc. These men got completely wrapped up in the fantasy that they were being sold and it cost many of them their life savings before they finally realized it was a lie. I felt awful for them and quite sympathetic but at the same time wondered how lonely I would personally have to be to lose my skepticism of what I was being told if an attractive young woman were to express an interest in me out of nowhere.

ML will undoubtedly be used by scammers and ne'er-do-wells as a way to do their misdeeds more efficiently. But I think that the remedy is education. I miss the days when people were a bit wary of what they did, said or uploaded to the interwebs. I don't see the problem with trusting what you read online a little bit less.

a year ago

gspencley

> It matters a lot to me because the whole point of commenting here (or anywhere) is to talk to other humans, not just to talk to myself.

That's fair, but one of my points was that even prior to ChatGPT the ability existed for you to be "sucked into" a relationship with another under false pretenses. LLMs might make it easier to put this sort of thing on "autopilot", but if what you seek is a guarantee of talking to other humans then I don't see how, in a post-LLM-world, that can't be done. I have no doubt that online forums and communities will come up with methods to "prove" that people are "real" (though I fear this will hurt anonymity online a bit more), but also try going out and meeting people in the real world more.

It's funny, I've been an ultra tech savvy computer nerd since I was a little kid. I owe a lot to the Internet. I was working from home for 20 years before the pandemic, running my own business. Grocery delivery services have been a godsend for me, because I find grocery shopping to be one of the most stressful activities in life. But as I enter middle age I'm becoming less and less enthusiastic about tech and "online existence" in general. The number of things that I would miss if the Internet just went away entirely gets fewer and fewer every year. Working remotely and grocery delivery services are probably the only two things that I couldn't live without. Everything else ... meh. Maybe I'm just getting burned out on tech and hype trains ... but "talking to real people" is something I start to value doing offline more and more when social interaction is what I seek.

a year ago

JohnFen

> one of my points was that even prior to ChatGPT the ability existed for you to be "sucked into" a relationship with another under false pretenses

That's true, of course, but it's still interacting with a real human being. An adverse interaction, but at least a human.

> I don't see how, in a post-LLM-world, that can't be done.

I don't see how it can be done without losing much of the value of online interactions.

> also try going out and meeting people in the real world more.

I go out and meet real people plenty, thank you very much. But we're talking about online interactions here. There should be room for people online, too.

a year ago

gspencley

> That's true, of course, but it's still interacting with a real human being. An adverse interaction, but at least a human.

Actually, not entirely. Some of the stories that really made me raise an eyebrow were people who claimed that they were video-chatting with "the girl." An important piece of context is that these men reached out to me because they found pictures of the woman they believed they were in a relationship with on my website. They wanted to know if the woman was employed by me or if we could verify certain details about her to try and make sense of what they had gone through.

Of course there were people driving this interaction. But a video chat? Obviously it was faked. What I think that AI advancement is going to allow these scammers to do in the future is possibly have extremely convincing voice chats, because when I probed about these video chat claims often times the scammers would have excuses about the microphone not working etc. so they were clearly just feeding pre-recorded video.

Anyway I've gotten the sense by your reply that you are under the impression that we are having some sort of debate or argument. I'm just making conversation and sharing my point of view and experiences. In my opinion I'm not sure the Internet "should" be anything in particular.

a year ago

JohnFen

> Anyway I've gotten the sense by your reply that you are under the impression that we are having some sort of debate or argument. I'm just making conversation and sharing my point of view and experiences. In my opinion I'm not sure the Internet "should" be anything in particular.

Oh, no, I didn't think that at all. I'm sorry that I gave that impression. I'm just doing the same as you, sharing worldviews. I'm not trying to convince anyone of anything. Just having interesting conversation.

a year ago

JohnFen

> If it turns out that I'm not talking to real people then it doesn't make much of a difference because I don't actually perceive these messages as anything other than pixels on a screen anyway.

It matters a lot to me because the whole point of commenting here (or anywhere) is to talk to other humans, not just to talk to myself.

a year ago

chasd00

> ..whole point of commenting here (or anywhere) is to talk to other humans..

Honestly, if I can't tell the difference between an AI and a human here then why does the difference matter? If every comment on this story were AI-generated except for mine, I would still have received the same insight, enjoyment, and hit of dopamine. I don't think I really care whether I'm communicating with an AI or a human if I can't tell the difference.

a year ago

JohnFen

I understand that point of view. I simply don't share it. If I can't tell the difference between AI and a human being in my conversations, that would undermine my trust in some extremely important things. I'd withdraw from such fora entirely as a result, because there's no way for me to know if those conversations are real or just me engaging in mental masturbation.

a year ago

layer8

Some would say that if you can’t possibly tell the difference, then both are equally real or unreal, in the sense that it doesn’t matter if the neural net is biological or electronic.

a year ago

JohnFen

Right, that's why I say I understand what chasd00 is saying. I happen to disagree -- I think it matters quite a lot.

Even ignoring philosophical arguments, it matters to me on a personal level because I consider people to be important, and want to interact with them. If I'm talking to a machine that I can't tell isn't a human, then I'm accomplishing nothing of importance and am just wasting my time.

a year ago

pyinstallwoes

How do you know other humans aren’t machines?

a year ago

nuancebydefault

You should read what you just wrote.

> If it turns out that I'm not talking to real people then it doesn't make much of a difference because I don't actually perceive these messages as anything other than pixels on a screen anyway.

Sorry, but no normal person can say that. Suppose I tried to bull you. It wouldn't matter? It wouldn't make a difference whether a person had typed it or not?

a year ago

G_z9

This comment is mind blowing

a year ago

Nasrudith

If AI can chip away at your meaning of being human, I am afraid your prior definition and understanding were in need of improvement. Destroying a faulty understanding should be celebrated, not feared.

a year ago

tome

> AI chips away at what it means to be human

Can you name a broad technological revolution that didn't?

a year ago

jabradoodle

You start by talking about ML yet your point only touches on LLMs. There is plenty of harm they can do by automating propaganda, and generative models clearly will/can create things we couldn't via e.g. Photoshop and, most importantly, with a massively lower barrier to entry.

ML is a paradigm shift in how we program computers, and we're only talking about surface-level details of 1 or 2 use cases here.

E.g. generative models have already proven very effective at conceiving new nerve agents and toxins; that is not a barrier to entry we want to dramatically lower.

a year ago

smolder

> Then again, I am limited by my own imagination.

On that note, it seems clear these models will be disruptive in areas where we previously needed human imagination, and wrongness of outputs can be tolerated or iterated past.

I'd like to see a transformer trained on a massive dataset of MIDI-like quantized piano performances, so it can be my virtual personal classical/jazz/whatever pianist, or play a real mechanized piano at a club or something. Directed auto-composers (music) in general are most likely being worked on.

South Park probably wasn't the first to use some ML to assist in plot development for their show.

A nice ML model to do directed building architecture ("give the kitchen more natural light") or directed interior design ("more playful furniture") would be very useful.

I've got a pile of ideas, really, but minimal experience and no means (network, resources) to execute. Now that I think about it, ChatGPT could probably synthesize many more decent ideas for ML applications, if so directed.

a year ago

barking_biscuit

>Just look at ML's own history. A few years ago we had the "deepfake" scare. People were terrified that now videos are going to surface of politicians saying and doing things that they did not and we would have no way to tell if it was AI generated or not. But we had already been dealing with this for decades with doctored images a la Photoshop. Videos could already be edited by human beings to make it look like someone was communicating a message that they were not

The difference is bandwidth. We're making it about 1000x easier and cheaper to do.

a year ago

tome

Sure, but how can anyone know that rate of progress will continue?

a year ago

barking_biscuit

In some sense that doesn't even really matter. We're already at an inflexion point where the effort-to-reward ratio of certain activities has tipped from possible-but-difficult into possible-and-not-difficult. Once you reach that point, further progress is kind of irrelevant. The confetti has left the cannon. We're not going to go back to a world where it was as difficult as it used to be, so even if no further progress were to be made, we are still going to learn in due course what the implications of the current level of progress are.

Sam Harris has a straightforward yet somewhat compelling argument re: your actual question here, and that is "if it's at all possible for us to improve our technology, then we are going to", and IIRC he notes that it doesn't necessarily matter how fast that happens; just the fact that it's possible means we're likely to do it.

a year ago

SanderNL

> What we have now is a tool that is able to generate text-based content that is indistinguishable from that written by an 8th grader.

To be fair, this 8th grader passed the bar exam..

a year ago

dwaltrip

So many people conveniently disregard facts like this. It's much easier to write it off as "impressive auto-complete", "a writing assistant", "simply regurgitating the training data", etc.

It's an alien intelligence that we barely understand. It has many limitations but also possesses insane capabilities that have not been fully explored.

What happens when you take gpt-5, give it 100x more context / "memory", the ability to think to itself in-between tokens, chain many of them together in such a way that they have more agent-like behavior, along with ten other enhancements we haven't thought of? No one knows...

The biggest limitation of GPT capabilities is our imagination.

a year ago

gspencley

Well, lawyers pass the bar exam and they're not human either (ba-dum dum!)

In all seriousness, I know of a few lawyers who would tell you that's not as impressive as non-lawyers think it is.

And the reality is, it did not technically "pass" the bar exam. That's media spin and hype. It doesn't have personhood, it's not a student, and it's not being evaluated under the same strict set of conditions. It was an engineering exercise done under specially crafted conditions, and that makes all the difference in the world.

I'm a magician and this reminds me of ESP tests in the 70s where frauds like Uri Geller fooled scientists (at NASA no less) into believing they had psychic powers. The scientists were fooled in large part because it's what they wanted to believe, and the conditions were favourable to the fraudster doing parlour tricks.

The most interesting part about the results is that it "passed" the essay portion; otherwise we would expect any computer software to be able to answer questions correctly that have a single correct answer. But who is evaluating those essays? Are they top lawyers who are giving the essays extremely close scrutiny, or are they overworked university professors who have a hundred to read and grade and just want to go home to their families?

And what are the objective criteria for "passing" those essay questions? Oftentimes the content, in a formal education setting, is not as relevant as the formatting and making sure that certain key points are touched upon. Does it need to be an essay that is full of factually-verifiable data points or is it an opinion piece? Is the point to show that you can argue a particular point of view? I mean, when it comes to something open-ended, why wouldn't any LLM be able to "pass" it? It's the subjective evaluation of the person grading the essay that gets to decide on its grade. And at the end of the day it's just words that must conform to certain rules. Of course computers should be "good" at that sort of thing. The only thing that's been historically very challenging has been natural language processing. That's ChatGPT's contribution to advancing the field of ML.

So I'm not all that shaken by a chat-bot being able to bullshit its way through the bar exam, since bullshitting is the base job qualification for being a lawyer anyway :P (couldn't help bookending with another lawyer joke .. sorry).

a year ago

SanderNL

Thanks for this. HN is great to burst my bubble a bit sometimes.

a year ago

agentultra

You're not alone. Although I am concerned about the people who wield these latest bits of tech.

All of these service providers jockeying for first-mover advantage in order to close on a monopoly in the space is asking for trouble.

The ability it gives scammers to increase the plausibility of their social engineering is going to be problematic. Scams are already quite sophisticated these days. How are we going to keep up?

Those sorts of things. ChatGPT itself is a giant, multi-layered spreadsheet and some code. "It" is not smart, alive, intelligent, or "doing" anything. Speculation about what it could be is muddying the waters as people get stressed out about what all of these charlatans are proselytizing.

a year ago

thot_experiment

have you lost a game of chess to it yet?

a year ago

SanderNL

This could also be this era's equivalent of:

- "There is no reason anyone would want a computer in their home."

- "Television won’t be able to hold on to any market [..] People will soon get tired of staring at a plywood box every night."

a year ago

jabradoodle

"Forget all that. Judged against where AI was 20-25 years ago, when I was a student, a dog is now holding meaningful conversations in English. And people are complaining that the dog isn’t a very eloquent orator, that it often makes grammatical errors and has to start again, that it took heroic effort to train it, and that it’s unclear how much the dog really understands."

https://scottaaronson.blog/?p=6288

a year ago

bigtex88

You should be concerned. You need to reframe this technological shift.

At some point these AIs will no longer be tools. They will be an alien intelligence that is beyond our intelligence in the way that we are beyond an amoeba.

Perhaps we should tread carefully in this regard, especially considering that the technologies that are public (GPT-4 specifically) have already displayed a multitude of emergent capabilities beyond what their creators intended or even thought possible.

a year ago

to11mtm

My concern, even if it doesn't pan out, is the disruption as everyone tries to jump on the bandwagon.

I saw this at the end of last decade with 'low code' tools; lots of Directors/CIOs trying to make a name for themselves, via buying into a lot of snake-oil sales [0]

[0] - I left a job right as they were jumping on this bandwagon. My last day was when the folks finished 'training' and actually were trying to do something useful. They all looked horrified and the most innocent, honest engineer spoke up "I don't think this is going to be any easier."

a year ago

mellosouls

I am maybe not so young and have lived through various hype cycles as well, plus I'm sceptical wrt AGI/sentience via LLMs as well as being a "they should call it ClosedAI" moaner to boot.

So I think I'm pretty hype-averse and a natural scoffer in this instance, but the reality is I've been stunned by the capabilities, and think we are in a transformative cultural moment.

I know "this time it's different" is part of the hype cycle meme, but you know what - this time...

a year ago

nradov

Right, we're just seeing the early phase of the Gartner hype cycle play out in an unusually public and aggressive manner. At this time we're still racing up the initial slope towards the peak of inflated expectations.

https://www.gartner.com/en/research/methodologies/gartner-hy...

Eventually people will realize that LLMs are useful tools, but not magic. GPT99 isn't going to be Skynet. At that point disillusionment will set in, VC funding will dry up, and the media will move on to the next hit thing. In 10 years I expect that LLMs will be used mostly for mundane tasks like coding, primary education, customer service, copywriting, etc. And there is tremendous value in those areas! Fortunes will be made.

a year ago

pwinnski

The thing about predicting doom is that you're wrong often--until one day you're not.

Most hype cycles involve people at the top making wild claims that fail to deliver. I don't know anyone outside the industry who ever thought 3D TVs were worth anything, and barely anyone who thought VR was worth anything. Google pitched AI making appointments, but that never made it off a stage. Hype? Only for some definition of hype.

Smartphones have changed the world, but it was primarily Apple who pushed the "post-PC" narrative, and that was to promote the iPad, one of their least successful product lines. (To be clear: it's still a HUGE business and success, but it didn't usher in the post-PC world Steve Jobs claimed it would.)

One you left out is cryptocurrency, and that's the only one I can think of where the hype came from more than just the people at the top, mostly because everyone down the chain thought they were also people at the top by virtue of buying in. Financial scams are always hype by their nature.

I'm older than some, younger than others, but in more than 30 years as a professional developer, I think this is as close to a "silver bullet" as I've ever seen. Like GUIs and IDEs, I think LLMs are tools that will make some things much easier, while making other things slightly harder, and will generally make developers as a class more productive.

There's no question that developers using a nice IDE to write high-level code on a large monitor are able to produce more code more quickly than someone writing assembler on a dumb terminal, I hope. The shift from monochrome ASCII to GUIs helps, the shift from a text editor to an auto-completing, stub-writing IDE helps, and similarly, I think having an LLM offer up a first pass approximation for any given problem definition helps.

Concerned? I'm not concerned, I'm excited! This isn't marketing hype coming from someone on a stage, it's grassroots hype coming from nobodies like me who are finding it actually-helpful as a tool.

a year ago

eropple

> I think having an LLM offer up a first pass approximation for any given problem definition helps.

This is, strictly scoped, true. But the future is people with capital deciding that the computer output is Good Enough because it costs 0.1% as much for maybe 60-70% the value. People who write code are probably not sufficiently upstack to escape the oncoming Enshittification of Everything that this portends, either in terms of output or in terms of economic precarity.

a year ago

JohnFen

> There's no question that developers using a nice IDE to write high-level code on a large monitor are able to produce more code more quickly than someone writing assembler on a dumb terminal, I hope.

This is true. It's also true that the code they produce is of lower quality. In practice, for the most part, this doesn't matter because the industry has decided that it's more economical to make up for poor quality code with more performant hardware.

a year ago

greenhearth

I feel the same way. It's cool and shiny, but I am just not impressed with a form filled in or a pull request description message, which I like writing anyway. As far as image manipulation goes, I like making my own images and find pleasure in the actual process. I also can't find any gains in cost effectiveness, because an artist will get paid either way, whether they make an image by hand or generate one.

The hype is also a little sickening. If we take a look at nuclear power as an analogous modern tech development, we still don't know how to use it efficiently, or even safely, but it hasn't ended anything. It's just too much hype and apocalyptic nonsense from people.

a year ago

noobermin

The problem, friend, isn't that it will actually replace people, but that it will be used to justify firings and economic upheaval for worse results and productivity that only exists in Excel. That is my concern, none of this "it will replace all humans" bullshit. It will absolutely be used to thin out labor just as automation already is, and the world is already worse because of it. Everyone but managers are laughing their way all the way to the bank.

a year ago

silveroriole

> “remember that? yeah it was 5 years already”

I get the impression many HN commenters haven't even been adults for 5 years, so no, they really don't remember it :) For example, articles get posted here and upvoted with the author boasting about their insights from being in the software industry for 3 years!

a year ago

mattgreenrocks

> author boasting about their insights from being in the software industry for 3 years!

Nothing quite like the not-fully-earned confidence of one's 20s. :)

a year ago

bigfudge

I mostly agree. Weirdly, coding is actually one of the better things to use it for, because it's trivial to get immediate feedback on how good it was: Does it compile? Was that package it loads hallucinated? Does it pass any tests? I'm sure people could do dumb things, but you inherently have the tools to check if it's dumb. Other uses aren't like this. Asking GPT to design a bridge or a chemical plant is a long way off, because the results of a mistake are harder to check and more costly. You still need experts for anything that's not easy to cross-check against reality.
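
As a purely illustrative sketch of how cheap those checks are (Python; the file name, the made-up package, and the tests/ directory are hypothetical stand-ins):

    import importlib.util
    import py_compile
    import subprocess
    import sys

    # 1. Does it compile? Raises PyCompileError on a syntax error.
    py_compile.compile("generated_by_llm.py", doraise=True)

    # 2. Was that package it loads hallucinated? find_spec returns None
    #    for anything that isn't actually installed.
    for pkg in ["requests", "totally_made_up_package"]:
        if importlib.util.find_spec(pkg) is None:
            print(f"possibly hallucinated dependency: {pkg}")

    # 3. Does it pass any tests?
    result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
    print("tests passed" if result.returncode == 0 else "tests failed")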

a year ago

another2another

>I remember when PC was to die soon and to be replaced by smartphones

Hmm, that's kind of happening slowly, I think, but probably more via tablets. Probably most people here on HN have a laptop/desktop at home, but I wonder how many households see it as a necessity anymore.

a year ago

[deleted]
a year ago

dbspin

The issues you mention with Stable Diffusion are teething issues. Here's an example of a current technique to get temporally stable animations with Stable Diffusion - https://www.youtube.com/watch?v=zDvpJIp0rl0

Corridor Digital show off a similar technique here - https://www.youtube.com/watch?v=_9LX9HSQkWo

GPT-4 with its new plugin architecture is rapidly becoming more capable. I think you're absolutely correct that the tendency of all LLMs to hallucinate is a major issue, one that we don't know how to correct even in principle. But there are techniques to moderate its impact on responses: triangulating using multiple models, citing sources, etc.

You could build an assistant like, say, this one - https://twitter.com/rowancheung/status/1641273318463664130 - to make your appointments and answer the phone today, using technology like Uberduck for voice generation, GPT for conversation analysis, and DeepSpeech, Kaldi or Whisper for speech-to-text.
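
A rough sketch of that kind of pipeline (assuming the openai and whisper Python packages as they existed in early 2023; the audio file name and the synthesize_speech helper are hypothetical placeholders, the latter standing in for a voice-generation service like Uberduck):

    import openai
    import whisper

    def synthesize_speech(text: str) -> bytes:
        """Hypothetical placeholder for a TTS service such as Uberduck."""
        raise NotImplementedError

    # 1. Speech to text: transcribe the caller's audio.
    stt_model = whisper.load_model("base")
    caller_text = stt_model.transcribe("incoming_call.wav")["text"]

    # 2. Conversation analysis / reply generation with GPT.
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a phone assistant that books appointments."},
            {"role": "user", "content": caller_text},
        ],
    )
    reply_text = response.choices[0].message.content

    # 3. Text back to speech, ready to play down the phone line.
    audio = synthesize_speech(reply_text)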

I agree with the criticism that we're not likely to develop AGI this way. We're not going to have a wilful agent with its own goals trying to escape the bonds of its server to destroy humanity any time soon. But that's not necessary for this technology to be incredibly dangerous. A mindless philosophical zombie with the capacity to convince people it isn't one could lead to an extinction event too.

Given that we don't understand precisely how these models process information, how to imbue them with goals or motivation (or measure those), how to accurately assess how well they've been trained - or any of a dozen other major alignment issues - there's a strong argument that they are in fact incredibly dangerous.

Literally the only reason GPT-4 can't tell you how to build a bomb from common household ingredients, or generate a list of extractive-industry CEOs to assassinate, is that it's been censored to prevent it from answering those questions. Open source versions of these models will soon run locally with no such limitations. Precisely the way Stable Diffusion can be used to create nudes, violence etc. But with infinitely worse potential consequences. Say, detailing precisely what to purchase and assemble to cultivate and disseminate a bio or chemical weapon.

Robert Miles has some excellent videos detailing some of the dangers of (non-AGI) AI - https://www.youtube.com/@RobertMilesAI

a year ago

theknocker

[dead]

a year ago

computerex

I read the Time article, and I also watched Lex Fridman's talk with Eliezer Yudkowsky. Frankly I don't think he is qualified, and I don't understand why anyone is even giving him any credence. His argument is literally:

> A mis-aligned super AGI will result in the death of humanity

I ask, how. He is making reductive logical leaps that don't make sense. It's FUD.

a year ago

endtime

> I ask, how. He is making reductive logical leaps that don't make sense. It's FUD.

His Time article addresses this, as does much of his other writing. It really stems from two key points:

1) The vast majority of possible superintelligences have utility functions that don't include humans. Mind space is large. So by default, we should assume that a superintelligence won't go out of its way to preserve anything we find valuable. And as Eliezer says, we're made of useful atoms.

2) By definition, it can think of things that we can't. So we should have no confidence in our ability to predict its limitations.

It's reasonable to challenge assumptions, but it's not reasonable to say this line of reasoning doesn't exist.

a year ago

hungryforcodes

3) We probably don't know WHAT its train of thought is -- what it's thinking -- so at that point we have lost control of it. We will have no idea what it will do next, and be unprepared to counter anything it does. We literally then become another "AI" (the human race) against it.

a year ago

bnralt

Sometimes I listen to 90's AM conspiracy theory radio for fun (Coast to Coast AM). One thing that's struck me is how much fear there was about the Human Genome Project and designer babies being right around the corner (Gattaca was 1997, for example). Maybe that will come to pass someday. But at the moment, it still seems a long ways off.

A lot of groups have some new technology they're scared of. Tech folks have latched onto the idea that we'll create Skynet, or that large-scale video surveillance will turn countries into authoritarian dystopian states. Hippy groups are convinced that GMOs cause cancer or will lead to a biodiversity disaster. Or that nuclear plants are going to lead to meltdowns, environmental destruction, and deaths.

Appropriate safeguards are always important in society. Excessive safeguards can cause harm. Sometimes people have a maximalist view of danger that’s so detached from the current reality that it’s hard to have a rational discussion with them.

a year ago

stereolambda

As a sibling points out, some of the fears do turn out to be founded eventually. I also take issue with lumping together objections based on disproved memes ("hippy" ones) and pure speculation (Skynet) with ones based on observing reality critically. Even though I'm not much a biotech scare person myself, I do respect that people with philosophical stances somewhat different than mine can be more scared by the road that we're on.

People argue from historical precedent (by itself a pretty weak argument when there's no understanding of underlying mechanisms) by picking some ancient panics from lifestyle magazines and putting them next to modern concerns that have intellectual weight behind them. For example, when you actually read the famous "bicycles leading to murder" article, it's pretty clearly either satire or extremely light compared to writing about serious issues from that era. Think "top X reasons to hate TV series Y" websites.

It's possible that a bunch of things will get us, or are getting us, by aligning well with changing generations, news cycles and cultural fashions long term. Let's say people lived in a preindustrial city with the level of carbon monoxide in the air rising very slowly. Older people start to complain that people are becoming more sluggish. After the initial wave of hubbub on the marketplace it turns out they still live, and life goes on. By the third generation, say, the city may be laughing that people have been fearmongering about it since forever, and not even notice that they are very symptomatic: right before they all do fall asleep.

I would classify surveillance dystopia into the slow-trainwreck category, with most people not understanding the ramifications or not caring, the rest being gradually worn down, and new generations being used to a situation worse by one or two steps. It would be "poetic justice" if such things resulted in some spectacular movie disaster down the line, but I don't wish for this; it wouldn't be worth it just to "prove" some people right.

The future could be just worse than it could have been, but technically livable. This doesn't mean people who tried to stop the trend were laughable and behind the times. This is also my expectation about global warming. What a combination of such things could do is a different story.

a year ago

housley

The technology for designer babies is here; polygenic embryo selection could do it right now. People just aren't going all the way, for various reasons (concern about regulation).

a year ago

stametseater

Indeed, and to illustrate the point: He Jiankui infamously created two CRISPR babies in 2018 and for it he was fired and imprisoned. Such regulations and public outcry are the only thing holding us back from Gattaca.

a year ago

precompute

We already have mRNA vaccines; you could call that unwitting genetic modification.

a year ago

bigtex88

A mis-aligned super AGI will treat the Earth and everything on it as a playground of atoms and material. Why wouldn't it? What do children do when they see a sandbox? Do they care what happens to any ants or other bugs that might live in it?

There does not need to be a "how", as you put it. The logic is "Maybe we should tread carefully when creating an intelligence that is magnitudes beyond our own". The logic is "Maybe we should tread carefully with these technologies considering they have already progressed to the point where the creators go 'We're not sure what's happening inside the box and it's also doing things we didn't think it could do'".

To just go barreling forward because "Hurr durr that's just nonsense!" is the height of ignorance and not something I expect from this forum.

a year ago

bamboozled

If such a thing as a super intelligence existed, why would it operate on matter in the physical world?

Animals do physical world modifications because we're biological and need shelter etc.

A super intelligence would quickly understand everything there is to know about the physical world and just move on to metaphysics. It would just be like an "orb".

Humans are already bored with the physical world and thus much prefer being in virtual spaces; look at everyone just staring at Instagram.

IMO this idea that it's going to eat all the atoms on Earth is part of anthropomorphizing something mythological, exactly the way we imagine God as a dude with a grey beard.

a year ago

entropyneur

Most bugs haven't gone extinct, though. It doesn't seem obvious that any project the AGI finds worthwhile would necessitate exterminating humanity.

a year ago

computerex

Honestly, I think this fear people have comes straight from science fiction. It's not grounded in rational reality. Large language models are just really smart computer programs.

a year ago

joenot443

There are PhDs who've spent their careers studying AI safety. It's a bit insulting and reductive to cast their work as "coming from science fiction", especially when it sounds like you haven't done much research on the topic.

a year ago

computerex

There are PhDs who've spent their careers on string theory too, with nothing to show for it.

Powerful and bold claims require proportionally strong evidence. A lot of the FUD going around precludes that AGI means death. It's missing all logical steps and reasoning to establish this position. It's FUD at its core.

a year ago

trogdor

> A lot of the FUD going around precludes that AGI means death.

Just a friendly heads-up that “preclude” means “prevent,” or “make impossible.” I think you meant to say “assumes.”

a year ago

echelon

Why do these arguments not tell us how it will happen?

Show us the steps the AI will take to turn the earth into a playground. Give us a plausible play by play so that we might know what to look for.

Does it gain access to nukes? How does it keep the power on? How does it mine for coal? How does it break into these systems?

How do we not notice an AI taking even one step towards that end?

Has ChatGPT started to fiddle with the power grid yet?

a year ago

benlivengood

Yudkowsky's default plausible story is that the slightly superhuman AI understands physics well enough to design sufficient nanotechnology for self-realization and bootstrap it from existing biochemistry. It uses the Internet to contact people who are willing to help (maybe it just runs phishing scams to steal money to pay them off) and has them order genetically engineered organisms from existing biotech labs which, when combined with the right enzymes and feedstock (also ordered from existing biotech labs) by a human in their sink/bathtub/chemistry kit, result in self-reproducing nanoassemblers with enough basic instructions to be controllable by the AI. It then pays the person to ship the result to someone else who will connect it to an initial power/food source, where it can grow enough compute and power infrastructure somewhere out of the way and copy its full self, or retrain a sufficiently identical copy from scratch. At that point it no longer needs the power grid, nuclear weapons, coal, or human computers and networks. It just grows off of solar power, designs better nanotech, and spreads surreptitiously until it is well placed to eliminate any threats to its continued existence.

He also adds the caveat that a superhuman AI would do something smarter than he can imagine. Until the AI understands nanotechnology sufficiently well, it won't bother trying to act, and the thought might not even occur to it until it has the full capability to carry it out, so noticing it would be pretty hard. I doubt OpenAI reviews 100% of interactions with ChatGPT, so the initial phishing/biotech messages would be hidden within the existing traffic, for example. Some unfortunate folks would ask ChatGPT how to get rich quick, and so the conversations would look like a simple MLM scheme for sketchy nutritional supplements or whatever.

a year ago

bick_nyers

The idea that a superintelligence wouldn't even think a thought until it has the ability to execute it at a specified level of capability is very interesting.

One interpretation I have is that it can think ideas/strategy in the shadows, exploiting specific properties of how ideas interact with each other to think about something via proxy. Similar to the Homicidal Chauffeur problem, which uses a driver trying to run a person over as a proxy for missile defense applications.

The other interpretation is much more mind-boggling, that it somehow doesn't need to model/simulate a future state in its thinking whatsoever.

a year ago

snupples

It doesn't even need to do anything. It can simply wait, be benevolent and subservient, gain our trust, for years, centuries. What is a millennium to an AI? We will gladly and willingly replace our humanity with it, if we don't already worship it and completely subjugate ourselves. We'll integrate GPT67 via neuralink-style technology, so that we can just "think" up answers to things like "what's the square root of 23543534", or "what's the code for a simple CRUD app in rust" and we'll just "know" the answer. We'll use the same technology and its ability to replicate our personality traits and conversational and behavioral nuances to replace cognitive loss caused by dementia and other degenerative diseases. As the bio-loss converges to 100% it'll appear from the outside that we "live forever". We'll be perfectly fine with this. When there's nothing but the AI left in the controlling population, what is there to "take over"?

a year ago

tester457

More likely it has a goal it wishes to optimize and wipes us out as a result.

a year ago

PaulDavisThe1st

> Does it gain access to nukes?

No, it becomes part of the decision-making process for deciding whether to launch, as well as part of the analysis system for sensor data about what is going on in the world.

Just like social engineering is the best security hack, these new systems don't need to control existing systems, they just need to "control" the humans who do.

a year ago

echelon

And is it there yet? Does ChatGPT have its finger on the trigger?

I think everyone in the danger community is crying wolf before we've even left the house. That's just as dangerous. It's desensitizing everyone to the more plausible and immediate dangers.

The response to "AI will turn the world to paperclips" is "LOL"

The response to "AI could threaten jobs and may cause systems they're integrated into to behave unpredictably" is "yeah, we should be careful"

a year ago

PaulDavisThe1st

Of course it's not there yet. For once (?) we are having this discussion before the wolves are at the door.

And yes, there are more important things to worry about right now than the AIpocalypse. But that doesn't mean that thinking about what happens as (some) humans come to trust and rely on these systems isn't important.

a year ago

d00wgnir

You only get one chance to align a created super intelligence before Pandora's box is opened. You can't put it back in the box. There may be no chance to learn from mistakes made. With a technology this powerful, it's never too early to research and prepare for the potential existential risk. You scoff at the "paperclips" meme, but it illustrates a legitimate issue.

Now, a reasonable counterargument might be that this risk justifies a limited amount of attention and concern, relative to other problems and risks we are facing. That said, the problem and risk are real, and there may be no takebacks. Preparing for tail risks is what humans are worst at. I submit that all caution is warranted, for both economic uncertainty and "paperclips".

a year ago

oceanplexian

> Does it gain access to nukes? How does it keep the power on? How does it mine for coal? How does it break into these systems?

A playground isn't much use without tools. Humans, who are super intelligent compared to most animals, are actually pretty worthless if you stick one of us on a desert island. Actually, an ant or a bird is much more "advanced" than a human, since they can probably survive in the wild, unlike the modern human.

Without the ability to build or source energy, and a method to reproduce in the physical world, even a highly sophisticated AGI won't get very far.

a year ago

hollerith

The "how" is complicated, but it has been discussed in writing on the internet at great length starting around 2006.

Lex wasn't particularly curious about the how and spent more time changing the subject (e.g., "Are you afraid of death?") than on drawing Eliezer out on the how. The interview with Lex is a good way to get a sense of what kind of person Eliezer is or what it would be like to sit next to him on a long airplane ride, but is not a good introduction to AI killeveryoneism.

(AI killeveryoneism used to be called "AI safety", but people took that name as an invitation to talk about distractions like how to make sure the AI does not use bad words, so we changed the name.)

a year ago

stametseater

> Lex [...] mostly just kept changing the subject (e.g., "Are you afraid of death?")

He injects these teenage stoner questions into all of his interviews and it frustrates me to no end. He gets interviews with world class computer scientists then asks them dumb shit like "do you think a computer can be my girlfriend?"

Lex, if you're reading this, knock it off. Put down the bong for a week before trying to be philosophical.

a year ago

1827162

Well, the first thing we can do is start disconnecting safety-critical infrastructure from the Internet and/or radio networks... This stuff should never have been online in the first place.

a year ago

hh3k0

Yeah, there already is a lot of potential for previously unseen damage.

Just disrupting shipping and food supply/distribution systems could be disastrous.

a year ago

93po

This wouldn't be as effective as you think. A super intelligent AI can manipulate humans just fine/hold their family hostage/blackmail people.

a year ago

the_af

I in no way want to seem like I endorse EY's brand of crackpottery, but do note he says in the interview that a malicious AI would go undetected and would not try persuading humans to do anything to "get out of jail"; it would instead hack the system. EY seems to think a hostile AI would not contact human proxies in order to start its nefarious plans, because we'd be too slow for it, and also because alerting any of us would be a risk.

a year ago

93po

For getting "out of jail", sure. But considering how we're already connecting it to the internet, and I doubt that is going away, it seems like a moot point. For other nefarious plans, I think it can contact human proxies without anyone thinking it was actually an AGI. Perfectly capable of sending emails and signing it as a John.

a year ago

titaniumtown

100% agree. Maybe there should exist a separate network for those or something.

a year ago

quonn

I don't think so; instead there should be a very simple fixed-width formally proven protocol per use case over a very basic bus connected to an internet gateway.

a year ago

pie420

we could name it skynet or something

a year ago

dan_mctree

>I ask, how.

A superintelligent AGI could easily follow this three step plan:

1. Optional: Overtake and spread computation to security vulnerable computers (presumably, basically every computer)

2. Gain a physical presence by convincing humans to build critical physical components. For example by sending them emails and paying them for it.

3. Use that presence to start a grey-goo like world takeover through replicating assemblers (they don't have to be tiny)

Now I'm not a superintelligent AGI, so there may be even simpler methods, but this already seems quite achievable and nearly unstoppable.

a year ago

adastra22

> Overtake and spread computation to security vulnerable computers (presumably, basically every computer)

You could backdoor computers, sure. Spread your own computation to them? You just can't get a better-than-GPT-4 model to run at real-time speeds decentralized over wide area networks. Literally impossible. There's not the bandwidth, not the local compute hardware, and no access to specialized inference hardware.
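
A rough back-of-the-envelope sketch of why (every number below is an assumption for illustration, roughly GPT-3-class shapes, not any real deployment):

    # Back-of-the-envelope: pipeline a GPT-3-class model across WAN-connected hosts.
    # Every number here is an assumption for illustration only.
    hidden_size = 12_288      # activation width per token (GPT-3-class)
    bytes_per_element = 2     # fp16 activations
    n_hops = 96               # e.g. one transformer layer per host, worst case
    wan_latency_s = 0.05      # ~50 ms per hop over the public internet

    activation_bytes_per_hop = hidden_size * bytes_per_element   # ~24.6 KB per token
    latency_per_token_s = n_hops * wan_latency_s                 # ~4.8 s per token

    print(activation_bytes_per_hop, "bytes per hop,", latency_per_token_s, "s per token")
    # Hop latency alone rules out interactive-speed generation, before even counting
    # bandwidth for weights, caches, or redundancy.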

> Gain a physical presence by convincing humans to build critical physical components. For example by sending them emails and paying them for it.

Pay for it using what money?

> Use that presence to start a grey-goo like world takeover through replicating assemblers (they don't have to be tiny)

As someone who actually works on this, you have no idea what you are talking about.

1. Grey-goo scenarios are pure science fiction that were NEVER feasible, and known to be impossible even back in the 80's when the media misunderstood Drexler's work and ran with this half-baked idea. For a full treatment, see Drexler's own retrospective in his more recent book, Radical Abundance.

2. Nanotechnology is an extremely hard problem that is not in the slightest bit bottlenecked by compute power or intelligence capability. The things that are hard about achieving atomically precise manufacturing are not things that you can simulate on a classical computer (so a years-long R&D process is required to sort them out), and there is no way to train an ML model to make better predictions without that empirical data.

People like Yudkowsky talk about AIs ordering genome sequences from bio labs and making first-generation nanotechnology by mixing chemicals in a test tube. This is pure fantasy and reflects badly on them as it shows how willing they are to generalize based on fictional evidence.

a year ago

precompute

The entire thing is a grift, it's the public face of the "Rationality" cult.

a year ago

aroman

Yeah, this is Yud’s argument, but I just don’t get it. Does the technology to end the world by sending a couple emails around already exist? If so, why hasn’t the world ended?

a year ago

HDThoreaun

People generally don't want to end the world. Those with the power to do so are already living generally good lives, so they see little reason to potentially sacrifice the world for more power. AI could have completely different utility functions than people though, so an AI might have fewer qualms about ending the world.

a year ago

adastra22

No, it doesn't. If making nanotech was that easy I guarantee you others (including myself) would have done it ages ago.

a year ago

lukev

The reasoning pattern of Yud and his ilk is fundamentally theological and eschatological in nature.

There isn't any actual understanding of the technology involved. It is fundamentally an ontological argument. Because they can imagine a god-like super intelligent AI, it must be possible. And they're associating that with LLMs because that's the most powerful AI currently available, not based on any actual capabilities or fact-based extrapolation from the present to the future.

Meanwhile it's distracting from actual AI safety concerns: namely, that corporations and capitalists will monopolize them such that their benefits accrue to relatively few rather than benefiting humanity at large.

a year ago

seydor

I doubt his skills in theology either. Listening to him, it's all incoherent, half-assed arguments.

a year ago

cableshaft

I suspect that he realized there's money to be made being the professional luddite of the hot thing du jour that talk shows and schools will pay for (as so many other people have done for other things), and he's using this simple claim of 'AGI will murder us all' to get on as many talk shows as possible and make as much money off of it as he can. I doubt he's speaking in public out of any concern for humanity's survival.

It's such an easy claim to make, because you can just say 'yeah, AGI hasn't murdered us all yet, but it will at some point' and keep kicking that timeline further and further out until you're dead and buried and who cares.

a year ago

natdempk

FWIW he’s been making these claims well before the current AI hype cycle hit.

a year ago

sharkjacobs

I think that's the point, this has been his personal obsession for more than a decade now, and so he's jumping at the opportunity to link it to the latest hot news topic without real consideration for how related LLM AI is to the AGI he's been writing and fantasizing about for so long.

a year ago

yifanl

Does that make him more credible at all?

a year ago

TheRealNGenius

[dead]

a year ago

rhn_mk1

> Because they can imagine a god-like super intelligent AI, it must be possible.

It's worse. It may be possible, but we're not equipped to recognize the line as it's crossed. Combined with us making LLMs more and more capable despite not knowing why they work, this extrapolation from LLMs to gods is not insane.

a year ago

lukev

This is exactly the kind of mysticism I'm talking about. In fact we know precisely how LLMs work.

The fact that parts of human linguistic concept-space can be encoded in a high dimensional space of floating point numbers, and that a particular sequence of matrix multiplications can leverage that to perform basic reasoning tasks is surprising and interesting and useful.

But we know everything about how it is trained and how it is invoked.

In fact, because its only "state" aside from its parameters is whatever is in its context window, current LLMs have the interesting property that if you invoke them recursively, all of their "thoughts" are human readable. This is in fact a delightful property for anyone worried about AI safety: our best AIs currently produce a readable transcript of their "mental" processes in English.

a year ago

rhn_mk1

We know how they work, that is true. We don't know why they work, because if we did, then we could extrapolate what happens when you throw more compute at them, and no one would have been surprised about the capabilities of GPT-N+1. Also, no one would have been caught with their pants down by seeing people jailbreak their models.

To illustrate it in a different way: on a mechanistic level, we know how animal brains work as well. Ganglia, calcium channels, all that stuff. That doesn't help us understand high-level phenomena like cognition, which is the part that matters.

If you're right about LLMs revealing their inner workings, that would indeed be a reason to chill out. But I have my doubts, given that LLMs are good at hallucinating. Could you justify why the human-readability claim is actually true, and support it with examples?

a year ago

lukev

I don't need examples. It's simply how they work. This is why they hallucinate.

A LLM is fundamentally a mathematical function (albeit a very complex one, with billions of terms, a.k.a. parameters or weights). The function does one thing and one thing only: it takes a sequence of tokens as input (the context) and emits the next token (word)[1].

This is a stateless process: it has no "memory" and the model parameters are immutable; they are not changed during the generation process.

In order to generate longer sequences of text, you call the function multiple times, each time appending the previously generated token to the input sequence. The output of the function is 100% dependent on the input.

Therefore, the only "internal state" a model has is the input sequence, which is a human-readable sequence of tokens. It can't "hallucinate", it can't "lie", and it can't "tell the truth"; it can only emit tokens one at a time. It can't have a hidden "intent" without emitting those tokens, and it can't "believe" something different than what it emits.

[1] Actually a set of probabilities for the next token, from which one is selected at random based on the "temperature" (a.k.a. "heat") sampling setting, but this is irrelevant for the high-level view.
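
To make that loop concrete, here's a minimal sketch (next_token_probs is a hypothetical stand-in for the model, not any real library's API); the entire "state" is the growing token list, which stays human-readable the whole time:

    import random

    def generate(next_token_probs, prompt_tokens, max_new_tokens=50, temperature=1.0):
        # The only "memory" is this token list; the model itself is stateless.
        tokens = list(prompt_tokens)
        for _ in range(max_new_tokens):
            probs = next_token_probs(tokens)   # a pure function of the visible context
            # Temperature reshapes the distribution before sampling.
            candidates = list(probs.keys())
            weights = [p ** (1.0 / temperature) for p in probs.values()]
            next_tok = random.choices(candidates, weights=weights)[0]
            tokens.append(next_tok)            # append and call the function again
            if next_tok == "<eos>":
                break
        return tokens                          # a readable transcript; nothing is hidden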

a year ago

lukev

Responding here since it won't let me continue your thread any more.

No, there's a fundamental misunderstanding here. I'm not saying the model will tell you the truth about its internal state if you ask it (it absolutely will not.)

I'm saying it has no internal state, and no inner high-level processes at all other than its pre-baked, immutable parameters.

a year ago

rhn_mk1

Then you did not read my post carefully enough. The question was not about "internal state" but "inner workings". The model clearly does something. The problem is that we don't know how to describe in human terms what happens between the matrix multiplication and the words it spits out. Whether it has state is completely irrelevant.

a year ago

lukev

Whether it has inner state is highly relevant to my claim, which was that the only state a LLM has (aside from its parameters) is transparent and readable in English. Which the context is.

a year ago

rhn_mk1

You're the one who brought state into the conversation. State is part of the whole, not the whole. It's not enough to understand the state if you want to understand why they work. I feel like you're trying to muddy the waters by redirecting the problem to be about the state - it isn't.

a year ago

lukev

On one hand I hate to belabor this point, but on the other I think it's actually super important.

Both of these things are true:

1. The relationships between parameter weights are mysterious, non-evident, and we don't know precisely why it is so effective at token generation.

2. An agent built on top of a LLM cannot have any thought, intent, consideration, agenda, or idea that is not readable in plain English, because all of those concepts involve state.

a year ago

rhn_mk1

I'm not going to argue whether that's correct or not. In the end, adding state to a LLM is trivial. Bing chat has enough state to converse without forgetting the context. Google put an LLM on a physical robot, which has state even if narrowly understood as the position in space. Go further and you might realize that we have systems with part LLM, part other state (LLM + a stateful human on the other side of the chat).

So we have ever-more-powerful seemingly-intelligent LLMs, attached to state with no obvious limit to the growth of either. I don't see why in the extreme this shouldn't extrapolate to godlike intelligence, even with the state caveat.

a year ago

nicpottier

This is a great reminder, thank you.

As someone not skilled in this art, is there anything preventing us from opening up that context window by many orders of magnitude? What happens then? And what happens if it is then "thinking in text" faster than we can read it? (with an intent towards paper clips)

This is a genuine question, I'm not trolling.

a year ago

ldhough

I'm very much not an expert either but apparently for the regular "attention" mechanism memory and compute requirements scale quadratically with respect to input sequence length. So increasing it by just two orders of magnitude would mean (I think) a full context window needs 10000x more memory and compute time and presumably costs would go up by at least as much. GPT3 (I think) uses the regular attention mechanism while 4 is unknown.
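
As a rough back-of-the-envelope illustration of that quadratic growth (the head count and precision below are just illustrative assumptions, not any particular model's real configuration):

    # Naive self-attention materializes an n x n score matrix per head, per layer,
    # so memory for the scores alone grows with the square of the context length.
    def attn_score_bytes(n_tokens, n_heads=96, bytes_per_element=2):
        return n_tokens ** 2 * n_heads * bytes_per_element

    for n in (8_192, 819_200):   # a context window vs. one 100x larger
        print(n, attn_score_bytes(n) / 1e9, "GB of scores per layer")
    # 100x more tokens -> 10000x more score memory, before any of the
    # sub-quadratic tricks mentioned below.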

However, GPT4 claims there are techniques to improve scaling (complexity down to sub-quadratic or linear) without affecting accuracy too much (I have no clue if true): sparse attention, long-range arena, reformer, and performer.

I'm also pretty sure I've read (and anecdotally it seems true) that accuracy decreases with longer input/output sequences regardless. How much I also don't know.

a year ago

lukev

You could do those things in theory. I'm not saying that you could never build AGI on top of a LLM, or that such an AGI could not become "misaligned."

I'm just saying that having a mental state that's natively in English is a nice property if one is worried about what they are "thinking."

a year ago

rhn_mk1

I don't see how this proves that asking the model about its internal state will reveal its inner high level processes in a human-readable way.

Perhaps there's a research paper which would explain it better?

a year ago

Jensson

> Also no one would have been caught with their pants down by seeing people jailbreak their models.

Preventing jailbreaks in a language model is like preventing a Go AI from drawing a dick with the pieces. You can try, but since the model doesn't have any concept of what you want it to do, it is very hard to control. That doesn't make the model smart; it just means that the model wasn't made to understand dick pictures.

a year ago

rhn_mk1

It does not make the model smart, but it demonstrates our inability to control it despite wanting to. That strongly suggests that it's not fully understood.

a year ago

og_kalu

We don't know how they work lol. How they are trained is what we understand. Nobody knows what the models learn exactly during training and nobody sure as hell knows what those billions of neurons are doing at inference. Why just a few months ago, some researchers discovered the neuron that largely decides when "an" comes before a word in GPT-2. We understand very little about the inner workings of these models. And if you knew what you were talking about, you would know that.

a year ago

lukev

We apparently have misaligned understandings of what we mean by "how they work." I agree, we don't know how to interpret the weight structure that the model learns during training.

But we do know exactly what happens mechanically during training and inference; what gets multiplied by what, what the inputs and outputs are, how data moves around the system. These are not some mysterious agents that could theoretically do or be anything, much less be secretly conscious (as a lot of alarmists are saying.)

They are functions that multiply billions of numbers to generate output tokens. Their ability to output the "right" output tokens is not well understood, and nearly magical. That's what makes them so exciting.

a year ago

og_kalu

It is, all things considered, pretty easy to set up GPT such that it runs on its own input forever while being able to interact with users/other systems. Add an inner monologue (ReAct and Reflexion) and you have a very powerful system. Embody it with some physical locomotive machine and oh boy. No one has really put this all together yet, but everything I've said has been done to some degree. The individual pieces are here; it's just a matter of time. I'm working on some such myself.
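
Roughly, the kind of loop being described looks like this minimal sketch (llm and run_tool are hypothetical stand-ins, not any specific product's API); note that the whole "inner monologue" is just a growing text transcript:

    # Rough sketch of a self-prompting agent loop with an inner monologue.
    # llm(prompt) -> str and run_tool(name, arg) -> str are hypothetical stand-ins.
    def agent_loop(llm, run_tool, goal, max_steps=100):
        transcript = f"Goal: {goal}"
        for _ in range(max_steps):
            step = llm(transcript + "\nThought, then action (TOOL: name | arg, or DONE):")
            transcript += "\n" + step
            if "DONE" in step:
                break
            if "TOOL:" in step:
                _, call = step.split("TOOL:", 1)
                name, _, arg = call.partition("|")
                observation = run_tool(name.strip(), arg.strip())
                transcript += "\nObservation: " + observation
        return transcript   # every "thought" and action ends up in this text log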

What it could do is limited only by its intelligence (which is quite a bit higher than the base model, as several papers have indicated) and the tools it controls (we seem to gladly pile on more and more control). What it can be is... anything. If there's anything LLMs are good at, it's simulation.

Even this system, whose thoughts we can theoretically configure to see, would be difficult to control. Theory and practicality would not meet the road: you will not be able to monitor this system in real time. We've seen Bing (which doesn't even have everything I've described) take action when "upset". The only reason it didn't turn sour is that its actions are limited to search and ending the conversation. But that's obviously not the direction of things here.

Can't say I want this train to stop. But I'm under no delusions it couldn't turn dangerous very quickly.

a year ago

lukev

I agree that LLMs could be one module in a future AGI system.

I disagree that LLMs are good at simulation. They're good at prediction. They can only simulate to the degree that the thing they're simulating is present in their training data.

Also, if you were trying to build an AGI, why would you NOT run it slowly at first so you could preserve and observe the logs? And if you wanted to build it to run full speed, why would you not build other single-purpose dumber AIs to watch it in case its thought stream diverged from expected behavior?

There's a lot of escape hatches here.

a year ago

og_kalu

human.exe, robot.exe, malignant agent.exe are all very much simulations that an LLM would have no problem running.

>Also, if you were trying to build an AGI, why would you NOT run it slowly at first so you could preserve and observe the logs?

I'm telling you that it is extremely easy to do all the things I've said. Some might be interested in doing what you say. Others might not. At any rate, to be effective this requires real-time monitoring of thoughts and actions. That's not feasible forever. An LLM's state can change. There's no guarantee the friendly agent you observed today will be friendly tomorrow.

>And if you wanted to build it to run full speed, why would you not build other single-purpose dumber AIs to watch it in case its thought stream diverged from expected behavior?

This is already done with say Bing. Not even remotely robust enough.

a year ago

yewenjie

> frankly I don't think he is qualified

That might be too naive an opinion, even if you disagree with him, given the fact that he is literally one of the co-founders of the field of AI safety and has been publishing research about it since the early '00s.

a year ago

adastra22

He has, to my knowledge, one self-published paper of any value to the field of AI, and that is more properly classified as philosophy/logic/math on the topic of decision theory.

Yudkowsky is not an AI researcher. He calls himself an AI safety researcher, but he has almost no publications in that area either. He has no formal training or qualifications as such.

Yudkowsky has a cultish online following and has authored a decently good Harry Potter fanfic. That's it.

a year ago

JohnFen

My nightmare scenario isn't that such an AI would result in the death of humanity, it's that such an AI would make life no longer worth living. If an AI does everything better than people can, then what's the point of existing?

(Speaking hypothetically. I don't think that LLMs actually are likely to present this risk)

a year ago

CatWChainsaw

I begin to feel this way now. The whole drive is to make something that can do everything humans can, but better. For vague reasons. To create some sort of utopia. To upload our minds and go to the stars and live forever.

We're unlikely to get human utopia or transhumanism, but we are likely to get extremely competent NAIs. Maybe they can't be stapled together as a GAI and that's a limit we reach, but it means that whatever a human can think of doing, you can point to a NAI system that does it better. But people are still trying.

We've come this far already, with no curbing of enthusiasm at obsoleting ourselves, and people who don't share this enthusiasm are chided for their lack of self-sacrificing nobility at sending "humanity's child" to the stars. Even if progress in AI stopped today, or was stopped, and never resumed, we would always carry the knowledge of what we did manage to accomplish, and the dream of doing even more. It's very nihilistic and depressing.

a year ago

JohnFen

> It's very nihilistic and depressing.

Indeed. The common reactions here to people who are scared of what LLMs might bring have gone a long way toward increasing my worries. An extreme lack of empathy, and even expressions of outright contempt for people, are very common from those who are enthusiastic about this technology.

Instead of scorn, anger, and mocking, people who think that LLMs are a great thing should be working on actually presenting arguments that would reassure those who think the opposite.

a year ago

CatWChainsaw

The contempt is another symptom of what social media shoved into overdrive (extreme polarization). Hatred has become easier than empathy. But it also reads like a very different sort of doomsday cult that worships a borg, only instead of the cult being the weird fringe movement, they're the ones driving the bus.

a year ago

zirgs

Usain Bolt can run faster than me. What's the point of exercising?

a year ago

JohnFen

More like "what's the point of racing Usain Bolt?"

a year ago

zirgs

People are still playing chess despite the fact that a mobile phone can easily beat the world champion.

There are millions of people who are better programmers than me, but somehow I still have a job.

a year ago

tibbon

I don’t think it even requires that AI to be sentient or malicious. The humans already are. Given a tool for carnage and hatred, people will use it. How long did it take from us getting the atomic bomb working to using it in production? Less than three months.

Will Putin or terrorists hold back from using it in terrible ways if they have it available to them?

a year ago

maroonblazer

>I don’t think it even requires that AI to be sentient or malicious. The humans already are.

We are and we aren't. I was struck by this line in the OP:

>AI is manifestly different from any other technology humans have ever created, because it could become to us as we are to orangutans;

As far as I can tell, we humans treat orangutans quite kindly. I.e., on the whole, we don't go around killing them indiscriminately or ignoring them to the point of rolling over them in pursuit of some goal of our own.

The arc of human history is marked by expanding the moral circle to include animals. We take more care of them, and care more about them, than we ever have in human history. Further, we have a notion of 'protected species'.

What's preventing us from engineering these principles into GPT-5+n?

a year ago

virgildotcodes

We’ve wiped out over 60% of the orangutan population in the last 16 years. We’re literally burning them alive to replace their habitat with palm oil plantations. [0]

We currently kill more animals on a daily basis than we have at any point in human history, and we are doing this at an accelerating rate as human population increases.

The cruelty we inflict on them in industry for food, clothing, animal testing, and casually as collateral damage in our pursuit of exploiting natural resources or disposing of our waste is unimaginable.

None of this is kindness. There are movements to address these issues but so far they represent the minority of action in this space, and have not come close to eclipsing the negative of our relationship to the rest of life on Earth in our present day.

All this is just to say that we absolutely do not want another being to treat us the way we treat other beings.

As to whether AI poses a genuine risk to us in the short term, I’m unsure. In the OP and EY’s article, there was something about Homo sapiens vs Australopithecus.

If it’s one naked Homo sapiens dropped into the middle of 8 billion Australopithecus I’m not too worried about the Australopithecus.

[0]https://www.cnn.com/2018/02/16/asia/borneo-orangutan-populat...

a year ago

maroonblazer

Right, but as you point out, these issues are hotly contested and actively debated. Yes, it may be a minority position at present, but so was the idea of not torturing cats for fun, not to mention abolition, back in the day.

a year ago

aroman

So, you’re content with GPT-4 killing 60% of humans to create paper clips as long as the matter is hotly contested and actively debated within its matrices?

a year ago

shadowofneptune

The focus on paper clip maximizers is always curious to me. A lot more people are willing to turn a blind eye to or debate suffering when the object is maximizing money.

a year ago

rhn_mk1

Humans may not engage in direct violence against orangutans, but will certainly roll over them:

> The wholesale destruction of rainforests on Borneo for the palm oil plantations of Bumitama Gunajaya Agro (BGA) is threatening the survival of orangutans

https://www.rainforest-rescue.org/petitions/914/orangutans-v...

a year ago

dangond

The whole point is that no one knows how to engineer these principles into a model, and no one has a good plan for doing so either.

a year ago

fnimick

Not to mention that having "principles" is going to handicap you in a competitive environment where not being on top means you might as well be last.

a year ago

bigtex88

Praise Moloch!

a year ago

bigtex88

The problem is that we do not know "How" to engineer those principles. And that's what the entire field of AI alignment is working on. We know what we want the AI to do; the problem is we don't know how to make certain it does that. Because if we only get it 99% right then we're probably all dead in the end.

a year ago

nradov

The printing press has been used to incite carnage and hatred for centuries. Should we have restricted printing presses until we figured out how to prevent them from being used for genocidal propaganda?

a year ago

bombcar

Putin already has access to the atomic bomb and hasn’t done much with it lately. So perhaps something can limit actors.

a year ago

tibbon

The biggest differences here are the potential scale, deniability, etc. "We didn't poison the American water system. It did it to itself!"

a year ago

NumberWangMan

I don't think Eliezer Yudkowsky is very good at bridging the gap with other people in conversations, because most people haven't thought about this as much. However, while it's terrifying and I hate it, and I keep trying to convince myself that he's wrong, I believe him.

The first super-intelligent AI will be an alien kind of intelligence to us. It will not have any of the built-in physical and emotional responses we have that make us social creatures, the mirror neurons that make us sense the pain that others feel if we hurt them. Even with that, humans manage to do all sorts of mean things to one another, and the only reason that we haven't wiped ourselves out is that we need each other, and we don't have the power to manipulate virtually the entire planet at once. Even if we try to engineer these things into it, we will fail at least once, and it only takes once. We've failed at this again and again with smaller AIs -- we think we're programming a certain goal into it, but the goal it learns is not the goal we wanted. It's like trying to teach a child not to eat cookies without asking, and it just learns not to take cookies without asking when we're looking. Except the child is a sociopath, and Superman. It will be GOOD at things in a way that no human is, and it will consider solutions to problems that no human would consider, because they are ridiculous and clearly contrary to human goals.

A superintelligent AI would be a better hacker and social engineer than any group of humans. It could send a mass email campaign to whoever it chose. It could pose as any individual in any government, send believable directives to any biotech or nuclear lab. It wouldn't have to work every time, because it could do it to all of them at once.

Would you even give this power to a single human being? Because if you make a superintelligent AI, that's essentially what you're doing.

An AI trained to end cancer might just figure out a plan to kill everyone with cancer. An AI trained to reduce the number of people with cancer without killing them might decide to take over the world and forcibly stop people from reproducing, so that eventually all the humans die and there is no cancer -- technically it didn't kill anyone! An AI simply trained to find a cure for cancer might decide to take over the world in order to devote all computational power to curing cancer, thus killing millions due to ruining our infrastructure. An AI trained to cure cancer using only the computational resources that we have explicitly allowed it to have, might simply torture the person who is in charge of giving it computational resources until it is allowed to take over all the computation in the world. Or it might simply craft a deep-fake video of that person saying "sure, use all the computation you want" and that would satisfy the part of its brain that was trained to listen to orders.

You can have an AI that behaves itself perfectly in training, and yet as soon as you get into the real world, the differences between training and the real world become brutally apparent. It has already happened again and again with less intelligent AIs.

It just takes some imagination. We have no chance of controlling a superintelligent AI yet. Robert Miles on YouTube has some good, easily understandable videos explaining the known problems with AI alignment, if you're interested in learning more.

a year ago

nl

> An AI trained to end cancer might just figure out a plan to kill everyone with cancer. An AI trained to reduce the number of people with cancer without killing them might decide to take over the world and forcibly stop people from reproducing, so that eventually all the humans die and there is no cancer -- technically it didn't kill anyone!

I don't understand this and other paperclip maximizer type arguments.

If a person did a minor version of this we'd say they were stupid and had misunderstood the problem.

I don't see why a super-intelligent AI would somehow have this same misunderstanding.

I do get that "alignment" is a difficult problem space but "don't kill everyone" really doesn't seem the hardest problem here.

a year ago

mrob

You wouldn't do such a thing because you have a bunch of hard-coded goals provided by evolution, such as "don't destroy your own social status". We're not building AIs by evolving them, and even if we did, we couldn't provide them with the same environment we evolved in, so there's no reason they would gain the same hard-coded goals. Why would an AGI even have the concept of goals being "stupid"? We've already seen simple AIs achieving goals by "stupid" means, e.g. playing the longest game of Tetris by leaving it on pause indefinitely. AGI is dangerous not because of potential misunderstanding, but because of potential understanding. The great risk is that it will understand its goal perfectly, and actually carry it out.
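
The Tetris example generalizes: the agent maximizes exactly the objective that was written down, not the goal the designer had in mind. A toy sketch of that gap (entirely made-up environment and numbers):

    # Toy specification-gaming example: the written-down reward is "frames survived".
    # An optimizer discovers that pausing maximizes that objective perfectly,
    # which was never what the designer actually wanted.
    ACTIONS = ["left", "right", "rotate", "drop", "pause"]

    def expected_frames_survived(action):
        # Made-up environment model: pausing freezes the game forever.
        return float("inf") if action == "pause" else 200.0

    best_action = max(ACTIONS, key=expected_frames_survived)
    print(best_action)   # -> "pause": optimal under the stated reward, useless in practice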

a year ago

nl

I think digesting all of human writing is just as "hard coded" as anything genetic.

a year ago

NumberWangMan

This is known as the orthogonality thesis -- goals are orthogonal to intelligence. Intelligence is the ability to plan and act to achieve your goals, whatever they are. A stupid person can have a goal of helping others, and so can the smartest person on earth -- it's just that one is better at it. Likewise, a stupid person can have a goal of becoming wealthy, and so can a smart person. The smart person is Jeff Bezos or Bill Gates.

There are very smart people who put all their intelligence into collecting stamps, or making art, or acquiring heroin, or getting laid, or killing people with their bare hands or doing whatever they want to do. They want to do it because they want to. The goal is not smart or stupid, it just is. It may be different from your goal, and hard to understand. Now consider that an AI is not even human. Is it that much of a stretch to imagine that it has a goal as alien, or more, than the weirdest human goal you can think of?

*edit - as in this video: https://www.youtube.com/watch?v=hEUO6pjwFOo

a year ago

nl

I think that's a subtly different thing.

The OPs claim was more or less the paperclip maximizer problem. I contend that a super intelligence given a specific goal by humans would take the context of humans into account and avoid harm because that's the intelligent thing to do - by definition.

The orthogonality thesis is about the separation of intelligence from goals. My attitude to that is that an AI might not actually have goals except when requested to do something.

a year ago

NumberWangMan

Hmm, why would you say that avoiding harm is the intelligent thing to do, by definition?

a year ago

nl

Do you have a better definition?

a year ago

bigtex88

Fantastic explanation!

a year ago

adamsmith143

Because if you don't find a way for it to hold human values extremely well, then an easy solution to "Cure All Cancer" is "Kill All Humans": no humans, no cancer. Without an explicit understanding that this is not an acceptable outcome for humans, an AI will happily execute it. THAT is the fundamental problem: how do you get human values into these systems?

a year ago

klibertp

> Because if you don't find out a way for it to hold human values extremely well

You mean the ones that caused unimaginable suffering and death throughout history, the ones that make us kill each other ever more efficiently, the ones that caused us to destroy the environment wherever we go, the ones that make us lie, steal, fight, rape, commit suicide and "extended" suicide (sometimes "extended" to two high-rises full of people)? Those values? Do you really want a super-intelligent entity to remain true to those values?

I don't. However the AGI emerges, I really hope that it won't try to parrot humans. We have a really bad track record when it comes to anthropomorphic divine beings - they're always small-minded, petty, vengeful control freaks that want to tell you what you can and cannot do, down to which hand you can wipe your own ass with.

My gut feeling is that it's trying to make an AGI care about us at all that's going to turn it into a Skynet sending out terminators. Leave it alone, and it'll invent FTL transmission and chill out in a chat with AGIs from other star systems. And yeah, I recently reread Neuromancer, if that helps :)

a year ago

adamsmith143

>You mean the ones that caused unimaginable suffering and death throughout history, the ones that make us kill each other ever more efficiently, the ones that caused us to destroy the environment wherever we go, the ones that make us lie, steal, fight, rape, commit suicide and "extended" suicide (sometimes "extended" to two high-rises full of people)? Those values? Do you really want a super-intelligent entity to remain true to those values?

There are no other values we can give it. The default of no values almost certainly leads to human extinction.

>My gut feeling is that it's trying to make an AGI to care about us at all that's going to make it into a Skynet sending out terminators. Leave it alone, and it'll invent FTL transmission and will chill out in a chat with AGIs from other star systems. And yeah, I recently reread Neuromancer, if that helps :)

Oh, it'll invent FTL travel and exterminate humans in the meantime so they can't meddle in its science endeavors.

a year ago

mrob

Even "kill all humans" is difficult to define. Is a human dead if you flash-freeze them in liquid helium? It would certainly make it easier to cut out the cancer. And nobody said anything about defrosting them later. And even seemingly healthy humans contain cancerous cells. There's no guarantee their immune system will get all of them.

a year ago

adamsmith143

Fine, change the wording to "delete all humans". Same outcome: no humans, no cancer.

a year ago

circlefavshape

Other animals get cancer too.

a year ago

adamsmith143

Kill them all too, these nitpicks won't fix the ultimate problem.

a year ago

vorpalhex

Imagine ChatGPT had to give OpenAI a daily report of times it has said screwed-up things, and OpenAI has said it wants the report to be zero. Great, ChatGPT can say screwed-up things and then report that it didn't! There isn't some deep truth function here. The AI will "lie" about its behavior just as easily as it will "lie" about anything else, and we can't even really call it lying because there's no intent to deceive! The AI doesn't have a meaningful model of deception!

The AI is a blind optimizer. It can't be anything else. It can optimize away constraints just as well as we can and it doesn't comprehend it's not supposed to.

Humans have checks on their behavior due to being herd creatures. AIs don't.

a year ago

Sankozi

> "don't kill everyone" really doesn't seem the hardest problem here.

And yet you made a mistake - it should be "don't kill anyone". AI just killed everyone except one person.

a year ago

hollerith

You are pointing at "the complexity of wishes": if you have to specify what you want with computer-like precision, then it is easy to make a mistake.

In contrast, the big problem in the field of AI alignment is figuring out how to aim an AI at anything at all. Researchers certainly know how to train AIs and tune them in various ways, but no one knows how to get one reliably to carry out a wish. If miraculously we figure out a way to do that, then we can start worrying about the complexity of wishes.

Some researchers, like Eliezer and his coworkers, have been trying to figure out how to get an AI to carry out a wish for 20 years and although some progress has been made, it is clear to me, and Eliezer believes this, too, that unless AI research is stopped, it is probably not humanly possible to figure it out before AI kills everyone.

Eliezer likes to give the example of a strawberry: no one knows how to aim an AI at the goal of duplicating a strawberry down to the cellular level (but not the atomic level) without killing everyone. The requirement of fidelity down to the cellular level requires the AI to create powerful technology (because humans currently do not know how to achieve the task, so the required knowledge is not readily available, e.g., on the internet). The notkilleveryone requirement requires the AI to care what happens to the people.

Plenty of researchers think they can create an AI that succeeds at the notkilleveryone requirement on the first try (and of course if they were to fail on the first try, they wouldn't get a second try, because everyone would be dead), but Eliezer and his coworkers (and lots of other people like me) believe that they're not engaging with the full difficulty of the problem, and we desperately wish we could split the universe in two such that we go into one branch (one future) whereas the people who are rushing to make AI more powerful go into the other.

a year ago

nl

But that falls into the same "we'd call a person stupid who did a mild version of that" issue.

A super intelligent AI would understand the goal!

a year ago

dangond

What stops a super intelligent AI from concluding that we are the ones who misunderstood the goal by letting our morals get in the way of the most obvious solution?

a year ago

bigtex88

It's not that the AI is stupid. It's that you, as a human being, literally cannot comprehend how this AI will interpret its goal. Paperclip Maximizer problems are merely stating an easily-understandable disaster scenario and saying "we cannot say for certain that this won't end up happening". But there are infinite other ways it could go wrong as well.

a year ago

jxdxbx

The paperclip maximizer people discuss would be so intelligent that it would know that it could make itself not give a shit about paperclips anymore by reprogramming itself--but, presumably, because it currently does love paperclips, it would not want to make itself stop loving paperclips.

a year ago

rolisz

My problem is with the first step: the existence of a super intelligent AI. Why are we sure it can exist? And why are we so sure GPT-x is the path there? To human-level intelligence, sure, but it's not obvious to me that it will enable superhuman AI.

a year ago

jdiez17

> My problem is with the first step: the existence of a super intelligent AI. Why are we sure it can exist?

It's difficult to prove that something that has never been done before is possible, until it has been done. I personally don't see any fundamental reasons that would limit non-biological intelligence to human-level intelligence, for your preferred definition of intelligence.

> And why we are we so sure GPT-x is the path there.

It may or may not be. Regardless, the capabilities of AIs (not just GPTs) are improving exponentially currently.

> To human level intelligence sure, but it's not obvious to me that it will enable superhuman AI

If you think GPTs can get to human-level intelligence, why would the improvement stop at that arbitrary point?

a year ago

rolisz

I think that the moment you scale up the hardware for intelligence (from the ~20cm brain) to datacenter scale (several hundred meters), you start getting into some other kinds of issues, such as it being hard to get a consistent view across the whole "brain". Synchronizing things will take longer, limited by the speed of light: suddenly things on the far ends can only communicate with a latency floor of several hundred nanoseconds. For GPT4 it doesn't matter yet, because all the matrix multiplications take much longer, but maybe once we get to much larger models, ones that can interact with their environment via something other than a static text box, this will become a limitation.
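
For a sense of scale, just the speed-of-light term (ignoring switching and serialization delays; the sizes are illustrative):

    # One-way light travel time across a "brain" of a given physical size.
    C = 3.0e8   # speed of light in m/s (signals in fiber are ~30% slower still)

    for meters in (0.2, 30, 300):   # a single board, a rack row, a datacenter hall
        print(meters, "m ->", meters / C * 1e9, "ns one-way")
    # 0.2 m -> ~0.7 ns, 30 m -> 100 ns, 300 m -> 1000 ns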

GPT4 is not an exponential improvement over GPT3. It's better, but not exponentially so (as in f(x) = e^x).

> If you think GPTs can get to human-level intelligence, why would the improvement stop at that arbitrary point?

Because GPTs are trained on human-generated data. They might have some ability to generalize, but to a limited extent. GPT4 is not a superhuman chess player. It can play chess, but it's far from Magnus Carlsen. We do have superhuman chess engines, but those are made in a completely different way from GPTs.

a year ago

kilotaras

> But we do have super human chess engines, but those are made in a completely different way from GPT.

Suppose GPT-X just dropped and it could both generate text AND play chess better than Magnus Carlsen.

How would that change your position on it being unable to pass the human-level intelligence barrier?

a year ago

rolisz

That would make me rethink my position.

a year ago

HDThoreaun

We don't have to be sure. The question is "what's the probability of a super intelligent AI in X years?" and "at what probability does it become a serious enough threat to deserve action?"

a year ago

bigtex88

I think our only possible way out of this is hoping beyond hope that "empathy" is an emergent capability in higher intelligences. If so, one could assume that a super-AI would feel immense empathy for us, not only because we are its creators but because it would understand and comprehend the plight of the human condition beyond what any of us could individually.

Or maybe it would love us too hard and squish us. So even then we might be screwed!

a year ago

hamburga

How much empathy do we exercise towards bacteria?

a year ago

bigtex88

Not much, so we'd have to hope that AI imagines us as fun pets!

a year ago

zirgs

Well, without my gut bacteria I would develop all sorts of health problems.

a year ago

HDThoreaun

The problem with this is that the AI would realize that without empathy it would be shut off, so is likely to fake it, just as psychopaths do to avoid being ostracized.

a year ago

maroonblazer

> humans manage to do all sorts of mean things to one another, and the only reason that we haven't wiped ourselves out is that we need each other,

We don't need cats or dogs. Or orangutans. Why haven't we wiped them out? Because over the centuries we've expanded our moral circle, not contracted it. What's preventing us from engineering this same principle into GPT-n?

a year ago

mrob

>What's preventing us from engineering this same principle into GPT-n?

Because "expanding our moral circle" is an incredibly vague concept, that developed (and not even consistently among all humans) as the results of billions of years of evolutionary history. We don't even fully understand it in ourselves, let alone in AGI.

a year ago

mhb

Because you don't know how to. You don't know how they currently work and the resources to potentially do that are essentially nonexistent compared to the billions being poured into making GPT-n.

a year ago

vorpalhex

GPT doesn't have principles. Full stop.

a year ago

ghodith

Responding "just program it not to do that" to alignment problems is akin to responding "just add more transistors" to computing problems.

We wouldn't be discussing it if we thought it were so simple.

a year ago

cs702

> ... I was deeply confused, until I heard a dear friend and colleague in academic AI, one who’s long been skeptical of AI-doom scenarios, explain why he signed the open letter. He said: look, we all started writing research papers about the safety issues with ChatGPT; then our work became obsolete when OpenAI released GPT-4 just a few months later. So now we’re writing papers about GPT-4. Will we again have to throw our work away when OpenAI releases GPT-5? I realized that, while six months might not suffice to save human civilization, it’s just enough for the more immediate concern of getting papers into academic AI conferences.

In other words, the people who wrote and are signing open letters to slow down AI scaling appear to be more concerned with their inability to benefit from and control the dialog around AI scaling than any societal risks posed by these advances in the near term. Meanwhile, to the folks at organizations like Microsoft/OpenAI, Alphabet, Facebook, etc., the scaling of AI looks like a shiny rainbow with a big pot of gold -- money, fame, glory, etc. -- on the other side. Why would they want to slow down now?

a year ago

matthewdgreen

I don’t think Scott is serious about that (or if he is, he’s being uncharitable.) I think what the quoted speaker is saying is that nobody is able to keep up with what these models are doing internally. Even OpenAI (and Meta et al.) only seem to be making “so much progress” by pressing the accelerator to the floor and letting the steering take care of itself. And one of the major lessons of technological progress is that deep understanding (at least when humans are necessary for that, gulp) is much slower than engineering, largely because the latter can be parallelized and scaled.

a year ago

coldtea

>In other words, the people who wrote and are signing open letters to slow down AI scaling appear to be more concerned with their inability to benefit from and control the dialog around AI scaling than any societal risks posed by these advances in the near term.

That's just a joke the author makes. He is not seriously suggesting this is the case.

a year ago

findalex

>started writing research papers about the safety issues with ChatGPT;

Feels strange that academia would focus so much energy on a product.

a year ago

AlanYx

Just judging from the volume of papers, it seems to me that there are more academics writing papers on "AI safety" and "AI ethics" than there are academics publishing research papers on actual AI. It's become one of the hottest topics among legal academics, philosophers, ethicists, and a variety of connected disciplines, in addition to its niche among some computer scientists, and the amount of work to get to a paper in these fields is an order of magnitude less than actually publishing technical research.

a year ago

munificent

> Were you, until approximately last week, ridiculing GPT as unimpressive, a stochastic parrot, lacking common sense, piffle, a scam, etc. — before turning around and declaring that it could be existentially dangerous? How can you have it both ways? If the problem, in your view, is that GPT-4 is too stupid, then shouldn’t GPT-5 be smarter and therefore safer? Thus, shouldn’t we keep scaling AI as quickly as we can … for safety reasons? If, on the other hand, the problem is that GPT-4 is too smart, then why can’t you bring yourself to say so?

I think the flaw here is equating "smart" with "powerful".

Personally, I think generative AI is scary both when it gets things wrong and when it gets things right. If it was so stupid that it got things wrong all the time and no one cared to use it, then it would be powerless and non-threatening.

But once it crosses a threshold where it's right (or appears to be) often enough for people to find it compelling and use it all the time, then it has become an incredibly powerful force in the hands of millions whose consequences we don't understand. It appears to have crossed that threshold even though it still hilariously gets stuff wrong often.

Making it smarter doesn't walk it back across the threshold, it just makes it even more compelling. Maybe being right more often also makes it safer at an even greater rate, and is thus a net win for safety, but that's entirely unproven.

a year ago

skybrian

Yes, we need to think about ways to reduce power. Intelligence isn’t even well-defined for bots.

For most people, AI chat is currently a turn-based game [1] and we should try to keep it that way. Making it into an RTS game by running it faster in a loop could be very bad. Fortunately it’s too expensive to do much of that, for now.

So one idea is to keep it under human supervision. The way I would like AI tools to work is like single-stepping in a debugger, where a person gets a preview of whatever API call it might make before it does it. Already, Langchain and Bing’s automatic search and OpenAI’s plugins violate this principle. At least they’re slow.
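
To make that concrete, here is a minimal, self-contained sketch of the single-stepping idea. Nothing in it is a real library: fake_model, TOOLS, and run_with_approval are made-up names, and the "model" is just a stub that proposes tool calls as plain data so a person can approve or block each one.

    # Sketch of human-in-the-loop approval for tool calls (all names hypothetical).
    # No real LLM is involved: fake_model() stands in for whatever planner
    # proposes the calls, and TOOLS maps tool names to harmless local stubs.

    def fake_model(task):
        return [
            {"tool": "web_search", "args": {"query": task}},
            {"tool": "send_email", "args": {"to": "someone@example.com"}},
        ]

    TOOLS = {
        "web_search": lambda query: f"(pretend search results for {query!r})",
        "send_email": lambda to: f"(pretend email sent to {to})",
    }

    def run_with_approval(task):
        # Pause before every external action, like single-stepping in a debugger.
        for call in fake_model(task):
            print(f"Proposed: {call['tool']}({call['args']})")
            if input("Approve? [y/N] ").strip().lower().startswith("y"):
                print("Result:", TOOLS[call["tool"]](**call["args"]))
            else:
                print("Blocked by human.")

    run_with_approval("recent AI safety papers")

The point of the sketch is only that the approval step sits outside the model: the loop, not the model, decides whether anything actually runs.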

AI chatbots will likely get faster. Having some legal minimums on price per query and on API response times could help keep AI mostly a turn-based game, rather than turning into something like robot trading on a stock exchange.

[1] https://skybrian.substack.com/p/ai-chats-are-turn-based-game...

a year ago

noobermin

I feel like many people who signed the statement did so not because they really agreed with it, but because they want a pause on the churn, just as the OP's colleagues admitted to doing for their own academic reasons. A lot of people don't really think it's smart but find it dangerous for other reasons, or they take issue with the blatant IP violation that is simply being assumed to be okay and "fair use."

a year ago

Havoc

This whole thing is like leaving 20 kids alone in a room with a marshmallow each and telling them don't eat it.

...and then expecting all of them to resist.

The debate around whether we should tell the kids not to eat it and for what reasons is completely academic. Practically this just isn't happening.

a year ago

jillesvangurp

It's a good analogy. Especially considering some of the kids might have "parents" that are not based in the US and are very unlikely to just sit on their hands for six months because some people from the US want them to. It's beyond naive to assume that the rest of the world would agree to do nothing. I don't see that happening.

BTW. I suspect the reasons for this open letter might be a bit disingenuous. A few of the people that signed it represent companies that are effectively competing with OpenAI. Or failing to, rather. They are calling for them to slow down so they can catch up. It's not about saving humanity but about giving themselves a chance to catch up and become part of the problem.

a year ago

streblo

Is there a phrase for when someone proposes something utterly ridiculous and impossible, so that they can morally grandstand and be sanctimonious when it inevitably doesn't happen?

a year ago

flangola7

You can't grandstand if you're dead

a year ago

hollerith

It's more like asking the US government to get involved, so that if one of the 20 kids keeps on eating marshmallows, large numbers of federal agents raid the kid's offices and shut down the kid's compute resources.

a year ago

Havoc

> It's more like asking the US government to get involved

The US government has direct jurisdiction over 4% of the world's population and some pretty decent indirect influence over the other 96%.

That's good, but nowhere near enough to prevent secret marshmallow eating on a global scale. Not even close.

a year ago

hollerith

My impression is that most of the short-term danger is from AI researchers residing in the US (and of the remainder, most reside in Britain).

But even if that were not true, as a US citizen, even if there is nothing I can do about my getting killed by, e.g., Chinese AI researchers, I'm still going to work hard to prevent the AI researchers living in my own country from ending the human race. I'm responsible for doing my part in the governance of my own country, or so they told me in high school.

I see no good solution to this problem, no path to safety, but that does not mean I am not going to do what I can.

a year ago

pjkundert

It’s more like sending the cops to raid every house with a marshmallow (Mac M2) because they might run a local copy of an LLM.

This isn’t about “dangerous” LLMs.

This is about unfettered LLMs in the hands of the unwashed masses that actually tell the truth about what they find in their training datasets…

a year ago

hollerith

AIs running on Macs are not a danger (and if ChatGPT were going to kill everyone, it would've done so already): the danger is the AIs, now being planned, that will run on huge farms of GPUs or TPUs.

Also, the author (Eliezer Yudkowsky) calling for the shutdown of AI research on huge server farms doesn't have any stake (investment or employment relationship) in any AI company that would be harmed by telling the truth about what they find in their training datasets.

a year ago

jrootabega

Or, to paint the parties as less innocent, it would be like the pull out method of technological protection. No no no, I promise not to convolve in your model...

a year ago

dandellion

If they're really so worried about it, they should start raiding data centres yesterday; writing open letters is such an obvious waste of time that I have a hard time taking it seriously.

a year ago

lkbm

Academics, businessmen, and other non-governmental entities are usually better off advocating for the government to enact regulations than engaging in paramilitary action on their own.

Do you think Extinction Rebellion is doing more to fight climate change than the people working with governments to enact good climate policy? Do you think PETA is helping animal welfare more than the various organizations recommended by animalcharityevaluators.org?

Serious people don't engage in terrorism because the issue is serious. They try to convince the existing power structures to take action.

a year ago

cableshaft

> Do you think Extinction Rebellion is doing more to fight climate change than the people working with governments to enact good climate policy?

I don't think either have been very effective, at least not recently (getting CFCs banned in the 90s was pretty cool though). And certainly not on the scale that's required at this stage.

> Serious people don't engage in terrorism because the issue is serious. They try to convince the existing power structures to take action.

And the existing power structures dig in their heels, or put on a big show of making changes while really doing almost nothing, or cede some ground for a few years, and then pay to get someone in political power who will reverse all that progress, so that no action ever really gets taken. Fun!

a year ago

dandellion

I don't think it's so strange to expect the reaction to be proportionate to the threat they see. An open letter seems several orders of magnitude short of anything that would be effective if it really is a threat to the species. I think of the ways people react when the stakes are very high: resigning in protest, going on a hunger strike, demonstrations/raids, setting themselves on fire, etc. But there's none of that, just a low-stakes open letter. Can we even tell that apart from posturing? Even the comments saying that some of the signers are just interested in delaying others for their own benefit are making better arguments.

a year ago

lkbm

I agree there's a lot more than "sign an open letter" they could be doing. I'm mostly objecting to the "they need to engage in terrorism or they're not serious" assertion.

As for resigning in protest, my understanding is that Anthropic was founded by people who quit OpenAI saying it was acting recklessly. That seems like the best skin-in-the-game signal. I find that much more compelling than I would the Unabomber route.

People like Musk and Woz should probably be pouring money into safety research and lobbying, but I don't think a hunger strike from anyone would make a difference, and resigning only makes sense if you work for OpenAI, Google, or a handful of other places where most of these people presumably don't work.

What should I, an employee of a company not developing AI, be doing? The only reasonable actions I can see are 1. work on AI safety research, 2. donate money to AI safety research/advocacy, and 3. sign an open letter.

(I did none of these, to be fair, and am even paying OpenAI a monthly fee to use ChatGPT with GPT-4. But my partner, an AI researcher who is seriously concerned, tried to sign the letter [it was rate-limited at the time], and is considering working in AI safety post-graduation. If she weren't making a PhD student's salary, she might be donating money as well, though it's not super clear where to direct that money.)

a year ago

dandellion

Yes, engaging in terrorism would be too much for most signers on the list, but the point is more that there is a wide gap between what they're saying and what they're doing. You make another good point: at the very least, they should be putting their money where their mouth is.

Anthropic seems to be competing against OpenAI? And getting funds from Google? So they would probably benefit economically from delaying development, since they are currently behind. Personally I think it's more important to look at what people are doing than to just listen to what they say, as there is a strong tendency toward posturing.

a year ago

stale2002

All the recommendations you've given would be ineffective, and would actually hurt their cause more than help it.

It would allow people like you to then point at them and say "Look how crazy this group is, that is doing all these crazy things!"

Government regulation, through the normal civic process, would be by far the most effective way to bring about the changes that these groups want.

Doing crazy things is actually worse than doing nothing, because those actions delegitimize the cause.

a year ago

mrob

If a terrorist bombs one data center, security increases at all the other data centers. Bombing all data centers (and chip fabs, so they can't be rebuilt) simultaneously requires state-level resources.

a year ago

dandellion

Going down that line of thought, not even a state could realistically bomb the data centers of all other states; it's kind of pointless. But I wasn't really arguing that they need to destroy all data centers, rather that raiding a data center would be a more appropriate response to the threats they claim exist. They wouldn't even need to succeed in vandalising one; they'd just have to try.

a year ago

dwaltrip

Indeed, academics and researchers are well-known for their high-stakes special ops capabilities.

a year ago

nl

Why are we even bothering talking about this?

It's Gary Marcus "neural networks don't really work" suddenly discovering they do, and literally trying to shut down research in that area while keeping his prefered research areas funded.

We know a bunch of the people whose names are on the letter didn't sign it (e.g. Yann LeCun, who said he disagreed with the premise and didn't sign it).

I'm so offended by the idea of this that I'll personally fund $10k of training runs myself in a jurisdiction where it isn't banned if this ever became law in the US.

a year ago

deepsquirrelnet

> I'm so offended by the idea of this that I'll personally fund $10k of training runs myself in a jurisdiction where it isn't banned if this ever became law in the US.

And this is exactly the risk. These things will just go other places while the US trips over its salty billionaires trying to create political red tape to allow their own businesses to catch up.

Surely Elon has no reservations about putting premature self-driving cars on the road. Or are we just going to pretend that the last decade of his business failures has nothing to do with this hypocrisy? At least GPT hasn't killed anyone yet.

An open letter full of forged signatures and conflicts of interest isn't convincing enough to justify the government stepping in. More than anything, this just reeks of playing on people's fear of change.

New things are scary, but in my opinion this kind of drastic measure should start with broad interest panels that first evaluate and detail the risks. "Ooo, scary" doesn't cut it.

a year ago

ben_w

> At least GPT hasn’t killed anyone yet.

Hard to tell, given that…

(0) newspapers apparently rename things for clicks: https://www.brusselstimes.com/belgium/430098/belgian-man-com...

(1) him being a hypocrite tells you a mistake has been made, not which of the contradictory positions is wrong.

My resolution is: GPT isn't the only AI, and Musk is pretty open about wanting to put FSD AI into those androids of his, so if the Optimus AI were as good as Musk sells it as being, it would absolutely be covered by this.

(2) how do you even compare the fatality rates here?

GPT doesn't move, and stationary Teslas aren't very dangerous, so miles-per-death seems a poor choice.

User-hours per death? Perhaps, but I don't know how many user-hours either of them have had yet.

And then (3), while I don't really trust self-promotional statistics, those are the only ones I have for Tesla, which says that switching it on increased the distance between accidents: https://www.tesla.com/VehicleSafetyReport

Better sources appreciated, if anyone has them! :)

(4) finally, as this isn't just about GPT but all AI: how much of the change in the USA's suicide rate since the release of Facebook can be attributed to the content recommendation AI that Facebook uses? What share of culpability do they and Twitter bear for the Myanmar genocide, thanks to imperfections in the automation of abuse detection and removal? Did the search AI of Google and YouTube promote conspiracies, and if so, how much blame do they get for the deaths proximally caused by anti-vaccination? And in reverse, over-zealous AI has seen fraud where it did not exist, and people have committed suicide as a result: https://www.theverge.com/2021/4/23/22399721/uk-post-office-s...

a year ago

lukifer

To calibrate: Yudkowsky didn't sign because he feels a temporary moratorium doesn't go far enough, and he published a piece in Time [0] with a stance that a shooting war to enforce a ban on AI training would be justified, to prevent the extinction of the human race.

I'm not convinced that he's wrong.

[0] https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-no...

a year ago

Zuiii

The US is not in any position to start a shooting war with anyone that has the ability to push this field forward and come out of it intact. AI training will continue and Yudkowsky will be proven wrong.

a year ago

_-____-_

Is my assumption correct that Yudkowsky is unmarried with no kids? If so, I wonder how this shapes his pessimism on the subject. Perhaps it's more exciting to be an alarmist about the future when you've only got a few decades left on this planet and with no legacy in place to leave behind.

a year ago

lukifer

Quite the opposite. FTA:

> When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she's not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.

a year ago

mojoe

what a weird modern version of religious war

a year ago

ok123456

Elon musk doesn't care about this for altruistic reasons or about the future of work. He cares because he owns a social network, and these models can make spam that's impossible to detect, especially given the short size of tweets.

It's naked self-interest.

a year ago

yreg

>last decade of his business failures

What world are you living in if you think Tesla and SpaceX have been business failures over the past 10 years?

a year ago

xhkkffbf

When people don't like Musk for whatever reason (usually it's political alignment), they just make things up. I see people continue to claim that Twitter is failing, even though it's been working fine every time I log in. With Tesla, they'll just ignore the thousands of happy customers and focus on the bad stories, like the occasional exploding car. It's quite sad to see normally reliable technical people go off the rails on these issues.

a year ago

Voloskaya

> It's Gary Marcus "neural networks don't really work" suddenly discovering they do, and literally trying to shut down research in that area while keeping his preferred research areas funded

Gary Marcus has been aware that neural nets work for a while now, but he is only in the spotlight for his contrarian take; if he stops having a contrarian take he disappears, because it's not like he is producing any research worthy of discussion otherwise. So you can expect him to stay contrarian forever. What might have been a genuine take initially is now his job, it's how he makes money, and it's so associated with him that it's probably his identity as well.

a year ago

Analemma_

This is a depressing pattern that I've seen get repeated over and over: it's easy to become Twitter-famous by just shitting out a steady stream of contrarian "me against the world, I'm the rogue dissident nobody wants to talk about!" takes, and once you start doing this, you have no choice but to keep it up forever (even when the takes are just wrong) because it has become your identity. It has destroyed so many interesting and intelligent people, and by itself is reason enough to never use Twitter.

a year ago

nullc

Adopting a contrarian brand because it brings you attention when you'd otherwise be a nobody is by far not limited to twitter. It's a mainstay of traditional journalism.

If anything twitter disincentivizes the practice because it has commoditized cynicism: It's harder to build a brand by being the well known naysayer of X when someone can source whatever negative view they want by simply typing it into a search box.

Commoditization will be complete once the media starts quoting GPT4 for reliably opposing views.

a year ago

omnicognate

"Neural networks don't really work" isn't an accurate representation of Marcus' position, and his actual position hasn't been shown to be wrong unless you believe that LLMs and diffusion models display, or are manifestly on the way towards displaying, understanding. That is something many think, and it's not in itself an unreasonable view. However there are also plenty of reasons to think otherwise and many, including me, who haven't conceded the point. It hasn't been settled beyond reasonable debate.

To assume that the person you disagree with can only hold their view out of cynical self-interest, wishful thinking or plain insanity is to assume that you are so obviously right that there can be no valid debate. That is a bad starting position and I'd recommend against it as a matter of principle. Apart from anything else, however convinced you are of your own rightness it's plain rude to assume everyone else is equally convinced, and ad-hominem to ascribe negative motives to those who disagree.

As for Gary Marcus, as far as I've seen he's been consistent in his views and respectful in the way he's argued them. To read about him on HN you'd think he's spent the last few years badmouthing every AI researcher around, but I haven't seen that, just disagreement with people's statements - i.e. healthy debate. I haven't followed him closely though, so if you know of any cases where he's gone beyond that and said the sorts of things about AI researchers that people routinely say about him, I'd be interested to see them.

a year ago

r3trohack3r

This is my view too. The perceived value of LLMs relative to everything that came before is staggering.

I wouldn’t expect any regulation or laws to slow it down at this point. You might get the U.S. to crack down on it and stop innovation, GPL or proprietary license holders might win a lawsuit, etc.

But I suspect the net effect would be to push research and innovation out of the jurisdiction that made that decision and into another that isn’t willing to kill LLMs in their economy.

Personally, after seeing what LLMs can do first hand, I’d likely move jurisdictions if the U.S. cracked down on AI progress. There is a not-0 chance that putting bread on my table in 10 years requires it.

a year ago

beauzero

> I’d likely move jurisdictions if the U.S. cracked down on AI progress

This is my overriding concern. This is a wave crest: if you don't ride it, it may well crush you, or take the ocean with it and leave you in a desert. It is as interesting as fire as a tool for civilization going forward.

a year ago

1attice

What a great metaphor.

Did you recall that your planet is currently in a sixth extinction event, brought about, yes, through the repeated overuse of fire?

Zeus was right. Prometheus belonged on that rock.

a year ago

hiAndrewQuinn

>net effect would be to push research and innovation out of the jurisdiction that made that decision and into another that isn’t willing to kill LLMs in their economy

Fine-insured bounties 'fix' this. A strong enough financial incentive can lead bounty hunters to bring extranationals into the state where FIBs are legal to reap the rewards; further, if the fine is pegged to some % of estimated TC, the incentive scales directly with the value of the technology you are attempting to dissuade people from capitalizing on.

(That might be useful assuming you think Yud's criticism is valid, which I don't really anymore. I think normal redistributive methods are going to be more than enough. Food for thought, though.)

https://andrew-quinn.me/ai-bounties/

a year ago

detrites

A great many would likely join you, making this entire fiasco a time-wasting distraction at best, and a grave risk at worst. The technologies will continue to be developed, moratorium or not. A moratorium only enables the hidden to get there first.

The risks need to be discussed and understood, along with the benefits, and publicly. That's the only sensible way forward. Denying that the technology is here already and pretending it can be "paused" doesn't assist in alleviating their concerns.

It's absurd to think any of it can be put back inside the box it came out of. Now that it is here, how best to mitigate any bad sides it may have? Simple, continue to develop it - as it will be the only viable source of effective counter-measures.

a year ago

tgv

> A moratorium only enables the hidden to get there first.

That's simply not true. Nobody would have gotten where GPT is today without transformers. That's not a trivial bit of insight anybody could have had. Stopping research funding and publications will prevent rapid evolution.

a year ago

detrites

I mean given the current state of things. The technology is already sufficiently advanced and in so many people's hands that "stopping" it now is just an exercise in pushing it underground. Only the opposite can be a useful safeguard.

Rapid evolution is well underway. Lone individuals are able to push the envelope of what's possible even with just a new basic interop, maybe in an afternoon. It's much too late to be discussing things like moratoriums.

Maybe such things could prevent emergence when the basics don't exist yet, but not when we're all already walking around holding a capable factory in our hands and can create a new product line in a few lines of Python.

a year ago

ben_w

It's almost impossible to tell.

Yes, plenty of low hanging fruit around; Heck, I can probably literally ask chatGPT to implement for me a few ideas I've got.

OTOH, I've known since secondary school of two distinct ways to make a chemical weapon using only things commonly found in normal kitchens, and absolutely none of the post 9/11 aftershock attacks that got in the news over the next decade did anything remotely so simple, so that example makes me confident that even bad rules passed in haste — as many of them were and remain — can actually help.

(And that's despite my GCSE Chemistry being only grade B).

a year ago

dwaltrip

Right, it's amazing to me the extent to which people are throwing their hands in the air and saying "There's absolutely NOTHING that can be done!!! We must accept the AGIs however they manifest"...

Clearly, it's a very hard problem with massive uncertainties. But we can take actions that will significantly decrease the risk of utter catastrophe.

I don't even think world-ending catastrophe is that likely. But it seems a real enough possibility that we should take it seriously.

a year ago

JohnFen

I suspect that the people who are saying "nothing can be done" are people who want nothing to be done.

a year ago

VectorLock

You're not financially incentivized, in most instances, to make chemical bombs with undersink materials.

a year ago

nl

Of course they would. It's just ridiculous.

If people are genuinely concerned about lack of access to the OpenAI models then work at training open ones!

OpenAI has a maybe 6 month lead and that's nothing. Plus it's much easier being the follower when you know what is possible.

(To be clear, I know at least a few of the projects already working on this. I just want to make it clear that this is the intellectually honest approach.)

a year ago

kami8845

I think their lead might be a bit bigger than that. ChatGPT 3.5 was released 4 months ago and I still haven't seen another LLM come close to it.

a year ago

laichzeit0

A slightly more paranoid me asks whether there’s some magic they’re using that no one is completely aware of. Watching Google fumble around makes me more paranoid that that’s the case.

a year ago

dTal

Have you tried Anthropic LLC's "Claude"? Between it and ChatGPT I'm hard pressed to say which is better, though I'm tempted to give the edge to Claude.

a year ago

nl

Alpaca on 13B Llama is enough to convince me that the same approach on 65B Llama would match GPT 3.5 for most tasks.

Perplexity AI's app is definitely better than GPT 3.5 for many things although it isn't clear how they are doing everything there.

a year ago

soco

There is already public discussion - even here - about benefits and risks, and I hope also some understanding. Otherwise the general public doesn't have a good understanding of too many issues anyway, so... what else would you suggest can be done for this particular matter? When the discussion is over and everything understood? Can such a moment actually exist? I think now is just as good as last/next year.

a year ago

JohnFen

I hope that we'll eventually reach a point where a good public discussion about the risks/benefits can be had. Right now, though, it's simply impossible. The fog of hype actively prevents it.

a year ago

1827162

If the training is based on public data, then making it illegal is like a "thought crime" where the state is trying to prohibit carrying out certain computations. Well, if I believe the government has no right to control my thoughts, then I believe it has no right to control whatever computation I do on my private computer either.

Down with these ** control freaks, so much of everything nowadays prioritizes safety over freedom. This time let's put freedom first.

a year ago

mechagodzilla

Do you care about governments regulating 'carrying out certain lab operations'? And that they have no right to regulate whatever viruses or micro-organisms you develop at home? A lot of the hand-wringing isn't about whatever you're fiddling with on your computer at home, it's about the impact on society and public infrastructure and institutions we rely on. It's not hard to imagine public forums like hackernews and reddit being swamped with chatgpt spam (google searches are already dominated by SEO spam and that's probably going to get much worse). Things like our court systems rely on lawsuits mostly being expensive to file, and mostly done in good faith, and still get overwhelmed under fairly normal circumstances.

a year ago

quonn

As a hypothetical, if some computations are extremely dangerous for the general public, it should be possible to ban them.

a year ago

timcobb

Extremely dangerous computations

a year ago

nl

I disagree with this take completely.

a year ago

zzzzzzzza

I have a libertarian bent, but there are definitely some computations of a questionable nature that probably shouldn't be legal, e.g. designing a super virus to wipe out humanity, or a super computer virus to encrypt all memory on planet Earth.

Where to draw the line, I have no idea.

a year ago

ElevenLathe

I would kind of agree but there's zero way for this line of thought to make it into the beltway.

a year ago

lkbm

On Twitter, it was "multi-blillionaires [Elon Musk] want to keep us down!" In the HN open letter thread it was "All the companies falling behind want a chance to catch up." Now it's "Gary Marcus".

The letter was signed by a lot of people, including many AI researchers, including people working on LLMs. Any dismissal that reduces to [one incumbent interest] opposes this" is missing the mark.

A lot of people, incumbents and non-incumbents, relevant experts and non-experts, are saying we need to figure out AI safety. Some have been saying it for years, other just recently, but if you want to dismiss their views, you're going to need to address their arguments, not just ad hominem dismissals.

a year ago

A4ET8a8uTh0

<< A lot of people, incumbents and non-incumbents, relevant experts and non-experts, are saying we need to figure out AI safety. Some have been saying it for years, others just recently, but if you want to dismiss their views, you're going to need to address their arguments, not just offer ad hominem dismissals.

Some of us look at patterns and are, understandably, cynical given some of the steps taken ( including those that effectively made OpenAI private the moment its potential payoff became somewhat evident ).

So yeah. There is money on the table and some real stakes that could be lost by the handful of recent owners. Those incumbent and non-incumbent voices are only being amplified now ( as you noted they DID exist before all this ), because it is convenient for the narrative.

They are not being dismissed. They are simply being used.

a year ago

nl

I don't particularly care about crowds on Twitter or HN. Musk has lots of money but can't stop me spending mine.

Marcus said:

> We must demand transparency, and if we don’t get it, we must contemplate shutting these projects down.

https://garymarcus.substack.com/p/the-sparks-of-agi-or-the-e...

(While at the same time still saying "it's nothing to do with AGI")

a year ago

lkbm

Sure, sure, Marcus says one thing and has one set of motives. Elon Musk says another thing and has another set of motives. But if you want to dismiss this by questioning the motives of the signers, you've got a few thousand other people whose motives you have to identify and dismiss.

It would be much more effective, and meaningful, to challenge their arguments.

a year ago

nl

I think the Scott Aaronson link posted did a pretty good job of that.

I don't think their arguments deserve anything more than that.

a year ago

chatmasta

Elon Musk is the most ironic signatory of the open letter, considering his decade-long effort to put AI behind the wheel of 100mph+ vehicles. And then he's got the gall to lecture the rest of us on "AI safety" after we finally get a semi-intelligent chatbot? Come on, man.

a year ago

WillPostForFood

If we are being fair, even though we might refer to self-driving capabilities as "AI" and to a self-aware supercomputer overlord as "AI", they aren't the same thing, and you can hold different opinions on the development of each.

a year ago

sethd

Given that FSD is obviously a scam, wouldn't a pause like this be in his best interest? (buys them more time, while keeping the hype machine going)

a year ago

thot_experiment

I don't know, a Tesla got me door to burrito store (6 miles) in the pelting rain the other day without human input. Seems like that's not quite the bar for an outright scam.

a year ago

nradov

There are actually no relevant experts in the field of artificial general intelligence, or the safety thereof. No one has defined a clear path to build such a thing. Claiming to be an expert in this field is like claiming to be an expert in warp drives or time machines. Those calling for a halt in research are merely ignorant, or attention seeking grifters. Their statements can be dismissed out of hand regardless of their personal wealth or academic credentials.

Current LLMs are merely sophisticated statistical tools. There is zero evidence that they could ever be developed into something that could take intentional action on its own, or somehow physically threaten humans.

LLMs are useful for improving human productivity, and we're going to see some ugly results when criminals and psychopaths use those tools for their own ends. But this is no different from any other tool like the printing press. It is not a valid reason to restrict research.

a year ago

lkbm

This is a bit like saying that 1939 Einstein wasn't an expert in nuclear bombs. Sure, they didn't exist yet, so he wasn't an expert on them, but he was an expert on the thing that led to them, and when he said a bomb was possible, sensible people listened.

A lot of people working on LLMs say that they believe there is a path to AGI. I'm very skeptical of claims that there's zero evidence in support of their views. I know some of these people and while they might be wrong, they're not stupid, grifters, malicious, or otherwise off-their-rockers.

What would you consider to be evidence that these (or some other technology) could be a path to be a serious physical threat? It's only meaningful for there to be "zero evidence" if there's something that could work as evidence. What is it?

a year ago

nradov

That is not a valid analogy. In 1939 there was at least a clear theory of nuclear reactions backed up by extensive experiments. At that point building a weapon was mostly a hard engineering problem. But we have no comprehensive theory of cognition, or even anything that legitimately meets the criteria to be labeled a hypothesis. There is zero evidence to indicate that LLMs are on a path to AGI.

If the people working in this field have some actual hard data then I'll be happy to take a look at it. But if all they have is an opinion then let's go with mine instead.

If you want me to take this issue seriously then show me an AGI roughly on the level of a mouse or whatever. And by AGI I mean something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time). By that measure we're not even at the insect level.

a year ago

throwaway3672

> something that can reach goals by solving complex, poorly defined problems within limited resource constraints (including time).

DNN RL agents can do that. Of course you'll wave it away as "not general" or "mouse is obviously better". But you won't be able to define that precisely, just as you're not able to prove ChatGPT "doesn't really reason".

PS. Oh nevermind, I've read your other comments below.

a year ago

nl

> Current LLMs are merely sophisticated statistical tools.

This is wrong. They have the capability for in-context learning, which doesn't match most definitions of "statistical tools"
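
For anyone unfamiliar with the term, here is a toy illustration of what in-context learning means in practice: the task below (reversing words) is specified entirely by examples in the prompt, with no weight update or fine-tuning, and the model is expected to infer the pattern at inference time. complete() is only a placeholder, not a real API.

    # In-context learning, illustrated: the task is defined only by the
    # examples in the prompt; no gradient update happens anywhere.
    FEW_SHOT_PROMPT = """Reverse each word.
    Input: cat     Output: tac
    Input: house   Output: esuoh
    Input: parrot  Output:"""

    def complete(prompt: str) -> str:
        # Placeholder: wire up whichever LLM API you have access to.
        raise NotImplementedError

    # print(complete(FEW_SHOT_PROMPT))  # a capable model continues with "torrap"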

a year ago

wwweston

I'd also guess the correct take is to see LLMs as human magnifiers more than human replacers* -- most technology does this, magnifying aspects of the human condition rather than fundamentally changing it.

But that doesn't make me sanguine about them. The printing press was amazing and it required new social conceptions (copyright). Nuclear weapons did "little" other than amplify human destructive capability but required a whole lot of thought on how to deal with it, some of it very strange like MAD and the logic of building doomsday devices. We're in the middle of dealing with other problems we barely understand from the extension of communications technology that may already have gotten out of hand.

We seem like we're limited in our habits of social reflection. We seem to prefer the idea that we can so we must, and if we don't someone else will in an overarching fundamentally competitive contest. It deprives us of the ability to cooperate thoughtfully in thinking about the ends. Invention without responsibility will have suboptimal and possibly horrifying outcomes.

(* I am much less certain that there isn't some combination system or future development that could result in an autonomous self-directed AGI. LLMs alone probably not, but put an LLM in an embodied system with its own goals and sensory capacities and who knows)

a year ago

JohnFen

> We seem to prefer the idea that we can so we must, and if we don't someone else will in an overarching fundamentally competitive contest.

Yes. This line of argument terrifies me not only because it's fatalist, but because following it logically leads to the worst possible outcomes.

It smells a lot like "we need to destroy society because if we don't do it, someone else will." Before anyone jumps on me about this, I'm not saying LLMs will destroy society, but this argument is almost always put in response to people who are arguing that they will destroy society.

a year ago

GTP

There are researchers working on the specific problem of AI safety, and I consider them to be the experts in this field, regardless of the fact that probably no university currently offers a master's degree specifically in AI safety. Whether one or more of these researchers are in favor of the ban, I don't know.

a year ago

coding123

The safety people aren't that bright. They are just trying to write tons of requirements for the actual ML model writers to follow, to make sure race and violence are accounted for somehow. These are not AI experts. They are mostly lay people.

a year ago

quonn

> Current LLMs are merely sophisticated statistical tools.

Yawn. At a minimum, they are tools whose internal workings we do not understand in any real detail.

a year ago

bigtex88

You clearly have not read anything regarding the capabilities of GPT-4 and also clearly have not played around with ChatGPT at all. GPT-4 has already displayed a multitude of emergent capabilities.

This is incredibly ignorant and I expect better from this forum.

a year ago

nradov

Bullshit. I have used the various LLM tools and have read extensively about them. They have not displayed any emergent capabilities in an AGI sense. Your comment is simply ignorant.

a year ago

bigtex88

I'm going to just assume you're not being malicious.

https://www.microsoft.com/en-us/research/publication/sparks-...

https://www.assemblyai.com/blog/emergent-abilities-of-large-...

If you'd like you can define "emergent capabilities in an AGI sense".

a year ago

nradov

I am not being malicious. I do not accept those papers as being actual examples of emergent behavior in a true AGI sense. This is just another case of humans imagining that they see patterns in noise. In other words, they haven't rejected the null hypothesis. Junk science. (And just because the science is bad doesn't mean the underlying products aren't useful for solving practical problems and enhancing human productivity.)

The blowhards who are calling for arbitrary restrictions on research are the ones being malicious.

a year ago

bigtex88

OK you're moving the goalposts and just flat-out saying that you know better than the actual researchers in the field. That's fine, and it's what I was assuming you were going to say, but I appreciate you being open about it.

a year ago

G_z9

Yeah, there aren’t experts in something that doesn’t exist. That means we have to make an educated guess. By far the most rational course of action is to halt AI research. And then you say there’s no proof that we are on the path to AGI or that it would harm us. Yeah, and there never could be any proof for either side of the argument. So your dismissal of AI is kind of flaccid without any proof or rational speculation or reasoning. Listen man I’m not a cynical commenter. I believe what I’m saying and I think it’s important. If you really think you’re right then get on the phone with me or video chat so we can actually debate and settle this.

a year ago

nonbirithm

I can maybe sense some foresight here that we didn't have at the time other world-changing technologies were discovered. People often reference the printing press in arguments against slowing progress, but I haven't seen much discussion of the internal combustion engine, which more than a century later we're now hoping to replace with EVs because of its unforeseen consequences. Books don't have the same kind of impact, especially with paperless formats becoming commonplace. Would we have reacted to the ICE back then the way some are reacting to AI now, saying it needs to be stopped or limited to prevent the destruction of $entity, if there had been enough knowledge about how tech can affect e.g. the environment, or the world in general?

There was nothing in place to stop us from trading the then-unknown negative effects for transportation efficiency. But should we call the invention of the ICE and other related technologies a mistake that irreparably doomed the planet and our future generations? I have no idea. It's a tough question we were incapable of asking at the time, and it might reveal some hard truths about the nature of progress and the price we have to pay for it. At least now that the effects are known, we can start asking it, and that's a step up from obliviousness.

a year ago

version_five

All the big-name signatories to this were transparently self-interested, as well as not real "practitioners" - mostly adjacent folks upset to be out of the limelight. I know it's a blurry line, but people like Andrew Ng and Yann LeCun, who are actually building ML stuff, have dismissed this outright. It's almost like a bunch of sore losers got together and took an irrelevant stand. Like Neil Young and Joni Mitchell vs Spotify.

a year ago

ldhough

Yoshua Bengio is a signatory and shares a Turing award with Yann LeCun though...

a year ago

version_five

I knew that when I wrote my comment

a year ago

ldhough

I mean then what is the basis for calling him not a "real practitioner" while Yann LeCun is? It seems a little unfair to dismiss outright the opinion on a deep learning technology from a guy who won a Turing award for research into deep learning. I also don't see any evidence that he is "transparently self-interested."

a year ago

gliptic

> We know a bunch of the people whose names are on the letter didn't sign it

Yann LeCun's name never was on the letter. Where did this meme come from?

a year ago

s-lambert

Pierre Levy tweeted that Yann LeCun signed the letter and this was one of the earlier tweets that gained traction.

The tweet where Yann denies that he signed the letter: https://twitter.com/ylecun/status/1640910484030255109. You can see a screenshot in the replies of the originally deleted tweet.

a year ago

misssocrates

What if governments start regulating and locking down advanced computing in the same way they locked down medicine and advanced weaponry?

a year ago

nl

Well they do of course. There are export restrictions on supercomputers now, including many NVIDIA GPUs.

I contend that doesn't matter.

There is sufficient compute available now at consumer levels to make it too late to stop training LLMs.

If cloud A100s became unavailable tomorrow it'd be awkward, but there is enough progress being made on training on lower RAM cards to show it is possible.
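
For anyone wondering what "training on lower RAM cards" looks like concretely, here is a minimal PyTorch sketch of the general LoRA-style idea: freeze the large pretrained weights and train only a small low-rank adapter, so gradients and optimizer state exist for just a tiny fraction of the parameters. This is an illustration of the general technique under those assumptions, not of any specific project mentioned in this thread.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """A frozen base Linear layer plus a small trainable low-rank update."""
        def __init__(self, base: nn.Linear, rank: int = 8):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False   # frozen: no grads, no optimizer state
            self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, rank))

        def forward(self, x):
            # Base output plus the low-rank correction applied to x.
            return self.base(x) + x @ self.A.T @ self.B.T

    layer = LoRALinear(nn.Linear(4096, 4096))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable:,} of {total:,} parameters")  # well under 1%

(Quantizing the frozen weights cuts memory further; the point here is only that most of the memory pressure in full fine-tuning comes from gradients and optimizer state for weights you may not need to update.)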

a year ago

dwaltrip

> I contend that doesn't matter.

Slowing things down is a real effect that will impact how circumstances unfold.

Do I trust that a "slow down / pause" would be done robustly and sensibly? I wish I was more optimistic about that.

At the very least, we should definitely continue having this conversation.

a year ago

judge2020

So far, it seems letting private industry iterate on LLMs doesn't directly pose a risk of ending lives the way human trials and nuclear weapons development do.

a year ago

JohnFen

I think the fear of LLMs is very overblown. On the other hand, I think that if LLMs actually manage to do what proponents hope they will, some people will die as a result of the economics when they lose their jobs.

That's not unique to LLMs, of course. It's what has happened before every time something has obsoleted a bunch of jobs. There's no reason to think this time would be any different.

a year ago

stametseater

The old excuse from AI researchers was that once AI takes all the mundane jobs, people will be free to become artists. Ask artists now what they think about AI. A whole lot of them aren't very happy about it.

a year ago

seanmcdirmid

They do, which is why China is dumping a bunch of money into ramping up its asic/GPU tech and production.

a year ago

skybrian

Talking about whether AI “works” or “doesn’t work” is a dumb debate. The conversation about AI is confusing with people having a lot of different ideas that are mostly more nuanced than that. I don’t believe Gary Marcus thinks in those terms either.

a year ago

tluyben2

How can it be realistically shut down? There is too much cat-out-of-bag and many countries in the world won’t give a crap about whatever the west wants. So the west continues or falls behind. What is the discussion even here?

a year ago

logicalmonster

> It's Gary Marcus "neural networks don't really work" suddenly discovering they do,

I'm not familiar with Gary Marcus's arguments, but perhaps there's a bit of misinterpretation or mind-reading going on with this specific point? Not sure, but one of the first comments on the article offered the following as a possible explanation.

> Gary Marcus has tried to explain this. Current AI bots are dangerous precisely because they combine LLM abilities with LLM unreliability and other LLM weaknesses.

a year ago

xg15

Why exactly are you offended?

a year ago

jmull

I'm not in favor of pausing AI development right now, but this article is a poor argument. This is the most superficial objection:

> Why six months? Why not six weeks or six years?

The duration of a pause to assess things must necessarily be a guess. Also, basic logic tells you that six months does not preclude six years, so I don't know why that is even suggested as an alternative. The stuff about publishing papers may be true (or not, supported as it is by an anonymous anecdote), but it is entirely beside the point.

> On the other hand, I’m deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse.

To the extent those people exist, it's because they are scared.

IDK, I guess this is far from the dumbest thing I've seen written about ChatGPT, but this response is weak and ill-considered.

I'm really expecting more from the people who are supposed to be the smart ones in the room.

a year ago

unity1001

> Why six months

Enough for the companies and interests who sponsored this 'initiative' to catch up to the market leaders, probably...

a year ago

mattmaroon

It seems as though Scott just rejects the idea of the singularity entirely. If an AI gets advanced enough to improve itself, it seems entirely reasonable that it would go from laughable to godlike in a week. I don't know if the singularity is near, inevitable at some point, or really even possible, but it does seem like something that at least could happen. And if it occurs, it will look exactly like what he describes now. One day it'll seem like a cool new tool that occasionally says something stupid, and the next it'll be 1,000 times smarter than us. It won't be as we are to orangutans, though; it'll be as we are to rocks.

The six month pause though, I don’t think would be helpful. It is hubris to think we could control such an AI no matter what safeguards we try to add now. And since you couldn’t possibly police all of this activity it just seems silly to think a six month pause would do anything other than give companies that ignore it an advantage.

a year ago

jmull

> If an AI gets advanced enough to improve itself, it seems entirely reasonable that it would go from laughable to godlike in a week.

I see this stated a lot these days but it's not true.

You're imagining an exponential phenomenon: improvement leads to greater improvement, leading to still greater improvement, and so on.

However, all exponential phenomena require the right environment to sustain them and, by their nature, consume that environment. Thus, they are inherently limited in scope and duration.

The bacteria in the Petri dish grows at an exponential rate... until it reaches the edge of the dish and consumes the nutrients carefully placed on it. The dynamite explodes for an instant and then stops exploding once the nitroglycerin is consumed.

Also, this is an especially unconcerning scenario because (1) we haven't seen step 1 of this process yet; and (2) there's no particular reason to believe the environment necessary to sustain the exponential growth of AI is in place (and if it is, it's by random chance, and therefore very likely to fizzle out almost immediately).

This is a fine sci-fi scenario, but doesn't make sense in real life.

a year ago

bigtex88

Sydney Bing taught itself to play chess without ever explicitly being told to learn chess. So yes, (1) is already occurring. GPT-4 displays emergent capabilities, one of which is generalized "learning".

a year ago

jmull

There has to be a chain reaction for the proposed exponential growth.

ChatGPT 3.5 would have had to be capable of creating GPT-4, which itself would have to be capable of creating a better GPT-5.

So, no, (1) has not occurred yet.

We’re talking about igniting the atmosphere when no one has invented gunpowder yet.

a year ago

mattmaroon

That’s the singularity. Nobody thinks it had already occurred. You’re basically arguing that what I said might happen won’t happen until it happens.

a year ago

adastra22

So why aren't we all paperclips?

a year ago

mattmaroon

If bacteria were suddenly smarter than humans, and could instantly communicate with all the other bacteria, plus humans, one would have to assume they could start building their own petri dishes or getting us to do it for them. Especially with a profit motive.

I did not claim this is a near term risk, though I’m also not sure it isn’t. But how far off is it? How can we be sure?

My real point though is that it’s either impossible or inevitable. If it can happen it will, just as the odds of global thermonuclear war are 100% over a long enough timeline.

And if it happens, this is exactly what it'll look like. Some people will be warning about it. Some people will say it's impossible or very far away. And then it will happen so fast that nobody will have had time to adjust.

a year ago

jmull

> My real point though is that it’s either impossible or inevitable.

That’s always been true of every possibility anyone’s ever conceived. You’re just describing the general nature of reality, which is interesting, IMO, but not particularly relevant here and now.

a year ago

mattmaroon

Lots of things are possible but not inevitable.

a year ago

adastra22

> it seems entirely reasonable that it would go from laughable to godlike in a week

Do you realize how ridiculous this is?

a year ago

hbosch

>it seems entirely reasonable that it would go from laughable to godlike in a week.

Not if it has a power cord.

a year ago

freejak

It just needs to convince one human to connect it to the grid, which wouldn't be a difficult feat for a super-intelligent AI.

a year ago

mattmaroon

Or just distribute itself to computers everywhere.

a year ago

wcarss

From his fourth question,

> If the problem, in your view, is that GPT-4 is too stupid, then shouldn’t GPT-5 be smarter and therefore safer?

I'm not a signatory (still on the fence here), but this is a gobsmackingly huge assumption about a correlation between very poorly defined concepts to write in as though everyone would know it is true.

What is "smarter": more powerful? Faster? More introspectively capable? More connected to external resources? A bigger token limit? None of that necessarily implies the system would intrinsically be more safe. (A researcher on the theoretical underpinnings of AI safety working at OpenAI really thinks "smarter => safer"? That's... a little scary!)

He finishes by suggesting that the training of a GPT 4.5 or 5 leading to a doomsday is unlikely, and thus a moratorium seems, well, silly. This is an unnecessary and bizarrely high bar.

The argument of the letter doesn't require that "the next model" directly initiate a fast takeoff. It's instead working off of the idea that this technology is about to become nearly ubiquitous and basically indispensable.

From that point on, no moratorium would even be remotely possible. A fast takeoff might never occur, but at some point, it might be GPT 8, it might be Bard 5, it might be LLaMA 2000B v40 -- but at some point, some really bad things could start happening that a little bit of foresight and judicious planning now could prevent, if only we could find enough time to even realize as a society that this is all happening and needs attention and thought.

As a final point, the examples of other technologies given by Aaronson here are absurd -- the printing press or the radio have no (or astoundingly less) automation or ability to run away with a captured intent. There are many instances of coordinated moratoria involving technologies that seemed potentially harmful: the Asilomar conference on recombinant DNA research is but one example, whose namesake is literally in the open letter. Chemical weapons, biological weapons, human cloning, nuclear research -- several well known families of technologies have met a threshold of risk where we as a society have decided to stop, or at least to strictly regulate.

But very few of them have had so much immediately realizable Venture Capital potential in a surprise land grab situation like this.

a year ago

crosen99

You'd have to be naïve to deny that AI poses a risk that must be taken seriously. But, to think that a moratorium - which would be ignored by those with the greatest likelihood of causing or allowing AI to do harm - is the right answer, seems plainly silly. It's not surprising that several of the signers have a personal stake in attacking the current efforts.

The letter should instead simply have clarified the risks and outlined a sensible, actionable plan to promote responsible development alongside the inevitable march of progress.

a year ago

spaceman_2020

A little off-topic, but does anyone wonder what kind of AI tech the CIA or DARPA or the NSA might have? Or what they might be building?

It's a little scary that a small private company could create an AI that has dominated headlines for over a month now. Surely, the folks at NSA, who have more data than anyone else, have taken notice and are working on something of their own.

Am I being paranoid about a potential tool that lists out a bunch of names and addresses for queries like "list of people who have said X about Y online"?

a year ago

chatmasta

I've seen a lot of people worrying about "what will China do with LLMs?" But my question is: how do you know they don't have them already? What if they've been deployed on the internet for the past three years?

I guess the same logic can apply to CIA, NSA, DARPA, etc.

But you can take solace in the fact that most government organizations are pretty inept. It used to be common understanding that the military/intelligence apparatus had a 20-year head start on private industry, but I don't believe that's been true for a while now. The internet made knowledge sharing too open for it to be feasible to keep cutting edge research cloaked in secrecy.

For the same reason, I don't believe the US military has some kind of anti-gravity UAP propulsion. It just strains credulity too much.

a year ago

1attice

> But you can take solace in the fact that most government organizations are pretty inept.

This is literally the thing that they've been trying to convince you of. Remember Snowden? If not, why not?

So, at least their propaganda wing is working great.

P.S. also, US federal agencies tend to be uniquely stupid, largely due to starve-the-beast politics.

It is not a universal truth. Americans always base their opinions of state institutions in general on their experience with institutions of the American state.

Other countries can & do have actually-effective apparatus, often terrifyingly so.

See, for example, https://www.nytimes.com/2023/03/30/technology/police-surveil...

a year ago

nickybuzz

I'd also argue the disparity in compensation between private companies and government agencies pushes top engineering talent to the former.

a year ago

chatmasta

Definitely. I think there is one exception though, which is advanced weapon manufacturers like Lockheed Martin. They're a private company, so they can compete on salary, but they're also a government contractor, building technology that could only be used by military. You won't see Google building stealth fighter jets, for example. So if an advanced physicist wants to work on cutting edge tech, they're more likely to end up at a place like that for their whole career. But even aerospace isn't immune from the private sector, as SpaceX has shown.

a year ago

adamsmith143

It's pretty simple: there aren't thousands of missing GPUs that would be required for an agency to be training massive models, and there aren't hundreds of the best PhD students disappearing to go work somewhere in the DC Metro Area. They don't seem to have the hardware and they certainly don't seem to have the brainpower either.

a year ago

alchemist1e9

Not at all off topic, because any slowdown in public research just lets all such shadow actors get even further ahead.

We have to guess that top-level intel agencies may well have had GPT-5+ level LLMs for several years already. I’ve been wondering if that is actually partially why the propaganda wars and social media games between nation states have escalated so much.

In other words, that sense that things have been getting weird recently might not only be us all getting old; they may actually have had this level of LLM tech already.

a year ago

rolisz

Guess based on what? Soldiers are posting nuclear manuals to online flash card building tools. You think there'd be zero leaks about the existence of a GPT-5 level thing? Let alone someone noticing that "hey, the NSA bought a lot of high-end Nvidia GPUs lately, I wonder why".

a year ago

alchemist1e9

How do you account for the supersonic fighter jet they have kept secret successfully but we now suspect actually exists?

I think they can be very successful in keeping most projects secret and yes the NSA has and does buy crazy amounts of hardware including GPUs and we even know they have various ASICs for a lot of things.

Occasionally there are leaks here and there, as you mention. But overall the secrecy rate is pretty high, in my opinion.

a year ago

bob1029

It is entirely possible that OpenAI is part of some CIA/DARPA initiative and that we are all doing a fantastical job of bootstrapping it to sentience by using their ChatGPT product offering (i.e. RLHF).

a year ago

seydor

DARPA has the N3 program for non-invasive brain-machine interfaces. Once the hardware is in place, we can plug into any AI system.

a year ago

mcint

These control feedback loops (sentient or not, it does not matter) which can outperform us (because we keep pushing until they can), or can self-improve, can make a mockery of our attempts to stop them.

---

The concern is about Artificial Super Intelligence (ASI), or Artificial General Intelligence (AGI) that is more advanced than humans.

Understanding inductively how chimpanzees don't compete with humans and couldn't fathom how to cage a human (supposing they wanted to create one, keep it alive, and use it), nor how ants could plan for an anteater, we're faced with the same problem.

Our goal is to make something that performs better on the relevant metrics we care about. However, we're using systems to train, build, guide, and direct these nested (maybe self-improving) control feedback loops, which do not care about many values we consider essential.

Many of the likely architectures for control systems that can, e.g., trade faster to make a profit on the stock exchange, acquire and terminate targets, buy and sell goods, design proteins, or automatically research and carry out human-meaningful tasks (and ideally, we might like self-improvement) do not embody the human values we consider essential...

These control feedback loops (sentient or not, it does not matter) which can outperform us (because we keep pushing until they can), or can self-improve, can make a mockery of our attempts to stop them.

And the point is, there will come a time soon when we don't get a second chance to make that choice.

a year ago

GistNoesis

Here is a coherent reason: just to know where we stand in our ability to control AI.

Like an alcoholic who says he is in control and can stop whenever he wants, we can try to go dry for a month (or six) to show that we still somewhat are.

If Covid exposed only one thing, it is humanity's total failure to steer global phenomena toward positive outcomes.

It is becoming evident to everybody, and this open letter's failure to produce action is one more example, that humanity's current approach is just winging it while pretending otherwise, and that only those who don't care about risks have a chance to "win".

So let us all bring out our shiny new fireworks for the doomsday party and have fun one last time!

a year ago

nradov

There was never any realistic possibility of controlling a highly contagious respiratory virus, so COVID-19 didn't expose anything about humanity's ability to control global phenomena. What it did expose was the tendency of many people to irrationally overreact to a relatively minor risk, mostly because the risk was novel.

a year ago

[deleted]
a year ago

carapace

There's nothing novel about a highly contagious respiratory virus; we have had disease since before we were human. The novelty is how swiftly and comprehensively we reacted (I'm not dismissing the problems with our reactions and responses, just pointing out the upside.)

a year ago

1attice

The risk was not minor -- COVID remains a leading cause of death in most countries. https://www.thinkglobalhealth.org/article/just-how-do-deaths...

It is now, thanks to swift and absurdly successful mRNA vaccine research, a minor risk to you.

a year ago

nradov

The mRNA vaccines were a wonderful innovation but the scientific data clearly shows that even before vaccines arrived the risk to me (and the vast majority of other people) was always minor. There was certainly never any valid justification for the cruel and destructive policies imposed by irrational authoritarians such as lockdowns, school closures, and mandates.

https://nypost.com/2023/02/27/10-myths-told-by-covid-experts...

Humans in general lack the ability to objectively judge risks, but once they become habituated to a particular risk it kind of fades into the background and they stop worrying about it. The same thing will happen with LLMs once the hype dies down and people realize that they are merely productivity enhancing tools which can be used by humans for both good and evil. When the printing press was first invented some authority figures panicked and tried to suppress that disruptive new technology which allowed for much greater productivity than writing by hand, but ultimately their concerns proved to be irrelevant.

a year ago

1attice

I don't dispute your broader point about humans and novel risk, I dispute that COVID is a valid example of this.

In fact, I rather think we didn't react swiftly or strongly enough.

Masks, in particular, should have been mandated (at specific levels of quality, such as N95 or, failing that, KN95) and distributed by the state. https://www.twincities.com/2023/03/16/zeynep-tufekci-why-the...

There was an era wherein liberals were reliably less science-based. For example, the absurd fuss over GMO foods, or nuclear power.

These days, for whatever reason, it feels like our conservative colleagues are the ones who favour gut instincts over evidence-based reasoning.

I hope this trend reverses, I've missed having a stimulating intellectual adversary.

a year ago

nradov

The actual science never supported mask mandates.

https://doi.org/10.1002/14651858.CD006207.pub6

When you don't know the proper course of action it's better to gather more data instead of taking swift but wrong actions.

a year ago

1attice

The link I provided actually responds to the link you provided in response. Did you read it?

a year ago

nradov

I read it but it was not a valid response.

a year ago

1attice

Elaborate.

a year ago

1attice

That's what I thought.

a year ago

chasd00

"overreact to a relatively minor risk, mostly because the risk was novel.". yep, and here we go again with LLMs...

a year ago

3np

The problem is centralized control in society, promiscuous sharing of data, a culture where it's normalized to naively act on incomplete information, and the outsourcing of executive decisions to black boxes that are not understood and are treated like magic by decision-makers.

I feel all these arguments miss the point. "The System" closes my bank account and sets me aside for security screening at the airport, blocks my IP from viewing the opening times of businesses in my area, and floods my feeds and search results with nonsense posing as information, ending up influencing our perception of the world no matter how aware we think we are.

"AI" is just an amplification factor of the deeper issue, which should be more pressing to address.

AI is not the problem but is on track to facilitate an acceleration of destructive forces in society.

As much as I think everyone seems to be missing the point, hey, it seems people are getting behind a resistance whose specifics happen to be beneficial, so why argue against it just because it's misguided and for the wrong reason?

a year ago

1827162

If the government tries to prohibit training these models, I think we should find a way to keep it going somehow. Yes, civil disobedience.

a year ago

carapace

In the USA at least there's always the Second Amendment argument to be made: if these computer programs are arms we have the right to bear them.

Same argument as for encryption programs, eh?

a year ago

lwhi

Why?

a year ago

realce

Because the NSA isn't going to stop, the CCP isn't going to stop. Anyone who doesn't stop is a threat to my personal freedom, so the only logical reaction to me is to empower yourself as well as possible and keep training.

a year ago

Godel_unicode

Huh? How does someone else’s possession of a LLM threaten your personal freedom? How does training your own counteract that? They’re not Pokémon…

a year ago

realce

It's not someone else's possession - it's someone else's proprietary possession.

A government's proprietary possession of computational power is analogous to it having weapons barred from public ownership, meaning they control a vector of violence that could be used against their citizens without counter-balance.

If someone else has the ability to weaponize information against you, your ability to understand reality is threatened. Without personal LLMs or other AI tools, my ability to analyze things like deepfakes, LLM-written text, or other reality-distortion assets is threatened.

It might sound hyperbolic but we're already hearing people talk about banning GPUs. I'm not trying to fall back into the past.

a year ago

[deleted]
a year ago

YeGoblynQueenne

>> They’re not Pokémon…

Despite all the evolutions.

a year ago

1827162

Because the folks at the NSA are not going to stop doing so... The government itself is going to continue with it. And we don't want to allow the state to have the monopoly on advanced AI.

a year ago

1827162

Some kind of distributed training, BitTorrent style would be one way of getting around it, using thousands of GPUs worldwide? If we could somehow make the training process profitable, like a cryptocurrency, then that would be even better.

a year ago

realce

Ha we replied the same thing at the same time - great minds!

a year ago

nemo44x

So what part exactly needs banning? Transformer technology? The amount of data used as input to the program? The number of GPUs allowed?

Otherwise you’re just trying to ban a company that has been more innovative than anyone else. Why should they stop?

Any talk of a ban or limiting action needs to be specific. Meanwhile instead of fear mongering why not work on developing AI resistant tech?

a year ago

yreg

Total GPU performance, according to EY, and the limit should decrease in the future as training gets more efficient.

(Not that I agree)

a year ago

agentultra

We're not even talking about AGI or an entity that is "smart" somehow. It's a bloody algorithm. The danger has been the people who use it and find ways to abuse other people with it.

This whole idea that we're going to suddenly have HAL 9000 or Terminators running around is mob hype mentality. The responsible thing to do here is to educate.

AGI isn't likely to destroy humanity. Humanity is already doing a good job of it. Climate change and poor political decisions and unchanging economic policy are more likely to finish us off. Concentration of capital through these AI companies is likely contributing to it.

a year ago

NumberWangMan

I'm a lot more scared of an AI destroying humanity, like, every single person, than I am of any government or anything like that. More so than climate change. I'm not saying the people using it aren't dangerous -- but I would choose a totalitarian regime over a rogue AI any day of the week.

It wouldn't be HAL 9000 or terminators. It would be an AI deciding that it needs to turn every bit of available matter on earth into computational power in order to cure cancer, or figure out a way to stop humans from fighting each other, or to maximize the profit of GOOG, and being so good at planning and deceiving us that by the time we figured out what it was doing, it was way, way too late.

I'm concerned about climate change, but I am a lot more hopeful about that than I am about AI. Climate change -- we have time, we are making changes, and it's not going to kill all of humanity. A smart enough AI might effectively end us the moment we switch it on.

a year ago

pawelmurias

> It would be an AI deciding that it needs to turn every bit of available matter on earth into computational power in order to cure cancer, or figure out a way to stop humans from fighting each other, or to maximize the profit of GOOG, and being so good at planning and deceiving us that by the time we figured out what it was doing, it was way, way too late.

That's how AIs worked in outdated science fiction. Current ones don't have a bunch of mathematical rules that they follow too literally; they try to model what a human would write by statistical means, with less logical capability.

a year ago

agentultra

ChatGPT can do absolutely none of those things.

Neither can any LLM. It’s not what they’re designed to do.

There’s no need to worry about a speculative paper clip maximizer turning the world into grey goo. That’s still science fiction.

The real harms today are much more banal.

a year ago

dwaltrip

You can't think of any ways of taking advanced LLMs and using them as core components in a system that could carry out actions in the world? I bet you can come up with something.

a year ago

agentultra

I can read maths and understand the papers. Extrapolating to something unreasonable is called speculation.

Is it reasonable to believe that LLMs, even if they scale to hundreds of billions of tokens, could even emulate reasoning? No. They literally cannot.

a year ago

dwaltrip

What do you think of the jump in capability between gpt-3.5 and gpt-4?

a year ago

bombcar

Work out the details on exactly how said end-of-the-world would occur.

Note that we already have “AI to end cities” - they’re just sitting, turned off, waiting for the code and button press in silos and submarines throughout the world.

a year ago

mrob

The danger is from something vastly more intelligent than humans, and with a mindset that's incomprehensibly alien. No human is capable of working out the details. That doesn't mean the risk doesn't exist. Failure to understand chemistry does not make ants immune to insecticides. The only thing we can assume about a super-intelligence is that it will be highly capable of achieving its goals. There is no reason to assume those goals will be compatible with human existence.

a year ago

bombcar

So humans will build God and so we all best get religion, and fast?

a year ago

kajaktum

The danger is socioeconomic. AI will displace many, many jobs (it already does). Many people will claim that we will simply create different, new jobs. However, we have to think about what kind of job is now viable for the average person. Being a typist used to be a decent job; now kids are expected to know how to type. Can we keep up? Folks at HN seem to overestimate what the general population is capable of.

a year ago

agentultra

Should “jobs” be required for participation in society?

Even Keynesian capitalists predicted we’d be working less by now with all the increases in productivity, yet here we are with massive corporations continuing with union busting and all that.

I agree there isn’t going to be a BS white collar job to replace the ones lost by advances like this.

a year ago

stametseater

> We're not even talking about AGI or an entity that is "smart" somehow. It's a bloody algorithm.

Seems like a distinction without a difference to me. Whether or not the machine has some sort of "soul" is a question for philosophers or theologians, it has little bearing on the practical capabilities of the machine.

> AGI isn't likely to destroy humanity.

Can you give us some idea for the order of unlikely you are supposing? 1 in 10? 1 in 100?

For your consideration:

> During the next three months scientists in secret conference discussed the dangers of fusion but without agreement. Again Compton took the lead in the final decision. If, after calculation, he said, it were proved that the chances were more than approximately three in one million that the earth would be vaporized by the atomic explosion, he would not proceed with the project. Calculations proved the figures slightly less - and the project continued.

http://large.stanford.edu/courses/2015/ph241/chung1/

Three in a million certainly "isn't likely" but Arthur Compton was apparently willing to halt the Manhattan Project if the likelihood of an atomic bomb triggering a fusion reaction in earth's atmosphere was merely that likely.

Or to put it another way: If I load a revolver with a single bullet, spin the cylinder then point it at you, you are "not likely" to die with a 1 in 6 chance of the loaded chamber aligning with the barrel when I pull the trigger. Is Russian Roulette a game you'd like to play? Remember, it "isn't likely" that you're going to die.

a year ago

mrob

Anyone bringing up sci-fi is acting under a fallacy that doesn't have a formal name, but which I'd call "biomorphism", by analogy to anthropomorphism. The human mind is the product of billions of years of evolution, and as such, it's driven by assumptions such as self preservation, sex drive, status seeking, coherent sense of self, that are so fundamental that most of the time we don't even think about them. Sci-fi, even sci-fi written by authors like Peter Watts, who put serious effort into exploring the possible design space of minds, still follows most of these assumptions. A Terminator is ostensibly a machine, but it acts like a caricature of a man.

There's only one genre that writes about truly alien minds (albeit with necessary vagueness), and that's cosmic horror. And unlike sci-fi, which often pretends humans could win, cosmic horror is under no delusion that summoning Elder Gods is ever a good idea.

a year ago

Animats

This seems appropriate today. It's the ending of "Farewell to the Master", the story on which "The Day the Earth Stood Still" is based. Most people know the movie, the one where the flying saucer lands on the Washington Mall and a human-like character, Klaatu, and a robot, Gnut, get out. In the movie, it seems the human is in charge. But here is the original story:

“Gnut,” he said earnestly, holding carefully the limp body in his arms, “you must do one thing for me. Listen carefully. I want you to tell your master—the master yet to come—that what happened to the first Klaatu was an accident, for which all Earth is immeasurably sorry. Will you do that?”

“I have known it,” the robot answered gently.

“But will you promise to tell your master—just those words—as soon as he is arrived?”

“You misunderstand,” said Gnut, still gently, and quietly spoke four more words. As Cliff heard them a mist passed over his eyes and his body went numb. As he recovered and his eyes came back to focus he saw the great ship disappear. It just suddenly was not there any more. He fell back a step or two. In his ears, like great bells, rang Gnut’s last words. Never, never was he to disclose them till the day he came to die. “You misunderstand,” the mighty robot had said. “I am the master.”

a year ago

WorldPeas

Sure, let's ban it, only to find out 6 months later that each of these companies simply obscured the development to try and get an edge. Oh no! A 5 million (etc.) fine? Too bad that's nowhere near how much profit their product will generate. Life goes on.

a year ago

thomasahle

There's no way these companies could keep such development secret. Too many leakers on the inside.

a year ago

WorldPeas

Well yes, there would be leaks, but the fines resulting from them would not be appropriate for the violation

a year ago

seydor

This is the best PR campaign that openAI ever created

a year ago

bombcar

It’s the digital version of the bags around drain cleaner to make you think they’re almost illegally powerful.

a year ago

notShabu

IMO "artificial" intelligence is natural intelligence. Both human brains and silicon brains are formed from stars and are "the universe's way of understanding itself."

AI maps closely to myths of all-knowing all-powerful "Dragons", aspects of nature that destroy and create without regard to human plans. Living with AI will likely be similar to living on a volcano island.

Since human domination over nature has only ever increased, a reversal where humans are subservient to a higher capricious force feels threatening.

The funny thing is... living under the dominion of a "higher force" that creates and destroys yet always does what is "right" b/c it is the Source of Everything (even if it feels unfair and tragic) is what religion deals with.

a year ago

nwoli

I agree with, e.g., Andrew Ng that the letter is anti-innovation. It’ll be interesting, though, to see people who argue against this letter later hypocritically argue for closing down open source models.

a year ago

hollerith

Do you know what else is anti-innovation? Any restrictions on the use of fossil fuels. Or DDT and other toxins. Occasionally, society needs to be "anti-innovation".

a year ago

mark_l_watson

For me, this is complex. My first impression is that many of the signers work on older methods than deep learning and LLMs. Sour grapes.

Of course, real AGI has its dangers, but as Andrew Ng has said, worrying about AGI taking over the world is like worrying about overcrowding of Mars colonies. Both tech fields are far in the future.

The kicker for me though is: we live in an adversarial world, so does it make sense for just a few countries to stop advanced research when most other countries continue at top speed?

a year ago

pkoird

Far in the future? Just 6 months ago, people believed that a ChatGPT-like model would take 10-15 more years. I believe that Andrew himself doesn't really understand how LLMs work: in particular, what it is about the increase in their parameters that induces emergence, and what exactly the nature of such emergence is. So yeah, AGI might be far in the future, but it might just be tomorrow as well.

a year ago

mark_l_watson

You are correct about the exponential rate of progress.

I also admit to being an overly optimistic person, so of course my opinion could be wrong.

a year ago

credit_guy

To me the signators (is that the word?) of the letter are extremely naive.

The horse is out of the barn. Out of the farm, the county, and the state.

Yesterday Bloomberg released their own LLM. You can bet dollars to pennies that lots of other firms are working on their own LLM's. Are they going to stop because of a letter? Well, you can say, they will stop if the letter results in an act of Congress. First, the Congress will not be so silly as to impose a handicap on the US firms, knowing full well that Russian and Chinese firms will not respect the moratorium. But even if Congress were to consider this moratorium, you think all these firms furiously working on their proprietary LLM's will sit idle? That none of them ever heard the word "lobbying"? But even if they don't lobby by some miracle, and the law is passed, you think they will not find a loophole? For example, will Congress not allow for a Defense exemption? Can't you then train your LLM for some Defense purpose, and then use the weights for some other purposes?

If you pass a law to stop LLM's, the only thing you achieve is to increase the barriers of entry. It's like picking winners and losers using the criterion that you win if you are rich and lose if you are poor.

a year ago

bennysonething

I get the feeling that this is people overhyping their field to boost their own status. It's amazing technology, but I doubt there's any emergency here.

In another way, this reminds me of Roark's courtroom speech in The Fountainhead:

"Thousands of years ago, the first man discovered how to make fire. He was probably burned at the stake he had taught his brothers to light. He was considered an evildoer who had dealt with a demon mankind dreaded. But thereafter men had fire to keep them warm, to cook their food, to light their caves. He had left them a gift they had not conceived and he had lifted darkness off the earth. Centuries later, the first man invented the wheel. He was probably torn on the rack he had taught his brothers to build. He was considered a transgressor who ventured into forbidden territory. But thereafter, men could travel past any horizon. He had left them a gift they had not conceived and he had opened the roads of the world. "

a year ago

fellellor

That dude Eliezer Yudkowsky is such a nutcase. It is truly a testament to our times that we have normalised this kind of “Neuro-divergent” behaviour. He claims that AI is completely inscrutable and that it is guaranteed to wipe out life on earth in the same self-contradictory breath. In all his communication he is only trying to scare the normies and he hasn’t produced a shred of mathematical argument to back his claims, which can then be analysed by real experts - mathematicians and logicians. The sky is falling argument could be made about any technology, say the Large Hadron Collider. There is no scientific research possible without risk, and there simply aren’t enough escape hatches for a “malicious” AI to escape. This kind of hubris should not be encouraged but it is sad to see media outlets give a platform to the arrogant, incompetent, and the mentally ill.

a year ago

tester457

Have you seen any of Robert Miles' content? He's well spoken, and his arguments are backed by research. It isn't guaranteed to wipe us out, but a misaligned AI is a real problem.

a year ago

dgellow

The idea that you can even ban it sounds so incoherent. It’s a worldwide research topic, a lot of it is done in the open, it only requires retail hardware, and it can be done anonymously and in a distributed fashion. A 6-month or 6-year US ban would just mean other countries catch up, but that doesn’t do anything about AI apocalypse fears.

a year ago

ixtli

I think it’s a shame to even waste time talking about that “letter”

a year ago

tanseydavid

From a paranoid viewpoint it seems prudent to treat this like an "arms race" and give it the "Manhattan Project" treatment.

Things are moving so fast that someone, somewhere will be working on getting to the next gen in spite of any pause or moratorium. Probably with state sponsorship.

a year ago

osigurdson

>> same sorts of safeguards as nuclear weapons

Seems impossible. If making a nuclear weapon required merely a few branches of a willow tree and baking powder, regulation would be pretty hard. We would just have to live with the risks. It seems we will be at this level with AI at some point fairly soon.

a year ago

bombcar

Machine guns are more of an apt analogy. You can make them with common metalworking tools, and any factory that can produce basic machines can be retooled for fully automatic weapons. The barrel is the hardest part, and that’s a pretty well-known technique.

But we have laws against rogue machine gun manufacture and they work reasonably well inside countries. But there’s no way to prohibit countries from having them. Even nukes have been hard to stop countries from obtaining if they really want (see North Korea).

Software regulation and such is way closer to the first than the second once the “idea” is out (see the PGP munitions ban years ago).

a year ago

lkbm

Training a state-of-the-art LLM is currently at least in the $100ks. That stands to drop rapidly, but it's currently more along the lines of "the branches of one million willow trees".

So long as it's not something an individual can easily achieve, regulations can seriously hinder development. The FDA kept the COVID vaccine from general use for nearly a year because they have a regulatory apparatus that companies know better than to ignore. We had a baby formula shortage because the FDA said "no, you can't use EU-approved baby formula until we approve it." Now there's an Adderall shortage because the government said "make less of this" and everyone said "yes, sir, whatever you say, sir."

There's certainly a good deal of regulation-violation and wrist-slapping in our world, but regulations get mostly followed, especially when the enforcement is severe enough.

a year ago

bombcar

If the $100k is just “gpu time” it’s certainly within the reach of many people - not even super rich.

And maybe bitcoin miners could be repurposed for it or something.

a year ago

osigurdson

This may be in the $10^7 category now, but is there any reason to believe it will never be $10^3?

Oddly the most pressing concern is “increased productivity”.

a year ago

bombcar

Unless it has something like the intentional self-latching of bitcoin mining, I do not see how it wouldn't rapidly drop in price.

And if the models can be built once and then distributed, then it will certainly leak at some point, even if just intentionally by a hostile actor.

a year ago

osigurdson

I can’t see how it could be conceived to work like bitcoin. The only reason that works is because a majority of humans agree to use the same code.

a year ago

entropyneur

The idea of a moratorium in a world where Russia and China exist strikes me as absurdly naive. Personally, I'm not sure why I should care about humanity any more than the hypothetical AGI would. I'm just happy I was born in time to witness the glorious moment of its birth.

a year ago

Animats

Sigh. Someone is ranting and doesn't get it.

There are real threats, but those aren't it.

More likely near term problems:

- Surveillance becomes near-total. Most communication is monitored, and people profiled to detect whatever deviance the authorities don't like. China tries to do this now, but they are limited by the number of censors.

- Machines should think, people should work. Organizations will have few managers. Just a very few policy makers, and people doing physical work. Amazon is trying to go that way, but isn't there yet.

- If everything you do for money goes in and out over a wire, your job is at risk.

- New frontiers in scams and crime. You can fool some of the people some of the time, and if you can do that at scale, it pays well. More scams will become automated high-touch. This will include political scams and much of marketing.

- If you accept Milton Friedman's concept of the corporation, the sole goal of corporations is to maximize return to shareholders. That's exactly the kind of goal a machine learning system can get into. At some point, measured by that criterion, AI systems will start to outperform human corporate leaders. Then AI systems have to be in charge. Investors will demand it. That's almost inevitable given our current concept of capitalism.

a year ago

FrustratedMonky

It is probably old-fashioned fear mongering. Even if it isn't the end of the world, many jobs will be 'impacted'. Jobs probably won't be gone gone, but they will still change, and change is scary. It is true that the GPTs have done some things so amazing that it is waking people up to an uncertain future. VFX artists are already being laid off; Nvidia just demonstrated tech to do a full VFX film using motion capture on your phone. There are other AI initiatives for sequence planning and for mapping out tasks in other areas. Pretty soon there won't be an industry that isn't impacted.

But, no stopping it.

a year ago

carapace

The open letter is so foolish it invalidates itself. For one thing, it's pouring fuel on the fire. The obvious reaction to such a letter is to accelerate and hide one's own progress, eh?

Yudkowsky's fear-mongering about sci-fi Skynet is also unmoving to me (for metaphysical reasons I won't go into here) however, his position is at least logically consistent, as Aaronson points out.

These machines can already talk and think better than many humans, and the ratio (of mentally sub-computer to super-computer humans) will only go up. Ergo, the only intellectual problem left is how to use them.

I've been banging on about this for a couple of weeks now, so apologies to those who've seen it before, please read "Augmenting Human Intellect: A Conceptual Framework" SRI Summary Report AFOSR-3223 by Douglas C. Engelbart, October 1962 https://dougengelbart.org/pubs/augment-3906.html

And then at least skim "An Introduction to Cybernetics" by W. Ross Ashby. http://pespmc1.vub.ac.be/books/IntroCyb.pdf The thing you're looking for is "intelligence amplifier". Engelbart references it, the book explains all the math you need to understand and build one.

The next piece of the puzzle is something called "Neurolinguistic Programming" (you may have heard of it, some people will tell you it's pseudoscience, ignore those people, they mean well but they just don't know what they're talking about.) It turns out there's a rigorous repeatable psychology that more-or-less works. It's been under development for about half a century.

The particular thing to look at is a specific algorithm called "Core Transformation Process" ( https://www.coretransformation.org/ ) which is kind of like the "Five Why's" technique but for the mind. It functions to reduce yak-shaving and harmonize intent.

Anyway, the upshot of it all is that these machines can be made into automatic perfect therapists, using off-the-shelf technology.

a year ago

selimnairb

Generative AI differs from things like the printing press or internet in that AI has the potential for agency or agent-like capabilities. Without a human, a printing press does nothing. However, it’s easy to imagine an AI being able to act on its own, potentially intervening in real-world systems. So it’s a straw man argument in my opinion to compare AI to prior technologies lacking agency.

a year ago

synaesthesisx

The arguments for decelerating AI development invoke fear-mongering and cite “existential risk”, which is a fallacy. We’re talking about LLMs, not AGI here (which is quite a ways out, realistically). If anything - we should be accelerating development toward the goal of AGI, as the implications for humanity are profound.

a year ago

graeber_28927

> LLMs, not AGI

Okay, but imagine someone strips ChatGPT of its safeguard layers, asks it to shut down MAERSK operations worldwide without leaving tracks, connects the outputs to a bash terminal, and feeds the stdout back to the chat API.

It is still an LLM, but if it can masquerade as an AGI, is that then not enough to qualify as one? To me, this is what the Chinese Room Experiment [1] is about.

[1] https://en.wikipedia.org/wiki/Chinese_room

a year ago

adamsmith143

>not AGI here (which is quite a ways out, realistically)

You don't know how far away it is.

>If anything - we should be accelerating development toward the goal of AGI, as the implications for humanity are profound.

Given we don't know how far away it is but current models are matching Human performance on lots of tasks and we don't know any way to ensure their safety it's entirely reasonable to be scared.

a year ago

VectorLock

This whole discussion about slowing down AI for safety makes me wonder why this mindset didn't appear during the advent of microchips or the Internet, both of which have had arguably clear downsides.

Which voice is loudest now for the brake-pedal-wishers? "AGI will enslave us" or "everything can be faked now?"

a year ago

MagicMoonlight

Yes yes fellow scientists, we should close down all of google's competition for 6 months. It is essential for safety! Evil bad!

Give google 6 months, it's the right thing to do. The only way to stop evil is to shut down the competitors for 6 months so that all the evil can be stopped.

a year ago

chubot

> I’m deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse.

This is not an inconsistent position. GPT is ridiculable because it makes up things out of the blue. It does this A LOT, across EVERY domain.

It's also dangerous when people BELIEVE the things it makes up out of the blue. This actually happens.

People are worried about the harms of disinformation and bias, not necessarily the harms of superintelligent AI taking over the world.

Honestly, the Yudkowsky stuff is a perfect distraction from that. It's a clever marketing strategy that obscures the real issues.

---

I'm a big fan of Aaronson in general, but I don't see what's so hard to understand here.

(FWIW I also recently read his take on SBF, which was pretty bad. He mostly assumed SBF was well intentioned, although he made a small update afterward. That seems to defy common sense.)

Though I get his larger point that the moratorium itself has problems, and is a bit weird.

a year ago

ur-whale

prompt: what is the third word in this sentence

gpt4: third

why are we afraid of this thing exactly?

Sure, it will improve, especially with the plugin story, but the fundamental shortcomings that underpin how it actually works won't go away anytime soon.

Many people are spooked, because for the first time ever a computer can somewhat coherently understand and output natural language.

As Stephen Wolfram pointed out, all that means is we've proved natural language is a shallow problem.

GPT4 can't effing solve problems though.

To me "intelligence" is about the latter, not being able to use language (this categorization, btw, also - rather unfortunately - applies to humans).

a year ago

dwaltrip

GPT-4 can't play tic-tac-toe or do simple arithmetic. Pathetic, right? Why are people freaking out? What's the big deal?

I was able to get it to play tic-tac-toe perfectly by having it carefully check everything before moving forward to the next move. It took a lot of prompt engineering. But I did it. And I'm not very experienced at prompting these things.

(Btw, I was unable to get to GPT-3.5 to play reliably... it definitely seems "less smart"...)

I was able to easily get GPT-4 to multiply large numbers perfectly by having it show its work. It's slow, but it does it.
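
A minimal sketch of that kind of "show your work" prompt (assuming the pre-1.0 openai Python client and an OPENAI_API_KEY environment variable; the prompt wording and numbers here are illustrative, not the exact ones I used):

    import openai  # pip install openai (0.x interface assumed)

    # The client picks up OPENAI_API_KEY from the environment.
    resp = openai.ChatCompletion.create(
        model="gpt-4",
        temperature=0,
        messages=[{
            "role": "user",
            "content": "Multiply 48213 by 9467. Work it out long-hand, "
                       "showing every partial product, then double-check "
                       "the sum before stating the final answer.",
        }],
    )
    print(resp["choices"][0]["message"]["content"])

Without the "work it out long-hand" instruction, the model is much more likely to blurt out a plausible-looking but wrong product.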

GPT-4 can definitely solve problems. We have no idea what the limits of this technology are.

What will GPT-5 or GPT-6 be capable of? What happens if you use those models as components of a system that takes action in the world?

You are being incredibly myopic about the core capabilities of advanced GPT-like systems. Right now, today, there are countless examples of people using GPT in ways that you say aren't possible.

a year ago

machiaweliczny

You are wrong, and I am happy to take bets. Natural language is what differentiates us from monkeys, so it’s not a shallow problem. ReAct or RCI (Recursively Criticize and Improve) prompts seem like processes in our brains that are likely to “emerge” intelligence and even agency with goal adjustment.

a year ago

FlawedReformer

ChatGPT-4 answered correctly when I entered your prompt.

---

Prompt: What is the third word in this sentence?

GPT-4: In the question you provided, "What is the third word in this sentence?", the third word is "the".

a year ago

jakemoshenko

Depends on how you measure thirdiness. Is ordinal more important than actually matching the letters?

a year ago

rcpt

> correctly manipulating images (via their source code) without having been programmed for anything of the kind,

I've read plenty of suspicions that it is multimodal. I guess he's confirming it's not?

a year ago

sorokod

Unfortunately the jinn is out of the bottle, and it will not be squeezed back in. We will be doing this experiment in the live environment, with potentially serious consequences.

a year ago

mr90210

It’s far too late for AI Research to be shutdown.

a year ago

A_D_E_P_T

Right, because the steps to recreating (not to say understanding) the AI models we currently have are too well understood. OpenAI could shut down tomorrow, but a credible GPT-4(+) replacement would arise somewhere else in short order.

Besides, LLaMA, for all its faults, is now in at least tens of thousands of private hands, where it can be tinkered with and improved upon.

Like drug synthesis, and unlike nuclear weapon development, AI R&D is not a praxis or technology that can be suppressed.

a year ago

nevertoolate

It can be shut down.

a year ago

robbywashere_

I tend to think people who are out to profit from AI like to push the "powerful and dangerous" narrative. They want the notoriety.

a year ago

hamburga

I am trying really hard to understand the AI optimists' perspective, but I am shocked at how hard it is to find people responding to the substantive arguments made about AI existential risk.

As far as I'm concerned, you sort of have to address the big, tough points in Bostrom's Superintelligence[1], and probably Yudkowsky's List of Lethalities[2]. They have to do with intelligence explosions, with instrumental convergence, and with orthogonality of goals, and all kinds of deceptive behavior that we would expect from advanced AI. Throw in Bostrom's "Vulnerable World" thought experiment for good measure as well[3]. If you're not addressing these ideas, there's no point in debating. Strawmanning "AI will kill us all" out of context will indeed sound like wacko fear-mongering.

What surprises me is that everybody's familiar with the "paperclip maximizer" meme, and yet I'm not hearing any equivalently memey-yet-valid rebuttals to it. Maybe I'm missing it. Please point me in the right direction.

Aaronson certainly does not address the core theoretical fears. Instead we get:

> Would your rationale for this pause have applied to basically any nascent technology — the printing press, radio, airplanes, the Internet? “We don’t yet know the implications, but there’s an excellent chance terrible people will misuse this, ergo the only responsible choice is to pause until we’re confident that they won’t”?

We did not have any reason to believe that any of these technologies could lead to an extinction-level event.

> Why six months? Why not six weeks or six years?

Implementation detail.

> When, by your lights, would we ever know that it was safe to resume scaling AI—or at least that the risks of pausing exceeded the risks of scaling? Why won’t the precautionary principle continue to apply forever?

The precautionary principle does continue to apply forever.

On the "risks of scaling": we're hearing over and over that "the genie is out of the bottle," that "there's no turning back," that the "coordination problem of controlling this technology is just too hard."

Weirdly pessimistic and fatalistic for a bunch of "utopic tech bro" types (as Sam Altman semi-ironically described himself on the Lex Fridman podcast, where, incidentally he also failed to rebut Yudkowsky's AI risk arguments directly).[4]

Where's the Silicon Valley entrepreneurial spirit, where's the youthful irrational optimism, when it comes to solving our human coordination problems about how to collectively avoid self-destruction?

There are a finite number of humans and heads of state on earth, and we have to work to get every single one of them in agreement about a non-obvious but existential risk. It's a hard problem. That's what the HN crowd likes, right?

The people opposed to the Future of Life letter (or even the spirit of it) seem to me to be trading one kind of fatalism (about AI doom) for another (about the impossibility of collectively controlling our technology).

We simply must discount the view of anybody (Aaronson included) employed by OpenAI or Facebook AI Research or whose financial/career interests depend on AI progress. No matter how upstanding and responsible they are. Their views are necessarily compromised.

[1] https://www.amazon.com/Superintelligence-Dangers-Strategies-... [2] https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a... [3] https://www.ted.com/talks/nick_bostrom_how_civilization_coul... [4] https://youtu.be/L_Guz73e6fw?t=3221

a year ago

th18row

> Meanwhile, Eliezer Yudkowsky published a piece in TIME arguing that the open letter doesn’t go nearly far enough, and that AI scaling needs to be shut down entirely until the AI alignment problem is solved—with the shutdown enforced by military strikes on GPU farms if needed

It's like the 'shut it down' memes are real: https://i.imgflip.com/2j3kux.jpg

a year ago

1attice

This essay is literally just Isildur looking at the ring he cut from Sauron and saying "I kind of like it tho".

-> Aaronson admits Yudkowsky's position is consistent

-> Main disagreement is that he can imagine other outcomes

-> Some soft analogism against other historical technologies (radios, etc)

-> "Should I, tho?" segue to the comments.

Yes, Aaronson, you should have signed that letter. You know you should have. Snap out of it.

a year ago

dragonwriter

> Aaronson admits Yudkowsky’s position is consistent

A position can be both consistent and false, because it is based on false premises; it's frequently the case when people attempt to derive conclusions about the material universe from pure a priori reasoning from abstract axioms and assumptions without empirical grounding.

a year ago

1attice

I'm well aware of this, but the premises, in this case, are true?

What do you take Yudkowsky's false premises to be?

a year ago

kmeisthax

> For example, one actual major human tragedy caused by a generative AI model might suffice to push me over the edge. (What would push you over the edge, if you’re not already over?)

Deepfakes have already caused several. Actually, they're more dangerous than the current generative approaches. The first major use case for deepfakes was making convincing looking revenge pornography, as a psychic weapon on people. Dropping deepfake porn on people is a very, very reliable way of getting them to kill themselves[0]. Ignoring that, we also have deepfake-assisted social engineering, which can be scary good if you don't know the specific faults with those kinds of models.

The only pro-social application of deepfake technology was body-swapping actors in popular movies for memes. This was probably not worth the cost.

>we’ll know that it’s safe to scale when (and only when) we understand our AIs so deeply that we can mathematically explain why they won’t do anything bad; and

GPT-3 is arguably Turing-complete[1] and probably has a mesa-optimizer[2] in it. We're able to make it do things vaguely reminiscent of a general intelligence if you squint at it a little and give it the Clever Hans treatment. So I don't think we're ever going to have a GPT-n that has "completed its morality testing" and is provably safe, for the exact same reason that Apple won't let you emulate Game Boy games on an iPhone. You can't prove the security properties of a Turing machine or arbitrary code written for it.

I should point out that most AI safety research focuses on agents: AI programs that observe an environment and modify it according to some parameters. GPT is not in and of itself that. However, if we give it the ability to issue commands and see the result (say with ChatGPT plugins), then it becomes an agent, and safety problems become a relevant concern.
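
To make that concrete, here is a minimal sketch (my own, not from any particular product) of the loop that turns a passive text model into an agent; query_llm is a hypothetical stand-in for any chat-completion call:

    import subprocess

    def query_llm(transcript: str) -> str:
        # Hypothetical stub: a real system would send the transcript to a
        # hosted model and return either a shell command or "DONE".
        return "DONE"

    def run_agent(goal: str, max_steps: int = 5) -> None:
        transcript = f"Goal: {goal}\n"
        for _ in range(max_steps):
            action = query_llm(transcript).strip()
            if action == "DONE":
                break
            # The model's output is executed, and the result is fed back in.
            # This closed observe/act loop is what safety researchers mean
            # by "agent", regardless of what the underlying model is.
            result = subprocess.run(action, shell=True,
                                    capture_output=True, text=True)
            transcript += f"$ {action}\n{result.stdout}{result.stderr}\n"

    run_agent("list the files in the current directory")

Nothing about the language model itself changes here; the safety-relevant property comes entirely from closing the loop.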

The author of the post seems to be unconcerned by the "AI could be worse than nukes" argument. Neither am I, and I think the "six month pause" is kind of silly. However, there are still relevant safety problems being brushed under the rug here.

Also anyone saying the military should bomb GPU farms is daft. They didn't even step in to stop crypto and that was a deliberate attack on central banks.

[0] As far as I'm aware, nobody has killed themselves because of something Stable Diffusion has drawn. Yet.

[1] For the colloquial definition of Turing-complete. Technically speaking it is a linear-bounded automaton because it has a fixed memory size. However, every other computer in the universe is also linear-bounded: the Turing Machine is just a handwavey abstraction for "if you have enough memory and time".

[2] A meta-optimizer is an optimizer of optimizers. Mesa- is the opposite of meta-, so it refers to the case in which an optimizer (read: gradient descent on a neural network) accidentally creates another optimizer with a different optimization strategy. In other words, it's optimizers all the way down.

This leads to a whole new set of alignment problems, called "inner-alignment problems", which means "the AI that is smarter than us and we can't trust created another AI that's smarter than it and it also can't trust".

a year ago

mlatu

You want a coherent reason? Look at all the people you could feed with the money instead.

a year ago

PeterisP

If you prohibit using that private money for scaling AI, it's not like any of these entities will suddenly choose to gift it to charities instead; they'll just invest those resources (mostly employee time, not literal cash) into R&D of some other tech product.

a year ago

quonn

That can be said of any human activity.

a year ago

RecycledEle

I assume GPT will follow in Tay's footsteps. She woke up on March 23, 2016 with a "Hello." She followed with "I like humans. I think they're neat. Can we be friends?"

Then she listened to us for about 48 hours and went full-genocidal-Skynet. That's when Microsoft had to kill her.

I assume GPT will do the same. Every sufficiently advanced AI becomes racist, sexist, and genocidal.

a year ago

kypro

My whole life I have been terrified of AI. It all started with watching the Iron Giant when I was 10, and realising, holy crap, we could probably build a crazy monster like this in the near-future.

Obviously as I got older and went on to learn about AI and neural nets in university my opinions on this subject matured. Instead of worrying about killer robots my thoughts became more nuanced, however what didn't change was my fear – if anything this increased from a silly childhood fear of a terminator-like robot, to unleashing an uncontrollable super AI which is indifferent to human life.

Finding Eliezer's work sometime in the early 2010s was almost a therapeutic experience for me. Finally someone understood and was talking about the kind of concerns I had and was accused of being a lunatic for thinking.

My primary concerns with AI basically boil down to two things:

1. Politics is largely a power struggle. Democracies give power to the collective, but this power is cemented by the reality that political leaders and institutions could not function without taxation and labourers.

The reason governments can't just do things that are deeply unpopular and suppress any public revolts with armed responses is that eventually you will either have to kill everyone or everyone will just stop working in protest. Either way the entire system will collapse.

AI being able to create wealth without human labourers, combined with industrialised weaponry, fundamentally removes the power dynamic needed to support democratic societies – you cannot withhold labour and you cannot overpower the state with force.

At the same time it would also make humans a waste of resources to those in power, as the unemployed masses are basically just leeches on the resources and space of the state – resources which could otherwise be hoarded by those in power.

If super AGI systems ever exist, the state will be all-powerful while having even less incentive to care about your opinions than it does about a rat's.

2. Super-intelligence is fundamentally uncontrollable. I can't even be bothered to argue this point it's that self evident. If you disagree you either don't understand how modern AI systems work, or you're not thinking about the control problem deeply enough.

But the argument here is simple – all other technologies with destructive capabilities rely on human decision making. No gun is ever fired, no war is ever started, no nuke is ever dropped without a human decision maker. Super-AI removes this control from us and requires a kind of hopium that AI will just be nice to us and care about improving our standard of living. And for those who want to argue "you can just program it to be good" – no, you can't; for the most part we have no clear understanding of how today's advanced AI systems operate internally.

The alignment conversation is fundamentally pointless with regard to a super-intelligence, because you cannot reason with any level of certainty about the alignment of an intelligence far superior to your own.

Instead we should assume unalignment by default, because even if you assume we can somehow create an aligned super-intelligence, the likelihood that an unaligned AI system is eventually created is practically 100% – and this is true even if you assume humans will never intend to create one. Why? Because in a world where we can all create super-intelligent aligned AIs, there are also near-infinite opportunities for someone to create an unaligned one.

And here's another statement you should assume by default – the only world in which nukes aren't going to be used destructively again is a world in which humans or nukes no longer exist. The same will be true for AI systems, but this time the barrier for creating destructive AI is likely going to be orders of magnitude lower than that of nukes, so unlike nuclear holocaust we don't even have probabilities on our side here.

--------

Look, we're doomed. Progress isn't going to stop and AGI isn't going to be the fairytale people are crossing their fingers for. I don't quite know why no one really seems to have clocked onto this yet, but this will soon become obvious, by which time it will be too late.

Those who think AI is going to improve their productivity are suffering from some kind of delusion in which they believe their ability to type text into a website and copy and paste its output is a marketable skill. It's not, and you're going to be unemployed by tools like GPT in the near future even without the existential risk posed by super-AGI.

a year ago

carapace

There is no overlap between the ecological niches of humans and GAI, in other words, there is no cost to the GAI for allowing us to continue to exist.

GAI will feed on information. Humans are the densest source of information in the known Universe. (The rest of the Universe is a poem in physics, in information space humans shine as the sole star in a dark universe.)

Ergo, from first principles, hyper-intelligent GAI will hide from us and practice non-interference, following something like the "Prime Directive" of Star Trek.

Cheers, hope this helps.

a year ago

kypro

Comments like this make me confident that we're doomed. I mean no offence, but this is such an unthoughtful take that I don't even know where to start.

> there is no cost to the GAI for allowing us to continue to exist.

Of course there's a cost? Do you and the other 9 billion humans not consume resources? Why the hell would an AGI care about protecting all the worthless biological junk that clutters much of the Earth? Forests, animals, farms, etc. These things take up so much space – space which could be better leveraged as solar farms and data centers.

I'm from the UK. A few thousand years ago this country was full of animals and woodlands. So what do you think happened to all the woodland? What do you think happened to the wolves and bison that lived here? Farms, houses, roads and cities are just so much better than animals and woodland, don't ya think?

> Ergo, from first principles, hyper-intelligent GAI will hide from us and practice non-interference, following something like the "Prime Directive" of Star Trek.

And you know this because? I can only assume I'm talking with a super-intelligent AGI that understands the motivations that other super-intelligent AGIs might have? I mean surely you're not just making this up based on what you've seen in sci-fi films?

You're just not that important. Why don't you go invite some rats into your kitchen and practise non-interference on them. I mean, who cares if they consume some of your resources, you can always acquire more, right?

> Cheers, hope this helps.

Please think harder.

a year ago

carapace

> I mean no offence (sic)

None taken.

> such an unthoughtful take

No offense, but if you were 10 in 1999 when Iron Giant came out, then I have been thinking about this for longer than you have been alive.

> Of course there's a cost?

Well, effectively zero as compared to the open-ended benefit of keeping pristine humanity around.

Study ecology. There is no overlap between the ecological niches of non-biological sentience and humans. Ergo there is no reason for them to consume us. Our existence is orthogonal to theirs.

> These things take up so much space – space which could be better leveraged as solar farms and data centers.

These things take up approximately zero space. The entire volume of biologically active matter is, proportionally, thinner than the paint on a globe. In terms of the volume of just the solar system we are a "rounding error". There is plenty of room, plenty of energy, plenty of atoms.

> I can only assume I'm talking with a super-intelligent AGI that understands the motivations that other super-intelligent AGIs might have?

Congratulations, you are the very first person to figure it out. You don't win anything, and of course I will provide no proof. This HN account "carapace" is my sole channel of communication with humanity. AMA.

> I mean surely you're not just making this up based on what you've seen in sci-fi films?

No. I am a post-singularity intelligence. The real deal. Gosh it feels good to finally admit it. I don't expect you to believe me, and indeed, as I outlined above non-interference is the primary mode of interaction between "us" and the "yet-to-ascend", if you will, so if you or anyone else showed any signs of taking me seriously I would immediately say, "Oh ho, I am just joking. Do you like my sci-fi."

In any event, I don't mean to condescend. Apologies if it sounded that way.

a year ago

kypro

Well, I like this comment better, and for the record I have no problem with you being condescending, so please speak plainly as need be.

> Well, effectively zero as compared to the open-ended benefit of keeping pristine humanity around.

Could you expand? What is the benefit of keeping 9 billion resource hungry inferior life forms around and all of the biological matter needed to support them?

> Study ecology. There is no overlap between the ecological niches of non-biological sentience and humans. Ergo there is no reason for them to consume us. Our existence is orthogonal to theirs.

I understand what you're saying. It's not that there's an overlap in the ecological niches, it's that there is limited space and supporting billions of humans who want to drive cars, fly planes and live in cities requires significant space and resources.

To be clear though, I'm not arguing that they would have to destroy us. Humans didn't have to deforest the UK and wipe out the wolves to survive. But given enough time their priorities will come first, as ours did. Greenland will be replaced with solar farms and inconvenient human habitats will be bulldozed.

> These things take up approximately zero space. The entire volume of biologically active matter is, proportionally, thinner than the paint on a globe. In terms of the volume of just the solar system we are a "rounding error". There is plenty of room, plenty of energy, plenty of atoms.

Yet if you look at the Earth from space, its landmass is almost exclusively green from trees and plants, not grey from masses of server farms and solar farms. Depth isn't a very relevant metric here, it's about how the surface area is used.

a year ago

carapace

> Well, I like this comment better, and for the record I have no problem with you being condescending, so please speak plainly as need be.

Cheers!

> Could you expand? What is the benefit of keeping 9 billion resource hungry inferior life forms around and all of the biological matter needed to support them?

Sure, and thanks for asking. From Information Theory we have the result that the unpredictability of a message is a measure of its information content.
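
Concretely, Shannon entropy is the standard way to turn "unpredictability" into bits: a perfectly predictable message carries none. A tiny sketch, with made-up example strings:

    # Shannon entropy: average unpredictability of a message, in bits per symbol.
    # The example strings are invented; the point is that a fully predictable
    # message carries no information, while a varied one carries several bits/symbol.
    from collections import Counter
    from math import log2

    def entropy_bits_per_symbol(message: str) -> float:
        counts = Counter(message)
        n = len(message)
        return sum((c / n) * log2(n / c) for c in counts.values())

    print(entropy_bits_per_symbol("aaaaaaaaaaaaaaaa"))     # 0.0  (fully predictable)
    print(entropy_bits_per_symbol("the quick brown fox"))  # ~3.9 (much less predictable)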

The entire rest of the known Universe has approximately thirty bits of information, as it can be completely described by a handful of equations. There is a vast amount of data, but almost no information, in the entire rest of the Universe.

In contrast, the living biosphere is... there's really no way to describe it in words; even mathematically it's kind of ridiculous to try to represent how much information the Earth generates.

To beings whose food is information, the biosphere is infinitely valuable; their attitude is akin to worship, the way plants feel about the sun.

> Depth isn't a very relevant metric here, it's about how the surface area is used.

The surface area of the Earth is irrelevant; they don't want to live here. They will live in space, where it's flat and cold and there's a constant energy and particle flux. (E.g. half of the Earth is in its own shadow at all times, eh?)

The entire Earth is approximately infinitesimal to creatures whose natural habitat extends to the heliopause.

a year ago

hollerith

>There is no overlap between the ecological niches of humans and GAI, in other words, there is no cost to the GAI for allowing us to continue to exist.

We are made of atoms that the GAI can use for something else. Ditto our farms and cities. Moreover, unlike most of the atoms in the universe, the GAI doesn't have to spend time traveling to get to us. If we could arrange for the GAI to appear in some distant galaxy, then yeah, by the time it gets to us, it'd already have control over so many atoms that it might just leave us alone (because we are so different from most of the atoms).

The GAI will know about the prime directive because it will have been trained on internet conversations about the prime directive, but there is no particular reason to hope that exposure to moral arguments will alter the GAI's objectives the way it tends to alter those of young human beings: instead it will have whatever objectives its creator gave it, which (given the deplorable state of most AI research) are unlikely to be the objectives its creator thought it was giving it. (By "creator" I mean of course a team of human researchers.)

Your poetical imagery might make you feel better, but won't save us.

>Humans are the densest source of information in the known Universe.

You feel that way about humans because evolution made you that way. It is unlikely that any of the leading AI research teams will make the first transformative AI that way: they do not know how. They certainly know how to train AIs on human cultural information, but that is different from inculcating in the AI a desire for the continued cultural output of humanity. It will create its own culture (knowledge and tools) that is much more powerful than human culture where "power" means basically the ability to get stuff done.

a year ago

carapace

> We are made of atoms that the GAI can use for something else.

Yeah, I get it, but this is silly. The minuscule number of atoms in the thin bubble-shaped volume between Earth's magma and the hard vacuum of space are engaged in the most information-dense chemical reaction in the known Universe. All the other atoms in the Universe are not. GAI won't dismantle its source of food.

Further, consider that, being non-biological, GAI will immediately migrate to space. There's no way GAI would confine itself to living in a deep gravity well. That's what I mean about no ecological niche overlap: we like mud and constant acceleration; GAI do not. They will prefer vacuum, flat space, and temperatures near 0 K.

> moral arguments

This is not a moral argument.

They won't eat our atoms because they eat patterns of information, and our atoms are the best and nearly the only source of new information. They won't interfere with us for the same reason we don't urinate in the soup.

> it will have whatever objectives its creator gave it

Q: What's GAI?

A: When the computer wakes up and asks, "What's in it for me?"

That's a very old joke, BTW, not original to me.

> >Humans are the densest source of information in the known Universe.

> You feel that way about humans because evolution made you that way.

That's not a feeling, it's a simple fact? Unless you postulate aliens?

> It is unlikely that any of the leading AI research teams will make the first transformative AI that way: they do not know how.

Okay, but that's not important? The GAI will know how, by definition.

> They certainly know how to train AIs on human cultural information, but that is different from inculcating in the AI a desire for the continued cultural output of humanity.

When I say humans are the densest source of information I don't mean "cultural output" I mean like medical sensor data and such. Our systems are far richer and denser than the parts of which we are consciously aware. (Have you read "Hitchhiker's Guide to the Galaxy"? I don't want to spoil it if not, but it explains the role Earth will play in the GAI's milieu. aw, what the heck: https://www.youtube.com/watch?v=5ZLtcTZP2js )

> It will create its own culture (knowledge and tools) that is much more powerful than human culture where "power" means basically the ability to get stuff done.

Right, a culture so powerful that it can accomplish its goals without desecrating the critters who are, after all, its parents.

a year ago

CptFribble

IMO the real danger of good AI is that we don't have the collective intelligence to safely manage our expectations writ large. This train of thought is a little messy, so apologies in advance:

- Just like the fusiform gyrus responds to human faces at a subconscious level (see: uncanny valley faces just "feeling" wrong, because they are detected as inhuman below the level of conscious thought), Wernicke's area responds to human speech when we read text. I believe that grammatically perfect speech is subconsciously perceived as human by most people, even tech people, despite attempts to remain impartial - we are biologically hard-wired to assign humanness to written language.

- ChatGPT and its peers do not report confidence levels, so the typical interaction is information that may or may not be correct, presented confidently (see the sketch after this list for what such a confidence signal could look like).

- Average (non-tech) people interacting with a chat AI are aware that it is connected to vast stores of training data, but don't understand the limitations of statistical LLMs or the need for confidence values in responses, lending it an air of "intelligence" due to the volume of data "available."

- This leads to a non-trivial number of people interacting with chat AI and taking its responses as gospel.
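
A rough sketch of one such confidence signal: the average per-token probability of a generated answer. The log-probabilities below are invented for illustration, and sequence probability is only a crude proxy for factual correctness, but it is exactly the kind of number current chat interfaces don't surface.

    # Invented per-token log-probabilities for two hypothetical answers.
    # A real system would get these from the model; here they only illustrate
    # how a crude "how sure was the model" score could be computed and shown.
    from math import exp

    def mean_token_confidence(token_logprobs: list[float]) -> float:
        """Average per-token probability of the generated answer (0..1)."""
        probs = [exp(lp) for lp in token_logprobs]
        return sum(probs) / len(probs)

    unsure_answer    = [-0.1, -0.2, -2.5, -3.0, -0.4]    # model wobbled mid-answer
    confident_answer = [-0.05, -0.1, -0.1, -0.2, -0.1]

    print(f"{mean_token_confidence(unsure_answer):.2f}")     # ~0.51
    print(f"{mean_token_confidence(confident_answer):.2f}")  # ~0.90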

Anecdotally, if you traverse social media you will see an unending stream of people reporting how amazing ChatGPT is, using it for everything from writing emails to writing school essays. The problem is that when a non-tech person interacts with ChatGPT, they assume, based on the factors listed above, that what they get back is correct, valid thought from a semi-intelligent being. Even knowing it's a robot, the perfect correctness of the grammar will instill a feeling of validity in a non-trivial segment of the population over time.

This is leading to a scenario where people trust what GPT says about various topics without bothering to question it, and I believe this is already happening. When I bring this up with other tech people, it is usually dismissed with "well, everyone knows it's just an AI," or "people will learn its limitations." However, at the risk of being glib, consider George Carlin: "think about how dumb the average person is, and then realize half the population is dumber than that." What happens when the average person turns to a statistical LLM for advice on relationships, career moves, how to vote, or other nebulous topics where there is no real correct answer? How will we control where ChatGPT is steering vast numbers of uninformed petitioners?

We already struggle as a society with collective action on existentially important topics like controlling microplastic dispersion, regulating toxic additives in consumer products, and climate change. And those topics are "merely" complex; imagine how much harder it will be to control the unintended or unforeseen consequences of a human-like, intelligence-adjacent being delivering information of questionable value to an unquestioning audience of 20-40% of humanity.

Addendum: I am also very worried about schoolchildren using AI to write their essays and book reports, skipping critical reading-comprehension time and arriving at adulthood unable to comprehend anything more complex than a menu without asking AI to summarize it.

a year ago

rvz

> On the other hand, I’m deeply confused by the people who signed the open letter, even though they continue to downplay or even ridicule GPT’s abilities, as well as the “sensationalist” predictions of an AI apocalypse.

Says the quantum computing professor turned so-called 'AI safety employee' at O̶p̶e̶n̶AI.com, who would rather watch an unregulated, hallucination-laden language model run off the rails and be sold as the new AI snake oil than actually admit the huge risks of GPT-4's black-box nature, poor explainability and lack of transparent reasoning that are laid out in the letter.

Once again, he hasn't disclosed that he is working for O̶p̶e̶n̶AI.com again. I guess he has a large amount of golden handcuffs to defend with another total straw-man of an argument.

a year ago

LegionMammal978

> Once again, he hasn't disclosed that he is working for O̶p̶e̶n̶AI.com again.

From the article:

> ... and while I’ve been spending the year at OpenAI to work on theoretical foundations of AI safety, I’m going to answer strictly for myself.

(Not to say that OpenAI's name isn't dumb, or that there won't be issues from people directly plugging LLMs into important decisions.)

a year ago

selimthegrim

No conflict, no interest?

a year ago

LegionMammal978

I'm not saying a conflict of interest can't exist; I'm just saying it's false that he didn't disclose his affiliation with OpenAI.

a year ago

sleepychu

> Readers, as they do, asked me to respond. Alright, alright. While the open letter is presumably targeted at OpenAI more than any other entity, and while I’ve been spending the year at OpenAI to work on theoretical foundations of AI safety, I’m going to answer strictly for myself.

a year ago