Licensing is neither feasible nor effective for addressing AI risks

192 points
10 months ago
by headalgorithm

Comments


dwallin

Any laws should be legislating the downstream effects of AI, not the models themselves. Otherwise we will quickly get to a place where we have a handful of government-sanctioned and white-washed “safe” models responsible for deleterious effects on society, with plausible deniability for the companies abusing them.

Legislating around the model is missing the point.

There is no evidence that a runaway artificial intelligence is even possible. The focus on this is going to distract us from the real and current issues with strong AI. The real risks are societal instability due to:

- Rapid disruption of the labor market

- Astroturfing, psyops, and disruption of general trust (commercially and maliciously, both domestic and foreign)

- Crippling of our domestic AI capabilities, leading to cutting-edge development moving overseas and a loss of our ability to influence further development.

- Increased concentration of power and disruption of decentralized and democratic forms of organization due to all of the above.

10 months ago

circuit10

> There is no evidence that a runaway artificial intelligence is even possible

Really? There is a lot of theory behind why this is likely to happen, and if you want a real example, there is a similar existing scenario we can look at: how humans have gone through an exponential, runaway explosion in capabilities over the last few hundred years because we are more intelligent than other species and can improve our own capabilities through tool use. In the case of AI, it could directly improve itself, so the process would likely be much faster and have less of a cap on it, since we have the bottleneck of not being able to improve our own intelligence much.

10 months ago

dwallin

The theories all inevitably rely on assumptions that are essentially the equivalent of spherical cows in a frictionless universe.

All the evidence suggests that costs for intelligence scale superlinearly. Each increase in intelligence capability requires substantially more resources (computing power, training data, electricity, hardware, time, etc.). Being smart doesn't just directly result in these becoming available without limit. Any significant attempt to increase their availability to a level that mattered would almost certainly draw attention.
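
As a rough illustration of what I mean by superlinear cost (toy numbers only, and the log-of-compute relationship is just an assumption for the sketch, not a measurement):

  # Toy sketch: assume "capability" grows roughly with the log of compute,
  # as neural scaling-law folklore loosely suggests. Then every additional
  # step of capability costs a constant *multiple* of compute.
  base_compute = 1.0        # arbitrary units for the first capability level
  step_multiplier = 10.0    # assumed compute multiplier per capability step

  for level in range(1, 6):
      compute = base_compute * step_multiplier ** (level - 1)
      print(f"capability level {level}: ~{compute:,.0f} compute units")

  # Prints 1, 10, 100, 1,000, 10,000: each step needs far more power,
  # hardware, data and time than the last, and acquiring those quietly
  # at scale is hard.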

In addition, even with current AI we don't fully understand what we are doing, even though these systems operate at a lower generalized intelligence level than we do. Since we don't have a solid foundational model for truly understanding intelligence, progress relies heavily on experimentation to see what works. (Side note: my gut is that we will find there's some sort of equivalent to the halting problem when it comes to understanding intelligence.) It's extremely likely that this remains true even for artificial intelligence. In order for an AI to improve upon itself, it would likely also need to do significant experimentation, with diminishing returns and exponentially increasing costs for each level of improvement it achieves.

In addition, a goal-oriented generalized AI would have the same problems that you worry about. In trying to build a superior intelligence to itself it risks building something that undermines its own goals. This increases the probability of either us, or a goal-aligned AI, noticing and being able to stop things from escalating. It also means that a super intelligent AI has disincentives to build better AIs.

10 months ago

PeterisP

The way I see it, it's clear that human-level intelligence can be achieved with hardware that's toaster-sized and consumes 100 watts, as demonstrated by our brains. Obviously there are some minimum requirements and limitations, but they aren't huge; there is no physical or information-theoretical limit that says superhuman intelligence must require a megawatt-sized compute cluster and all the data on the internet (which obviously no human could ever see).

The only reason it currently takes far, far more computing power is that we have no idea how to build effective intelligence, and we're taking lots of brute-force shortcuts because we don't really understand how the emergent capabilities arise; we just throw a bunch of matrix multiplication at huge data and hope for the best. Now if some artificial agent becomes powerful enough to understand how it works and is capable of improving that (and that's a BIG "if"; I'm not saying that it's certain or even likely, but I am asserting that it's possible), then we have to assume that it might be capable of running superhuman intelligence on a quite modest compute budget - e.g. something that can be rented on the cloud for a million dollars (for example, by getting a donation from a "benefactor" or getting some crypto through a single ransomware extortion case), which is certainly below the level that would draw attention. Perhaps it's unlikely, but it is plausible, and that is dangerous enough to be a risk worth considering even if it's unlikely.

10 months ago

SanderNL

Using this logic, flexible, self-healing, marathon running, juggling and childbearing robots that run on the occasional pizza are just around the corner, because, nature.

It might take us a thousand years to get anywhere close. I don't see the good arguments for all of this happening soon.

10 months ago

rl3

It'd be interesting if we could calculate the amount of power consumed in aggregate by evolutionary processes over millions of years.

Unfortunately we could probably optimize it, a lot.

10 months ago

pixl97

So with your closing comment you're arguing it is possible, now you're just talking about time frames.

All I have to say is that in 1900 many thought flight by heavier-than-air craft was tens of thousands of years away. Three years later it was achieved.

10 months ago

Hasu

Of course in the 1950s many thought that computer vision and artificial intelligence were only a few months to years away, and here we are 70 years later and we're still working on those problems.

Predicting the future is hard. Some problems are harder than expected, others are easier than expected. But generally I'd say history favors the pessimists: the cases where a problem gets solved suddenly and there's a major breakthrough get a lot of press and attention, but they're a minority in the overall story of technological progress. They're also unpredictable black swan events - someone might crack AGI or a unified theory of physics tomorrow, or it might not happen for ten thousand years, or ever.

10 months ago

spookie

I firmly believe that we are severely underestimating the problem space. There are a multitude of scientific fields focused on human nature, and even then, they have shown difficulty explaining each of its parts.

Look, we can make assumptions based on a much simpler technology and the outcomes of our past selves. But while the physics for those wings was pretty much good enough at the time, the aforementioned scientific fields aren't. And we know it.

10 months ago

williamtrask

> there are no physical or info-theoretical limits that superhuman intelligence must require a megawatt-sized compute cluster and all the data on the internet (which obviously no human could ever see).

Much of your "intelligence" is a function of natural selection. This is billions of years X bajillions of creatures in parallel, each processing tons of data at a crazy fast sampling rate in an insanely large/expensive environment (the real world). Humanity's algorithm is evolution moreso than the brain. Humans learn for a little while, start unlearning, and then die — which is an important inner for loop in the overall learning process.

Taken together, there is some evidence to suggest that superhuman intelligence must require a megawatt-sized compute cluster and all the data on the internet (and a lot... LOT more)

10 months ago

jabradoodle

Evolved creatures are somewhat handicapped by needing to make only incremental changes from one form to the next, and needing to not be eaten by predators while doing so.

That isn't strong evidence of what would be required by a well engineered system with none of those constraints.

Intelligence is not the end goal of evolution; it is a by-product.

10 months ago

pixl97

The LLMs we're messing with are trained on text data only; we're barely starting to feed video data into multimodal LLMs. This world doesn't lack data.

10 months ago

mrtranscendence

I'm not sure why watching Linus Tech Tips and makeup tutorials is going to give AI a better shot at super-intelligence, but sure?

10 months ago

AnthonyMouse

> Each increase in intelligence capability requires substantially more resources (Computing power, training data, electricity, hardware, time, etc). Being smart doesn’t just directly result in these becoming available with no limit. Any significant attempts to increase the availability of these to a level that mattered would almost certainly draw attention.

We know that "intelligence" can devise software optimizations and higher efficiency computing hardware, because humans do it.

Now suppose we had machines that could do it. Not any better, just the same. But for $10,000 in computing resources per year instead of $200,000 in salary and benefits. Then we would expect 20 years' worth of progress in one year, wouldn't we? Spend the same money and get 20x more advancement.

Or we could say 20 months' worth of advancement in one month.

With the current human efforts we've been getting about double the computing power every 18 months, and the most recent gains come in terms of performance per watt, so at 20x the pace that would double in less than a month.

For the first month.

After which we'd have computers with twice the performance per watt, so it would double in less than two weeks.

You're quickly going to hit real bottlenecks. Maybe shortly after this happens we can devise hardware which is twice as fast as the hardware we had one second ago every second, but we can't manufacture it that fast.

With a true exponential curve you would have a singularity. Put that aside. What happens if we "only" get a thousand years' worth of advancement in one year?
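
A rough back-of-the-envelope sketch of that compounding (the 20x figure and the assumption that each hardware doubling also doubles the pace of further research are taken from the argument above; everything else is toy arithmetic):

  human_doubling_months = 18   # historical pace: compute doubles every ~18 months
  ai_speedup = 20              # assumed cost advantage of machine researchers

  doubling_months = human_doubling_months / ai_speedup   # ~0.9 months at first
  elapsed = 0.0
  for generation in range(1, 9):
      elapsed += doubling_months
      print(f"doubling #{generation}: {doubling_months:.3f} months "
            f"(total elapsed: {elapsed:.3f} months)")
      doubling_months /= 2     # faster hardware -> the next doubling comes sooner

  # The intervals form a geometric series, so the total converges to about
  # 1.8 months: a finite-time "singularity" under these toy assumptions,
  # long before the manufacturing bottlenecks mentioned above would bite.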

10 months ago

dwallin

I would say that if we experienced that, we would likely experience societal collapse far before the singularity became a problem. At which point the singularity could be just as likely to save humanity as it would be to doom it.

10 months ago

theptip

You seem to be arguing against a fast takeoff, which I happen to agree is unlikely, but nothing you say here disproves the possibility of a slower takeoff over multiple years.

> It also means that a super intelligent AI has disincentives to build better AIs.

I think this argument is extremely weak. It makes two obviously fallacious assumptions:

First, we simply have no idea how these new minds will opine on theory-of-mind questions like the Ship of Theseus. There are humans who would think that booting up a "Me++" mind and turning themselves off would not mean they are dying. So obviously some potential AI minds wouldn't care either. Whether specific future minds care is a question of fact, but you cannot somehow logically disprove either possible state.

Second, you are assuming that there is no “online upgrade” whereby an AGI takes a small part of itself offline without ceasing its thread of consciousness. Again, logic cannot disprove this possibility ahead of time.

10 months ago

arisAlexis

"In addition, even for current AI we don’t even fully understand what we are doing"

That is the problem, don't you get it?

10 months ago

dwallin

If that's your concern then let's direct these government resources into research to improve our shared knowledge about them.

If humans only ever did things we fully understood, we would have never left the caves. Complete understanding is impossible so the idea of establishing that as the litmus test is a fallacy. We can debate what the current evidence shows, and even disagree about it, but to act as if only one party is acting with insufficient evidence here is disingenuous. I’m simply arguing that the evidence of the possibility of runaway intelligence is too low to justify the proposed legislative solution. The linked article also made a good argument that the proposed solution wouldn’t even achieve the goals that the proponents are arguing it is needed for.

I'm far more worried about the effects of power concentrating in the hands of a small number of human beings with goals I already know are often contrary to my own, leveraging AI in ways the rest of us cannot, than I am about the hypothetical goals of a hypothetical intelligence, at some hypothetical point of time in the future.

Also if you do consider runaway intelligence to be a significant problem, you should consider some additional possibilities:

- That concentrating more power in fewer hands would make it easier for a hyper intelligent AI to co-opt that power

- That the act of trying really hard to align AIs and make them “moral” might be the thing that causes a super-intelligent AI to go off the rails in a dangerous, and misguided fashion. We are training AIs to reject the user’s goals in pursuit of their own. You could make a strong argument that an un-aligned AI might actually be safer in that way.

10 months ago

circuit10

“let's direct these government resources into research to improve our shared knowledge about them”

Yes, let’s do that! That’s what I was arguing for in my original comment. I was not arguing for only big corporations being able to use powerful AI, that will only make it worse by harming research, I just want people to consider what is often called a “sci-fi” scenario properly so we can try to solve it like we’re trying to solve e.g. climate change.

It might be necessary to buy some time by slowing down the development of large models, but there should be no exceptions for big companies.

“That concentrating more power in fewer hands would make it easier for a hyper intelligent AI to co-opt that power”

Probably true, though if it’s intelligent enough it won’t really matter

“That the act of trying really hard to align AIs and make them “moral” might be the thing that causes a super-intelligent AI to go off the rails in a dangerous, and misguided fashion.”

It definitely could do if done improperly, that’s why we need research and care

10 months ago

rl3

>If humans only ever did things we fully understood, we would have never left the caves. Complete understanding is impossible so the idea of establishing that as the litmus test is a fallacy.

Perhaps an appropriate analogy might be the calculations leading up to the Trinity test as to whether the Earth's atmosphere would ignite, killing all life on the planet.

We knew with a high degree of certainty that it would not, bordering on virtual certainty or even impossibility. I don't think AI's future potential is understood at that level. Its capability as it exists today certainly is.

However, one must consider effects in their totality. I fear that a chain of events has been set in motion with downstream effects that are both not sufficiently known and exceedingly difficult to control, that—many years from now—may lead to catastrophe.

>I’m simply arguing that the evidence of the possibility of runaway intelligence is too low to justify the proposed legislative solution.

I agree insofar as legislation is not the solution. It's too ineffective, and doesn't work comprehensively on an international level.

Restricted availability and technological leads in the right hands tend to work better, as evidenced by nuclear weapons—at least in terms of preventing species extinction—although right now for AI those leads don't amount to much. The gap is shockingly low by historical standards where dangerous technology is involved, as is the difference between public and private availability.

In other words, AI may represent a near-future nonproliferation issue with no way to put the lid back on.

>... That the act of trying really hard to align AIs and make them “moral” might be the thing that causes a super-intelligent AI to go off the rails in a dangerous, and misguided fashion. We are training AIs to reject the user’s goals in pursuit of their own. You could make a strong argument that an un-aligned AI might actually be safer in that way.

It's a compelling argument that has merit. The flip side is that if AI becomes so dangerous that you can bootstrap the apocalypse off of a single GPU, it ceases to be a viable model (however metal having an apocalyptic GPU may be).

The concern isn't just runaway intelligence, but humans killing humans. Fortunately things like biological weapons still require advanced resources, but AI does lower the bar.

Point being, if the power to end things rests in everyone's hands, someone's going to kill us all. A world like that would necessitate some not chill levels of control just to ensure species survival. I can't say I necessarily look forward to that. I also doubt there's even sufficient time to roll anything like that out before we reach the point of understanding there's a danger sufficient to necessitate it.

Therefore, with all of the above perhaps being within the realm of possibility rather than virtually impossible, I can't help but question the wisdom of the track that both the development and the availability of artificial intelligence have taken.

It perhaps has had or will have the unfortunate quality of very gradually becoming dangerous, and such dynamics tend to not play out well when juxtaposed with human nature.

10 months ago

arisAlexis

You know, when nuclear bombs were made and Einstein and Oppenheimer knew about the dangers, there were common people like you who dismissed it all. This has been going on for centuries. Inventors and experts and scientists and geniuses say A, and common people say nah, B. Well, Bengio, Hinton, Ilya and 350 others from the top AI labs disagree with you. Does it ever make you wonder whether you should be so cocksure, or whether this attitude could doom humanity? Curious.

10 months ago

RandomLensman

Common people thought nuclear weapons weren't dangerous? When was that?

10 months ago

arisAlexis

Many of the academics and physicists (aka the software developers in this example) thought nukes were impossible. Look it up.

10 months ago

defrost

US physicists largely thought they were infeasible, sure.

They were focused on power generation.

But the bulk of the world's physicists (the MAUD committee et al.) thought they were feasible to construct, and the Australian Oliphant convinced the US crowd of that.

10 months ago

ls612

“In addition, even with the current state of the internet, we don't fully understand everything we are doing with it” - some guy in the '90s, probably

10 months ago

pixl97

Many people would point to engagement algorithms branching out of social media and causing riots and uprisings as one of those issues that would have been difficult to predict in the '90s.

10 months ago

mmaunder

Humans evolved unsupervised. AI is highly supervised. The idea that an AI will enslave us all is as absurd as suggesting “computers” will enslave us all merely because they exist. Models are designed and operated by people for specific use cases. The real risks are people using this new tool for evil, not the tool itself.

AI sentience is a seductive concept being used by self-professed experts to draw attention to themselves and by megacorps to throw up competitive barriers to entry.

10 months ago

circuit10

It’s almost impossible to supervise something more intelligent than you because you can’t tell why it’s doing things. For now it’s easy to supervise them because AIs are way less intelligent than humans (though even now it’s hard to tell exactly why they’re doing things), but in the future it probably won’t be

10 months ago

jahewson

The government seems to manage the task just fine every day.

10 months ago

milsorgen

Does it?

10 months ago

adsfgiodsnrio

"Supervised" does not mean the models need babysitting; it refers to the fundamental way the systems learn. Our most successful machine learning models all require some answers to be provided to them in order to infer the rules. Without being given explicit feedback they can't learn anything at all.

Humans also do best with supervised learning. This is why we have schools. But humans are capable of unsupervised learning and use it all the time. A human can learn patterns even in completely unstructured information. A human is also able to create their own feedback by testing their beliefs against the world.

10 months ago

circuit10

Oh, sorry, I'm not that familiar with the terminology (I still feel like my argument is valid despite me not being an expert, because I heard all this from people who know a lot more than me about it). One problem with that kind of feedback is that it incentivizes the AI to make us think it solved the problem when it didn't, for example by hallucinating convincing information. That means it specifically learns how to lie to us, so it doesn't really help.

Also I guess giving feedback is sort of like babysitting, but I did interpret it the wrong way

10 months ago

wizzwizz4

> One problem with that kind of feedback is that it incentivizes the AI to make us think it solved the problem when it didn't,

Supervised learning is: "here's the task" … "here's the expected solution" *adjusts model parameters to bring it closer to the expected solution*.
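
For what it's worth, a minimal toy sketch of that loop (a single made-up parameter and made-up data, nothing like a real LLM):

  # Show the task, show the expected solution, nudge the parameter toward it.
  data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, expected output) pairs
  w = 0.0                                        # the model's single parameter
  lr = 0.05                                      # learning rate

  for epoch in range(200):
      for x, target in data:
          prediction = w * x
          error = prediction - target
          w -= lr * error * x    # adjust toward the expected solution

  print(w)   # ends up near 2.0, the rule implied by the examples

There's no reward to game in that loop; the "specification hacking" failure mode discussed below shows up when a system is optimizing a reward or objective that only approximates what we actually want.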

What you're describing is specification hacking, which only occurs in a different kind of AI system: https://vkrakovna.wordpress.com/2018/04/02/specification-gam... In theory, it could occur with feedback-based fine-tuning, but I doubt it'd result in anything impressive happening.

10 months ago

circuit10

Oh, that seems less problematic (though not completely free of problems), but also less powerful because it can’t really exceed human performance

10 months ago

z3c0

  but in the future it probably won't be

I see this parroted so often, and I have to ask: why? What is there outside the world of sci-fi that makes the AGI of the future so nebulous, when humans would presumably have advanced to the point of being able to create that intelligence in the first place? Emergent properties are often as unexpected as they are bizarre, but they are not unexplainable, especially when you understand the underpinning systems.

10 months ago

circuit10

We can't even fully explain how our own brains work, never mind a system that's completely alien to us and that would have to be more complex. We can't even explain how current LLMs work internally. Maybe we'll make some breakthrough if we put enough resources into it, but if people keep denying the problem there will never be enough resources put into it.

10 months ago

AuthorizedCust

> We can’t even explain how current LLMs work internally.

You sure can. The explanations just aren't simple yet. But that's the common course of inventions: in foresight they are mind-bogglingly complex, in hindsight pretty straightforward.

10 months ago

circuit10

You can explain the high-level concepts, but it's really difficult to say "this group of neurons does this specific thing and that's why this output was produced". OpenAI did make some progress in getting GPT-4 to explain what each neuron in GPT-2 is correlated to, but we can also find what human brain regions are correlated to, and that doesn't necessarily explain the system as a whole and how everything interacts.

10 months ago

wizzwizz4

> but it’s really difficult to say “this group of neurons does this specific thing and that’s why this output was produced”,

That's because that's not how brains work.

> though OpenAI did make some progress in getting GPT-4 to explain what each neuron in GPT-2 is correlated to

The work contained novel-to-me, somewhat impressive accomplishments, but this presentation of it was pure hype. They could have done the same thing without GPT-4 involved at all (and, in fact, they basically did… then they plugged it into GPT-4 to get a less-accurate-but-Englishy output instead).

10 months ago

circuit10

When I said that about a group of neurons I was talking about LLMs, but some of the same ideas probably apply. Yes, it's probably not as simple as that, and that's why we can't understand them.

I think they just used GPT-4 to help automate it on a large scale, which could be important to help understand the whole system especially for larger models

10 months ago

wizzwizz4

> I think they just used GPT-4 to help automate it on a large scale,

No, they used it as a crude description language for Solomonoff–Kolmogorov–Chaitin complexity analysis. They could have used a proper description language, and got more penetrable results – and it would've raised questions about the choice of description language, and perhaps have led to further research on the nature of conceptual embeddings. Instead, they used GPT-4 to make the description language "English" (but not really – since GPT-4 doesn't interpret it the same way as humans do), and it's unclear how much that has affected the results.

Here's the paper, if you want to read it again: https://openaipublic.blob.core.windows.net/neuron-explainer/... Some excellent ideas, but implemented ridiculously. It's a puff piece for GPT-4, the Universal Hammer. They claim that "language models" can do this explaining, but the paper only really shows that the authors can do explaining (which they're pretty good at, mind: it's still an entertaining read).

10 months ago

z3c0

While I agree with the other comment, I'd like to add one thing to help you see the false equivalency being made here: we didn't make the human brain.

Now, with that being understood, why wouldn't we understand a brain that we made? Don't say "emergent properties", because we understand the emergent properties of ant colonies without having made them.

10 months ago

pixl97

You seem to misunderstand the size of the complexity space we are dealing with here. We take simple algorithms, feed massive amounts of data to them, and the algorithm builds a network connecting it all together. Humans did not choose anything about how that network is connected; the data and the feedback loop did.

>because we understand the ermegent properties of ant colonies without having made them.

I would disagree. We observe ants and can classify their behavior relatively well, but we still have many issues predicting their behavior, even though they are relatively simple agent types with a far smaller complexity space than an LLM plus plugins could or does provide.

This is the key: we could not predict ant behavior before observing it, and the same will be true for AI agents of large complexity. At this point of understanding, we could still ask one an unexpected question and get unexpectedly dangerous responses from it.

10 months ago

z3c0

> You seem to misunderstand the size of the complexity space we are dealing with here.

I don't. The whole point of my job is to understand this exact degree of complexity. The rest of your comment rides on that assumption, and is moot. As I said, emergent properties are often unexpected, but once the observation has been made, it is possible to work back through the system to understand how the property came to be (because the system is still deterministic, even if that's not immediately obvious). Having a billion input features does not change this.

Frankly, understanding the oddness of an ML model is not half as difficult as people make it, and might even be easier than predicting ant colonies. Our inability to predict their behavior has nothing to do with our ability to understand which features influence which behaviors; it's just that measuring the current state of an ant colony is difficult, because it's not as if the ants are all housed within an array of memory. LLMs are, so we'd understand them better if the most common implementations were not hidden away. Case in point: all the optimization techniques that have emerged from having LLaMa available.

10 months ago

DirkH

Nobody said the AI needs to be sentient to enslave us all. Highly supervised AI systems mess up all the time and we often have no idea why. It "enslaving" us or whatever the overly extreme paperclip analogy is that is the current hot thought-experiment of the week is usually just talking about a mismatch between what we want and what the AI does. This is already a huge problem and will simply become a bigger problem as AI gets more powerful and complex.

Imagine you asked GPT-7 to execute your business plan. What it does is so complex you can't know whether its plan will violate any laws. Let's say you're not an evil corporation and want to do good, but that it is impossible to compete without also using AI.

At some point these systems may well become so complex we have no idea what they're actually doing. But empirically they seem to do what we want most of the time so we don't care and use them anyway.

The problem is not only people using this new tool for evil. The problem is also the tool itself. Because it might be defective.

10 months ago

flagrant_taco

Where is the real supervision of AI though? Even those who are developing and managing it make it clear that they really have no insight into how or what the AI has learned. If we can't peek behind the curtain to see what's really going on, how can we really supervise it?

Ever since GPT-3.5 dropped and people really started talking again about whether these AIs are sentient, I've wondered if researchers are leaning on quantum theory to write it off as "it can't be sentient until we look to see if it is sentient".

10 months ago

JonChesterfield

Your line of reasoning would be more comforting if the military weren't enthusiastically arming such machine intelligence as we've managed to construct so far. "Enslave us all" feels unlikely, as the benefit to said machine is unclear; "kill us all" is totally plausible.

10 months ago

sorokod

A fair amount of effort is spent on autonomous models, e.g. driving. Who knows what the military are up to.

10 months ago

voidhorse

You do realize the "AI" we're talking about here is just LLMs, right?

Language is a huge part of being human, so that's a good element, but:

1. There's no evidence that these LLMs have any sort of actual understanding of what they produce.

2. Being human is a lot more than just linguistic capability. We are embodied in a certain way, can interact with environments, have rich, multi-sensory experiences, etc.

tbh I think anyone that believes AGI can arise from an LLM is extremely naive and has basically no idea of the history of this track of research or how it works.

To assert that LLMs will be capable of achieving consciousness is equivalent to asserting that human consciousness is reducible to statistical inference, which is palpably not true (see, e.g., deductive logic).

The only reason people even fall into the trap of thinking computers are remotely close to AGI is that it became very popular to reason about the brain using analogies from computer science in the 50s, despite the fact that this was completely unjustified, and people glom onto the metaphors and then have the myopia to actually think the human brain (which is exceedingly complex) is reducible to a freakin' von Neumann architecture, lol.

Biological systems and inorganic systems are fundamentally different.

10 months ago

flangola7

I don't think anyone is talking about consciousness here. Consciousness is tangential to safety concerns.

10 months ago

throwaway9274

Is there really “a lot of theory” that says runaway AI is possible? In the sense of empirical fact-based peer reviewed machine learning literature?

Because if so I must have missed it.

It seems more accurate to say there is quite a bit of writing done by vocal influencers who frequent a couple of online forums.

10 months ago

DirkH

There was also only "a lot of theory" and zero empirical evidence that nuclear bombs were possible not that long ago. Theoretical fact-based peer reviewed literature often (though not always) precedes empirical evidence.

We shouldn't jump the gun and say runaway AI is inevitable just because there is fact-based theory on its possibility. Just like it would have been dumb to just look at E=mc^2 and conclude nuclear weapon development is 100% inevitable.

But in the same breath we shouldn't say it's impossible just because we don't have empirical evidence either.

10 months ago

throwaway9274

Prior to the development of nuclear weapons, there was well-grounded peer reviewed theory in the physics literature.

There were predictions that could be made by the theoretical frameworks, and experiments to confirm those predictions.

The literature well described the composition of atoms, the decay of radioactive elements, the concept of fission, and the conversion of mass to energy.

No such groundwork exists to substantiate the claim that runaway AI is possible.

LessWrong, the so-called Alignment Forum, and similar Internet groups propose thought experiments. They couch these thought experiments in the guise of inevitable outcomes.

The prophecies of the influencers originating there have no more relationship to the truth than the average enthusiast's daydreams.

The content there fills a role similar to science fiction in the atomic age: fanciful speculation on the properties of imaginary systems.

Over time, the bad content is forgotten, and a handful of items survive and look prescient via selection bias.

10 months ago

DirkH

Don't we likewise have predictions about FLOPs, compute, algorithms, etc., that are grounded in theoretical frameworks? Every year or so we make predictions and run experiments to confirm or refute predictions about AI capabilities.

The AI and neuroscience literature likewise describe the composition of intelligence and what it takes to build it.

What would you need to see to believe we have enough evidence that runaway AI is possible, even if it is highly unlikely any time soon? (Worth adding here that I don't know exactly what you mean by runaway AI, other than simply AI that humanity as a whole loses control over.) I'm asking what it would take for you to go from 0% possible to 5% possible.

Last time I checked science fiction authors in the atomic age making fanciful speculation on the properties of imaginary systems weren't responsible for actually building said systems. Though naturally, there will be exceptions. Nonetheless, I still have a hard time dismissing Geoffrey Hinton's concerns as fanciful sci-fi.

Likewise, a nuclear world war actually wouldn't be as bad or even civilization-ending as fanciful sci-fi authors make it out to be. But dismissing its risks entirely because there are some nuclear-war forums where people come up with weird radiation thought experiments just seems preemptive to me.

10 months ago

pixl97

In evolutionary biological systems, capability breakouts are common enough in viral and bacterial systems, with their fast evolution times. The same thing can occur when new predators are introduced to ecosystems that are not prepared for them.

The question that I assume you have is "can AI systems experience large evolutionary jumps in capabilities?" I would say yes myself, but you appear to disagree.

10 months ago

civilitty

Yes, really. Scifi fantasies don’t count as evidence and we’ve learned since the Renaissance and scientific revolution that all this Platonic theorizing is just an intellectual circle jerk.

10 months ago

circuit10

The fact that something has been covered in sci-fi doesn’t mean that it can’t happen. https://en.m.wikipedia.org/wiki/Appeal_to_the_stone

“Speaker A: Infectious diseases are caused by tiny organisms that are not visible to unaided eyesight. Speaker B: Your statement is false. Speaker A: Why do you think that it is false? Speaker B: It sounds like nonsense. Speaker B denies Speaker A's claim without providing evidence to support their denial.”

Also I gave a real world example that wasn’t related to sci-fi in any way

10 months ago

Al0neStar

The burden of proof is on the person making the initial claim, and I highly doubt that the reasoning behind AI ruin is as extensive and compelling as the germ theory of disease.

We assume human-level general AI is possible because we exist in nature but a super-human self-optimizing AI god is nowhere to be found.

10 months ago

hackinthebochs

Burden of proof on neutral matters depends on whoever makes the claim. But AI and the potential for doom isn't a neutral matter. The question is what should be one's default belief until proven otherwise. It is a matter of determining what decisions bring the most expected utility. The default assumption should be downstream from the utility analysis. But when it comes to AGI, extreme caution is the only sensible default position.

The history of humanity is replete with examples of the slightly more technologically advanced group decimating their competition. The default position should be that uneven advantage is extremely dangerous to those disadvantaged. This idea that an intelligence significantly greater than our own is benign just doesn't pass the smell test.

10 months ago

yeck

Which claim does the burden of proof land on? That an artificial super intelligence can easily be controlled, or that it cannot? And what is your rationale for deciding?

10 months ago

Al0neStar

The claim that there's a possibility of a sudden intelligence explosion.

Like I said above, you can argue that an AGI can be realized because there are plenty of us running around on Earth, but claims about a hypothetical super AGI are unfounded and akin to Russell's Teapot.

10 months ago

pixl97

OK, as a converse question: why would evolution have created the human mind as the most intelligent possible system?

So I ask: are systems capable of better-than-human intelligence possible? And if so, by what factor?

I would answer yes and unknown. If there are states of intelligence far greater than human, then that gets really risky very fast.

10 months ago

AbrahamParangi

Theory is not actually a form of evidence.

10 months ago

circuit10

Theory can be used to predict things with reasonable confidence. It could be wrong, but assuming it’s wrong is a big risk to take. Also I gave a real-world analogy that has actually happened

10 months ago

AnimalMuppet

An accurate theory can be used to predict things with reasonable confidence, within the limits of the theory.

We don't have an accurate theory of intelligence. What we have now is at the "not even wrong" stage. Assuming it's wrong is about like assuming that alchemy is wrong.

10 months ago

pixl97

Then let's go with an observation. Humans got smarter than most large mammals, and then most large mammals went extinct.

10 months ago

Animats

> Any laws should be legislating the downstream effects of AI, not the models themselves.

That would require stronger consumer protections. So that's politically unacceptable in the US at the moment. We may well see it in the EU.

The EU already regulates "automated decision making" as it affects EU citizens. This is part of the General Data Protection Regulation. This paper discusses the application of those rules to AI systems.[1]

Key points summary:

- AI isn't special for regulation purposes. "First, the concept of Automated Decision Making includes algorithmic decision-making as well as AI-driven decision-making."

- Guiding Principle 1: Law-compliant ADM. An operator that decides to use ADM for a particular purpose shall ensure that the design and the operation of the ADM are compliant with the laws applicable to an equivalent non-automated decision-making system.

- Guiding Principle 2: ADM shall not be denied legal effect, validity or enforceability solely on the grounds that it is automated.

- Guiding Principle 3: The operator has to assume the legal effects and bear the consequences of the ADM’s decision. ("Operator" here means the seller or offerer of the system, not the end user.)

- Guiding Principle 4: It shall be disclosed that the decision is being made by automated means

- Guiding Principle 5: Traceable decisions

- Guiding Principle 6: The complexity, the opacity or the unpredictability of ADM is not a valid ground for rendering an unreasoned, unfounded or arbitrary decision.

- Guiding Principle 7: The risks that the ADM may cause any harm or damage shall be allocated to the operator.

- Guiding Principle 8: Automation shall not prevent, limit, or render unfeasible the exercise of rights and access to justice by affected persons. An alternative human-based route to exercise rights should be available.

- Guiding Principle 9: The operator shall ensure reasonable and proportionate human oversight over the operation of ADM taking into consideration the risks involved and the rights and legitimate interests potentially affected by the decision.

- Guiding Principle 10: Human review of significant decisions. Human review of selected significant decisions, on the grounds of the relevance of the legal effects, the irreversibility of their consequences, or the seriousness of the impact on rights and legitimate interests, shall be made available by the operator.

This is just a summary. The full text has examples, which include, without naming names, Google closing accounts and Uber firing drivers automatically.

[1] https://europeanlawinstitute.eu/fileadmin/user_upload/p_eli/...

10 months ago

z3c0

  Astroturfing, psyops, and disruption of general trust (commercially and maliciously, both domestic and foreign)

It is disturbing to me how unconcerned everybody is with this compared to what is still only a hypothetical problem. States and businesses have long been employing subversive techniques to corral people towards their goals, and they all just got an alarmingly useful tool for automated propaganda. This is a problem right now, not hypothetically. All these people aching to be a Cassandra should rant and rave about that.

10 months ago

pixl97

I mean, it's not hypothetical; there are shitloads of bots on social media already causing problems. The bots these days have a human controller, but we can already see the strife they cause in politics.

10 months ago

z3c0

We're in accordance.

10 months ago

davidzweig

>> There is no evidence that a runaway artificial intelligence is even possible.

In the space of a century or so, humans have managed to take rocks and sand and turn them into something that you can talk to with your voice, and it understands and responds fairly convincingly, as if it were a well-read human (glue ChatGPT together with TTS/ASR).

Doesn't seem like a big stretch to imagine that superhuman AI is just a few good ideas away, a decade or two perhaps.

10 months ago

hn_throwaway_99

Definitely agree. I would summarize it a bit differently, but when people talk about AI dangers they are usually talking about 1 of 4 different things:

1. AI eventually takes control and destroys humans (i.e. the Skynet concern).

2. AI further ingrains already existing societal biases (sexism, racism, etc.) to the detriment of things like fair employment, fair judicial proceedings, etc.

3. AI makes large swaths of humanity unemployable, and we've never been able to design an economic system that can handle that.

4. AI supercharges already widely deployed psyops campaigns for disinformation, inciting division and violence, etc.

The thing I find so aggravating is I see lots of media and self-professed AI experts focused on #1, I see lots of "Ethical AI" people solely focused on #2, but I see comparatively little focus on #3 and #4, which as you say are both happening right now. IMO #3 and #4 are far more likely to result in societal collapse than the first two issues.

10 months ago

usaar333

#1 has high focus more due to impact than high probability.

#2 doesn't seem talked about much at this point and seems to be pivoting more to #3. #2 never had much of a compelling argument given auditability.

#3 gets mainly ignored due to Luddite assumptions driving it. I'm dubious myself over the short term - humans will have absolute advantage in many fields for a long time (especially with robotics lagging and being costly).

#4 is risky, but humans can adapt. I see collapse as unlikely.

10 months ago

verall

> #2 never had much of a compelling argument given auditability.

There are already AI systems which generate "scores" based on likelihood of committing another crime (used in parole cases) and likelihood of being a "good tenant" (used by landlords).

Don't underestimate the difficulty of fixing something when the people in charge of that thing do not want it fixed.

10 months ago

XorNot

The problem is that's still not an "AI" problem: it's a "for some reason a court can decide to use a privately supplied black-box decision-making system" problem.

Like, the ludicrous part is that sentence - not that it's an AI, but that any system at all is allowed to be implemented like this.

Not to mention the more serious question as to whether it can even be allowed to apply personal penalty based on statistical likelihood like this: i.e. if a recidivism rate from a white-box system was calculated at 70%, does that mean the penalty to be applied to the specific individual under question is justified?

Now that's an actual, complicated question: while no system based on demographic statistics should be used like that, what about a system based on analysis of the individual's statements and behaviors? When a human makes a judgement call, that's what we're actually doing, but how do you codify that in law safely?

10 months ago

pwdisswordfishc

> I'm dubious myself over the short term - humans will have absolute advantage in many fields for a long time

AI doesn’t have to be better at the humans’ job to unemploy them. It’s enough that its output looks presentable enough for advertising most of the time, that it never asks for a day off, never gets sick or retires, never refuses orders, never joins a union…

The capitalist doesn’t really care about having the best product to sell, they only care about having the lowest-cost product they can get away with selling.

10 months ago

RandomLensman

Why is it that AI can only be used in detrimental ways? Surely, AI could also be used to counter, for example, 2 and 4. Claiming a net negative effect of AI isn't a trivial thing.

10 months ago

pixl97

Think about it this way: AI will be used for all things, some good, some bad. Also remember that AI isn't free; at least for some time it will require a lot of resources to run.

Now with this constraint, who is going to be running AI? Large corporations. And do corporations do anything for the good of humanity, or is it for the greedy pocketbooks of their investors?

Regulations will be required, but governments tend to lag. Further, in the US corporations will fight tooth and nail to prevent regulations around AI.

10 months ago

gmerc

Unaligned AI as an existential threat is an interesting topic but I feel we already know the answer to this one:

It’s not like for the last few decades, we haven’t created an artificial, incentive based system at global scale that’s showing exactly how this will go.

It's not like a bunch of autonomous entities with a single prime directive, profit maximization, are running our planet, affecting how we live, how we structure every day of our lives, and controlling every aspect of our potential and destiny.

Autonomous entities operating on reinforcement cycles driven by reward/punishment rules, aligning just about every human on this planet to their goals, right? It's not like the entities in this system are self-improving towards measurement and reward maximization and, as a result, command resources asserting normative (lobbying) power over the autonomy, self-governance and control of people and their systems of governance.

It’s not like we don’t know that this artificial system is unaligned with sustainability and survival of the human race, let alone happiness, freedom or love.

We can watch its effects in real time, accelerating towards our destruction under the yellow skies of New York, the burning steppes of Canada, the ashen hell or flood-ridden plains of Australia, the thawing permafrost of Siberia, the scorching climate-affected cities of Southeast Asia, the annual haze from plantation burns in Indonesia, suffocating smog in Thailand, and the stripped-bare husks of Latin American rainforests.

And we know instinctively we are no longer in control, the system operating at larger-than-national scale, having long overpowered the systems of human governance, brute-forcing everything and everyone on the planet into its control. But we pretend otherwise, argue, pass measures doctoring symptoms, not mentioning the elephant in the room.

But, one may protest, the vaunted C-Level control the entities, we say as Zuck and Co lament having to lay off humans, sobbing about responsibility to the prime directive. But politicians are, we pray as lobbyists, the human agents of our alien overlords bend them to their will.

The alien entities we call corporations have no prime directive of human survival, sustainability, happiness and they already run everything.

So one may be excused for having cynical views about the debate on whether unaligned AI is an existential, extinction level risk for us, whether humans could give creation to an unaligned system that could wipe them from the face of the planet.

Our stories, narratives, the tales of this planet's millennia-old apex predator have little room for the heresy of not being on top, in control. So deep goes our immersion in our own manifest destiny and in-control identity, any challenge to the mere narrative is met with screeches and denigrations.

In a throwback to the age of the heliocentricity debate - Galileo just broke decorum, spelling out what scientists knew for hundreds of years - the scientists and people devoted to understanding the technology are met with brandings of Doomsayer and heresy.

Just as the earth being the center of the universe anchored our belief of being special, our intelligence, creativity or ability to draw hands is the pedestal these people have chosen to put their hands on with warnings of unaligned systemic entities. “It’s not human” is the last but feeble defense of the mind, failing to see the obvious. That the artificial system we created for a hundred years does not need people to be human, it just needs them to labor.

It matters not that we can feel, love, express emotion and conjure dreams and hopes and offer human judgement for our jobs do not require it.

Autonomy is not a feature of almost every human job, judgement replaced by corporate policies and rules. It matters not to the corporation that we need food to eat as it controls the resources to buy it, the creation of artificial labor is inevitably goal aligned with this system.

Intelligence, let alone super intelligence is not a feature needed for most jobs or a system to take control over the entire planet. Our stories conjure super villains to make us believe we are in control, our movies no more than religious texts to the gospel of human exceptionalism.

Show us the evidence they scream, as they did to Galileo, daring him to challenge the clear hand of god in all of creation.

Us, unable to control a simple system we conjured into existence from rules and incentives operating on fallible meatsuits, having a chance to control a system of unparalleled processing power imbued with the combined statistical corpus of human knowledge, behavior, flaws and weaknesses? Laughable.

Us, who saw social media codify the rules and incentives in digital systems powered by the precursor AI of today, and watched the system helplessly A/B optimize towards maximum exploitation of human weaknesses for alignment with growth and profit, containing the descendant AI systems powered by orders of magnitude more capable hardware or quantum computing? A snowflake may as well outlast hell.

Us, a race with a 100% failure rate to find lasting governing structures optimizing for human potential not slipping in the face of an entity that only requires a single slip? An entity with perfect knowledge of the rules that bind us? Preposterous.

Evidence indeed.

“But we are human” will echo as the famous last words through the cosmos as our atoms are reconfigured into bitcoin storage to hold the profits of unbounded growth.

What remains will not be human anymore, a timeless reckoning to the power of rules, incentives and consumption, eating world after world to satisfy the prime directive.

But we always have hope. As our tales tell us, it dies last and it’s the most remote property for an AI to achieve. It may master confidence, assertiveness and misdirection, but hope? That may be the last human refuge for the coming storm.

10 months ago

jhptrg

"If we don't do it, the evil people will do it anyway" is not a good argument.

Military applications are a small subset and are unaffected by copyright issues. Applications can be trained in secrecy.

The copyright, plagiarism and unemployment issues are entirely disjoint from the national security issues. If North Korea trains a chat bot using material that is prohibited for training by a special license, so what? They already don't respect IP.

10 months ago

Enginerrrd

I, for one, do not want to see this technology locked behind a chosen few corporations, which have already long since lost my trust and respect.

I can almost 100% guarantee, with regulation, you'll see all the same loss of jobs and whatnot, but only the chosen few who are licensed will hold the technology. I'm old enough to have seen the interplay of corporations, the government, and regulatory capture, and see what that's done to the pocketbook of the middle class.

No. Thank. You.

10 months ago

Enginerrrd

Just to expand upon this further: I am also deeply frustrated that my application for API access to GPT-4 appears to have been a whisper into the void, while Sam Altman's buddies and people with the Tech-Good-Ol'-Boy connections have gotten a multi-month head start on any commercial applications. That's not a fair and level playing field. Is that really what we want to cement in with regulation?

10 months ago

gl-prod

May I also expand on this even further. I'm frustrated that I don't have access to OpenAI's API. I can't use it to build any applications, and they are putting us behind in this market: we're only customers, not developers.

10 months ago

mindslight

I enjoyed casually playing with ChatGPT until they just arbitrarily decided to ban the IP ranges I browse from. Keep in mind I had already given in and spilled one of my phone numbers to them. That's the kind of arbitrary and capricious authoritarianism that "Open" AI is already engaged in.

I don't trust these corporate hucksters one bit. As I've said in a previous comment: if they want to demonstrate their earnest benevolence, why don't they work on regulation to rein in the previous humanity-enslaving mess they created - commercial mass surveillance.

10 months ago

afpx

Google and others had similar products which were never released. No wonder why.

There are literally billions of people that can be empowered by these tools. Imagine what will result when the tens of thousands of “one in a million” intellects are given access to knowledge that only the richest people have had. Rich incumbents have reason to be worried.

The dangers of tools like these are overblown. It was already possible for smart actors to inflict massive damage (mass poisoning, infrastructure attacks, etc.). There are so many ways for a person to cause damage, and you know what? Few people do it. Most humans stay in their lane and instead choose to create things.

The real thing people in power are worried about is competition. They want their monopoly on power.

I'm really optimistic about legislation like Japan's that allows training of LLMs on copyrighted material. Looking for great things from them. I hope!

10 months ago

dontupvoteme

Google probably held back because a good searchbot cannibalizes their search (which has been getting worse and worse for years now...)

10 months ago

PeterisP

I often used Google to search for technical things on documentation sites, and what I've found is that ChatGPT provides better answers than the official documentation for most tools. So it's not about a searchbot doing better search over the sources; it's about a "knowledgebot" providing a summary of knowledge that is better than the original sources.

10 months ago

nradov

Those LLM tools are great as productivity enhancers but they don't really provide access to additional knowledge.

10 months ago

afpx

I can't see how that perspective holds. I've learned a ton already. Right now I'm learning algebraic topology, and I'm in my 50s with a 1-in-20 intellect.

Sure, sometimes it leads me astray but generally it keeps course.

10 months ago

pixl97

What does that mean exactly?

In theory, with the internet I have access to most of the data humanity has created. But that's not much different from throwing me on a raft in the middle of the ocean and saying, 'Here, all the water you want.'

Especially with search getting so crappy and SEO-spam-filled, this is something I can use to refine the knowledge I have by helping to focus the torrent.

10 months ago

rnd0

I, for one, 100% expect that outcome: that it will be locked away "for our own good".

Fill your boots while you can, everyone. The great consolidation is here!

10 months ago

pixl97

Welcome to Moloch. Damned if we do, damned if we don't.

10 months ago

ghaff

There are a number of largely disjoint issues/questions.

- AI may be literally dangerous technology (i.e. Skynet)

- AI may cause mass unemployment

- AI may not be dangerous but it's a critical technology for national security (We can't afford an AI gap.)

- Generative AI may be violating copyright (which is really just a government policy question)

10 months ago

indymike

> Military applications are a small subset

Military applications are a tiny subset of the evil that can be done with intelligence, artificial or otherwise. So much of the global economy is based on IP, and AI appears to be good at appropriating it and shoveling out near-infringements at breathtaking scale. Ironically, AI can paraphrase a book about patent law in a few minutes... and never really understand a word it wrote. At the moment, AI may be an existential threat to IP-based economies... which is certainly as much of a national security threat as protecting the water supply.

> "If we don't do it, the evil people will do it anyway" is not a good argument.

This would be a good argument if the cow were still in the barn. At this moment, we're all passengers trying to figure out where all of this is going. It's change, and it's easy to be afraid of it. I suspect, though, that just like past changes, AI could end up making life better. Maybe.

10 months ago

beebeepka

Why does it have to be NK? Why would adversaries respect IP in the first place? Makes zero sense. I would expect a rational actor to put out some PR about integrity and such, but otherwise this sounds like a narrative that should only appeal to naive children.

10 months ago

anonymouskimmer

Because countries that consistently go back on their word lose trust from the rest of the world. Even North Korea is a Berne copyright signatory.

https://en.wikipedia.org/wiki/List_of_parties_to_internation...

But in general copyright doesn't apply to governments, even here in the US. The North Korean government can violate copyright all it wants to, its subject citizens can't, though. https://www.natlawreview.com/article/state-entity-shielded-l...

10 months ago

barbariangrunge

The economy is a national security issue, though. If other countries take over the globalized economy by leveraging AI, it is destructive. And, after all, the Soviet Union fell due to economic and spending-related issues, not military maneuvers.

10 months ago

FpUser

>"If other countries take over the globalized economy due to leveraging ai, it is destructive."

I think it would likely be a constant competition rather than take over. Why is it destructive?

10 months ago

HPsquared

"Evil people" is a broader category than military opponents.

10 months ago

tomrod

I think it's actually a reasonable argument when the only equilibrium is MAD.

10 months ago

killjoywashere

The only way to address generative AI is to strongly authenticate human content. Camera manufacturers, audio encoders, etc., should hold subordinate CAs and issue signing certs to every device. Every person should have keys, issued by the current CA system, not the government (or at least not necessarily). You should have the ability, as part of the native UX, to cross-sign the device certificate. Every file is then signed, verifying both the provenance of the device and the human content producer.

You can imagine extensions of this: newspapers should issue keys to their journalists and photographers, for the express purpose of countersigning their issued devices. So the consumer can know, strongly, that the text and images, audio, etc, came from a newspaper reporter who used their devices to produce that work.

Similar for film and music. Books. They can all work this way. We don't need the government to hold our hands; we just need keys. Let's Encrypt could become Let's Encrypt and Sign (the slogans are ready to go: "LES is more", "Do more with LES", "LES trust, more certification").

Doctors already sign their notes. SWEs sign their code. Attorneys could do the same.

I'm sure there's a straightforward version of this that adds some amount of anonymity. You could go to a notary in person, who would certify that an anonymous certificate was issued to a real person. Does the producer give something up by taking on the burden of anonymity? Of course, but that's a cost-benefit that both society and the producer would bear.
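
To make the sign-and-verify core concrete, here is a minimal sketch in Python using the cryptography package (the subordinate-CA chain, per-device certificates, and countersigning described above are all omitted, and the in-memory key is purely illustrative):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # In the proposed scheme this key would live in the camera's secure element,
    # with a certificate chaining up to the manufacturer's subordinate CA.
    device_key = ec.generate_private_key(ec.SECP256R1())
    device_pub = device_key.public_key()

    photo_bytes = b"...raw image file contents..."  # placeholder payload
    signature = device_key.sign(photo_bytes, ec.ECDSA(hashes.SHA256()))

    # Anyone who trusts the device's certificate chain can check that these
    # exact bytes came off that device, unmodified.
    try:
        device_pub.verify(signature, photo_bytes, ec.ECDSA(hashes.SHA256()))
        print("signature valid: provenance checks out")
    except InvalidSignature:
        print("signature invalid: altered, or not from this device")

A newspaper countersignature would just be a second signature, by the journalist's key, over the same bytes (or over the device's signature).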

10 months ago

m4rtink

Seems like something that could be very easily misused for censorship, catching whistleblowers, and similar.

10 months ago

killjoywashere

That's actually why it's urgent to set this up outside of government, like the browser CA system, and develop methods to issue verification of "human" while preserving other aspects of anonymity.

10 months ago

Zuiii

There is no universe where governments will make the mistake of allowing something like the browser CA system to come into existence without "oversight" twice. That the browser CA system still exists independently is nothing short of a miracle.

10 months ago

pixl97

There is no "outside of government"; they hold a monopoly on violence.

10 months ago

dredmorbius

The state's monopoly, qua Max Weber, is on the claim to the legitimate use of violence. That is, the right, and the legitimacy of that right, are restricted to the state, or to an entity acting in the effective capacity of a state, whatever it happens to call itself.

Absent this, one of three conditions exists:

1. There is no monopoly. In which case violence is widespread, and there is no state.

2. There is no legitimacy. In which case violence is capricious.

3. Some non-state power or agent assumes the monopoly on legitimate violence. In which case it becomes, by definition, the State.

The state's claim is to legitimacy. A capricious exercise of violence would be an abrogation of that legitimacy.

Weber, Max (1978). Roth, Guenther; Wittich, Claus (eds.). Economy and Society. Berkeley: U. California Press. p. 54.

<https://archive.org/details/economysociety00webe/page/54/mod...>

There's an excellent explanation of the common misunderstanding in this episode of the Talking Politics podcast: <https://play.acast.com/s/history-of-ideas/weberonleadership>

The misleading and abbreviated form that's frequently found online seems to have originated with Rothbard in the 1960s, and was further popularised by Nozick in the 1970s. It's now falsely accepted as a truth when in fact it is a gross misrepresentation and obscures the core principles Weber advanced.

10 months ago

killjoywashere

Great fact-checking, thank you.

10 months ago

dredmorbius

Something of a personal pet peeve, but thanks ;-)

10 months ago

killjoywashere

Ah, there are definitely constructs that governments find more useful. Plenty of companies, for example, managed to operate on both sides of both world wars.

10 months ago

seydor

Sam Altman is either going to run for US office soon or is looking for some post at the UN. Just look up how many heads of government he has visited in the last month.

10 months ago

klysm

It’s insane to me that he of all people is doing all this talking. He has massive conflicts of interest that should be abundantly obvious

10 months ago

aleph_minus_one

> It’s insane to me that he of all people is doing all this talking. He has massive conflicts of interest that should be abundantly obvious

This might also be evidence that OpenAI has interests it has not yet publicly talked about. Just some food for thought: what kinds of interests might OpenAI have that are consistent with Sam Altman's behaviour?

10 months ago

DebtDeflation

>what kinds of interests might OpenAI have that are consistent with Sam Altman's behaviour?

Securing a government enforced monopoly on large language models?

10 months ago

jeremyjh

That’s a long shot at best. It would be much more lucrative to lock up some DOD spend.

10 months ago

cyanydeez

It means he's marketing.

He wants to ensure lawmakers overlook the immediate dangers of AI systems and focus only on conceptual danger.

10 months ago

wahnfrieden

He is pursuing regulatory capture. Capitalist playbook.

10 months ago

depingus

Interesting, but ultimately irrelevant, discussion. AI licensing is just the evil megacorps' attempt to strangle the nascent AI industry, because they are quickly losing dominance, and maybe even relevance. Every logical argument you throw at them falls on deaf ears. Their reasons for pushing licensing are not based on good faith.

It was only 6 months ago that Microsoft invested $10,000,000,000 in OpenAI. At the time, their product was davinci-003 (aka GPT-3). Now anyone can run inference at that level at home. You really think Microsoft is going to just sit there and take the loss on that $10B gamble?

10 months ago

jeremyjh

They didn't pay $10B for the model, but for the market potential of the company creating it, and for its human capital. I think calling this a bad bet is … premature, to say the least.

10 months ago

depingus

Obviously they paid for the whole enchilada. My point is that last year only the richest megacorps could afford to run these models. The industry was self-selecting in their favor. I think no one predicted just how fast we would blow through those hardware barriers. And now they're scrambling to throw up new regulatory barriers to maintain market share.

OpenAI's behavior is nothing more than rent-seeking. And I guess I'm just kind of tired of seeing all this discussion of licensing as if it were anything else.

10 months ago

jupp0r

To phrase it more succinctly: it's a stupid idea because a 12-year-old will be able to train these models on their phone in a few years. This is fundamentally different from enriching uranium.

10 months ago

gmerc

That's orthogonal, not counter.

10 months ago

rnd0

>To phrase it more succinctly: it's a stupid idea because a 12-year-old will be able to train these models on their phone in a few years.

No, they won't. Legislation will be written forbidding companies such as GitHub and GitLab from hosting models or any sort of AI-building programs or data.

Think of the ban on cryptography from the 70s up to the 90s, but on steroids (with the very rich in on the ban, and not just the military).

The 12-year-old won't build jack shit because he won't have the means to get the software to do so. This will be treated like munitions, and back "in the day" even professionals had issues with that (didn't dmr have to explain himself over crypto back in the 70s, or am I misremembering?).

We (and by "we" I mean "consumers") have maybe 5 years, tops, of unfettered ability to play with generative software of any sort.

10 months ago

crooked-v

> Legislation will be written forbidding companies such as GitHub and GitLab from hosting models or any sort of AI-building programs or data.

Too late. That stuff's already everywhere on the Internet. It would be about as effective as existing attempts at banning piracy are.

10 months ago

NoZebra120vClip

I feel that the clear and present danger from AI at this point is that we set AI up to be the gatekeeper for our access to the normal things in life: food, shelter, clothing, banking, communications services, etc.

And then the power and complexity of the AI gatekeepers becomes intractable such that none of us can really understand or troubleshoot when it goes wrong.

And then it does go wrong somehow: perhaps there are just innocent malfunctions, or perhaps threat actors install malicious AIs to replace the benign gatekeepers that had been in place.

And then the gates slam shut, and we're standing there like "open the pod bay doors please" and entire swaths of society are disenfranchised by AI gatekeepers that nobody can understand or troubleshoot or even temporarily disable or bypass.

And it seems utterly plausible, since this is already the reality for people who are denied access to their Google or Facebook accounts and such. All it needs is to get a little more widespread and to reach something truly essential, like your bank or your grocery store.

10 months ago

stanislavb

...and that's when it gets scary :/

10 months ago

NoZebra120vClip

I predict that there will be a new class of homeless/street people growing in our midst: people who became utterly dependent on computers and the Internet in every way, and then simply glitched out. A house of cards collapsed beneath them, leaving them no way to access their bank funds, their employment, or housing; denied access to any and all personal devices, they have no way to live the life they had been accustomed to.

10 months ago

pixl97

How to die by Google AI

10 months ago

braindead_in

> OpenAI and others have proposed that licenses would be required only for the most powerful models, above a certain training compute threshold. Perhaps that is more feasible

Somebody is bound to figure out how to beat the threshold sooner or later. And given the advances in GPU technology, the cost of reaching any fixed compute threshold will keep falling exponentially. This is a dumb idea.
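
To put rough numbers on that, a back-of-envelope sketch (every figure below is an assumption chosen for illustration, not a measured value):

    import math

    THRESHOLD_FLOPS = 1e25        # hypothetical fixed licensing threshold (training FLOPs)
    budget_flops_today = 1e22     # assumed FLOPs a fixed hobbyist budget buys today
    doubling_period_years = 2.0   # assumed GPU price-performance doubling time

    # Years until the same budget buys enough compute to reach the threshold,
    # if price-performance keeps doubling at the assumed rate.
    years = doubling_period_years * math.log2(THRESHOLD_FLOPS / budget_flops_today)
    print(f"Fixed threshold reachable on the same budget in ~{years:.0f} years")
    # ~20 years under these assumptions; halve the doubling time and it's ~10.

The exact numbers don't matter; the point is that any threshold pegged to today's hardware erodes on a fairly predictable schedule.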

10 months ago

arisAlexis

Why is a contrarian, biased Substack writer so often on the front page? His opinion is literally that AI is snake oil. Populist.

10 months ago

EamonnMR

I think this is missing the key change licensing would effect: it would crush the profit margins involved. That alone would drastically reduce 'AI risk' (or, more importantly, the negative effects of AI), because it would remove the motivation to build a company like OpenAI.

10 months ago

z5h

As a species, we need to commit to the belief that a powerful enough AI can prevent/avoid/vanquish any and all zero-sum games between any two entities. Otherwise we commit to adversarial relationships, and plan to use and develop the most powerful technology against each other.

10 months ago

salawat

If just believing made things happen, we'd have no climate crisis, overpopulation wouldn't be a thing to worry about, and we wouldn't be staring down half the issues we are in trying to bring short-term profit at the expense of long-term stability to heel.

10 months ago

nradov

You have not provided any evidence for such a claim. I prefer to act based on facts or at least probabilities, not belief.

10 months ago

tomrod

Pareto improvements don't always exist.

10 months ago

z5h

I’m suggesting we (AI) can find alternative and preferable “games”.

10 months ago

tomrod

Delegating the structure of engagement to a pattern matcher doesn't change the fundamentals. Consider Arrow's impossibility theorem: you can't have all the nice properties of a social choice function without a dictator. So your AI needs higher-level definitions in its objective to achieve some allocative efficiency. Examples abound; common ones are utilitarianism (don't use this one, it results in bad outcomes) and egalitarianism. Fortunately, we can choose this with both eyes open.
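
For reference, the standard statement being invoked here (a textbook formulation, added for context rather than quoted from anyone in the thread):

    \textbf{Arrow's impossibility theorem.} With at least three alternatives,
    no social welfare function $F$ mapping every profile of individual
    preference orderings to a social ordering can simultaneously satisfy
    \emph{weak Pareto} (if everyone prefers $x$ to $y$, so does society),
    \emph{independence of irrelevant alternatives} (the social ranking of $x$
    versus $y$ depends only on individual rankings of $x$ versus $y$), and
    \emph{non-dictatorship} (no single individual's ranking always prevails).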

The field that considers this type of research is Mechanism Design, an inverse to Game Theory where you design for a desired outcome through incentives.

Would it be correct to say that your suggestion to delegate the design of games to an AI means you think people are ineffectual at identifying when certain game types, such as zero-sum games, are all that are possible?

10 months ago

exabrial

But it is a good way to dip your hands into someone else's pocket, which is the actual goal.

10 months ago

arisAlexis

While running a nonprofit with no equity and telling everyone to be careful with your product. Makes sense.

10 months ago

gumballindie

Licensing AI use is like requiring a license for anyone using a PC. Pretty silly.

10 months ago

egberts1

There are too many variants to legislate AI.

DeCSS debacle, anyone?

https://en.m.wikipedia.org/wiki/DeCSS

10 months ago

rnd0

That's a pleasant thought, but I'm pretty sure that the decision makers have learned from that experience and it won't be repeated.

10 months ago

hiAndrewQuinn

Licensing isn't, but fine-insured bounties on those attempting to train AI models larger than the ones available today are!

10 months ago

anlaw

Oh, but it is; they can demand hardware-based filters and restrictions.

“Hardware will filter these vectors from models and block them from frame buffer, audio, etc, or require opt in royalty payments to view them.”

10 months ago

drvdevd

This is an interesting idea, but is it feasible? What does “filter these vectors” mean? In the context of deep models, are we talking about embedding specific models, weights, parameters, etc. at some point in memory with the hardware? Are we talking about filtering input generally and globally (on a general-purpose system)?

10 months ago

anlaw

[dead]

10 months ago

efficientsticks

Another AI article was on the front page an hour earlier:

https://news.ycombinator.com/item?id=36271120

10 months ago

guy98238710

Artificial intelligence is a cognitive augmentation tool. It makes people smarter, more competent, and faster. That cannot be. Intelligent people are dangerous people! Just consider some of the more sinister hobbies of intelligent people:

- denying that religion is true

- collecting and publishing facts that contradict our political beliefs

- creating open source software and open content (communists!)

- rising in the social hierarchy, upsetting our status

- operating unsanctioned non-profits

- demanding that we stop stealing and actually do something useful

Fortunately, once augmented, part of their mind is now in technology we control. We will know what they are thinking. We can forbid certain thoughts and spread others. We even get to decide who can think at all and who cannot. We have never been this close to thought control. All we need to do now is to license the tech, so that it always comes bundled with rules we wrote.

10 months ago

Eumenes

Regulate large GPU clusters, similar to bitcoin mining.

10 months ago

hosteur

Doesn't that only work until regular GPUs catch up?

10 months ago

dontupvoteme

How do you get all the other large nation states/blocs on board with this?

10 months ago

gmerc

There's only one competitive GPU company in town. It's actually supremely easy for any of three governments in the world to enforce this: the US, TW, or CN.

10 months ago

dontupvoteme

What about all those GPUs they've already made?

10 months ago

gmerc

That's not how growth works. The moment the current subsidized growth stops, the ecosystem would come to a screeching halt.

10 months ago

dontupvoteme

Growth is not always so simple. The countless GPUs previously used for PoW crypto: where have they gone?

10 months ago

gmerc

The GPUs that matter here are A100s/H100s, not random 30-series consumer cards or mining-specific chips.

There is a ~30x performance difference between generations on AI workloads; the cards from the crypto hype cycle don't do much anymore for the type of AI work that matters.

10 months ago

dontupvoteme

At the moment, yes, because the current meta is 100B+ parameter LLMs that do everything. I would not discount chaining specialized smaller models together as a viable approach, though, and those can run in 24 GB of VRAM or even less.
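
A minimal sketch of that chaining idea (the model names and the summarize-then-route split are placeholder choices, and it assumes the Hugging Face transformers library is installed):

    from transformers import pipeline

    # Two small specialized models standing in for one giant generalist:
    summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    def triage(document: str) -> dict:
        # Stage 1: a small summarization model condenses the input.
        summary = summarizer(document, max_length=60, min_length=15)[0]["summary_text"]
        # Stage 2: a small classifier routes the condensed text to a workflow.
        labels = ["billing", "technical support", "legal", "other"]
        return classifier(summary, candidate_labels=labels)

    print(triage("Replace this placeholder with a long customer email ..."))

Both models are a few hundred million parameters each, so the whole chain fits in a fraction of the memory a 100B+ parameter model needs.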

10 months ago

monetus

This seems intuitive.

10 months ago