Google drops pledge not to use AI for weapons or surveillance

617 points
12 days ago
by jbegley

Comments


a_shovel

I initially thought that this was an announcement for a new pledge and thought, "they're going to forget about this the moment it's convenient." Then I read the article and realized, "Oh, it's already convenient."

Google is a megacorp, and while megacorps aren't fundamentally "evil" (for some definitions of evil), they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise.

12 days ago

Retric

> while megacorps aren't fundamentally "evil" (for some definitions of evil),

I think megacorps being evil is universal. It tends to be corrupt cop evil vs serial killer evil, but being willing to do anything for money has historically been categorized as evil behavior.

That doesn’t mean society would be better or worse off without them, but it would be interesting to see a world where companies pay vastly higher taxes as they grow.

12 days ago

zelon88

You're talking about pre-Clinton consumerism. That system is dead. It used to dictate that the company that could offer the best value deserved to take over most of the market.

That's old thinking. Now we have servitization. Now the business that can most efficiently offer value deserves the entire market.

Basically, iterate until you're the only one left standing and then never "sell" anything but licenses ever again.

12 days ago

Ekaros

The bait-and-switch model is absolutely amazing as well. Start by offering a service covered with ads. Then add a paid tier to get rid of ads. Next add a tier with both payment and ads. And finally add ads back to every possible tier. Not to mention keeping ads embedded in the content the whole time.

12 days ago

int_19h

To quote the email from Hulu that recently dropped into my inbox:

> We are clarifying that, as we continue to increase the breadth and depth of the content we make available to you, circumstances may require that certain titles and types of content include ads, even in our 'no ads' or 'ad free' subscription tiers.

So at this point they aren't even bothering to rename the tier from "ad free" even as they put ads in it. Or maybe it's supposed to mean "the ads come free with it" now? Newspeak indeed.

12 days ago

majormajor

This goes back to the release of the no-ads Hulu plan, due at the time to fun shenanigans and weirdness around the exact licensing deals for a few shows. (At least one of those shows is VERY long-running now https://www.reddit.com/r/greysanatomy/comments/12prhpf/no_ad... - not sure if there have been any new ones through the years or currently.)

12 days ago

webspinner

Yes, that's definitely newspeak! It's also the reason why I run adblock. It's gotten me in trouble a few times with streaming services; they don't love it. I still run it.

11 days ago

Snowfield9571

What do you use that blocks ads from streaming services? I’ve had no luck.

6 days ago

mikestew

Oh, it has been that way with Hulu for at least a decade. Source: I paid money for their OG “ad-free” tier back in the day, only to end up seeing ads.

11 days ago

mikrotikker

Arrr matey climb aboard yer don't need Hulu where we're going

11 days ago

dustingetz

are they talking about trailers?

11 days ago

normalaccess

Advertising is just the surface layer—the excuse. Digital ads rely on collecting as much personal data as possible, but that data is the real prize. This creates a natural partnership with intelligence agencies: they may not legally collect the data themselves, but they can certainly buy access.

This isn’t new. Facebook, for example, received early funding from In-Q-Tel, the CIA’s venture capital arm, and its origins trace back to DARPA’s canceled LifeLog project—a system designed to track and catalog people’s entire lives. Big Tech and government surveillance have been intertwined from the start.

That’s why these companies never face real consequences. They’ve become quasi-government entities, harvesting data on billions under the guise of commerce.

12 days ago

zeroq

Years ago a friend working in security told me that every telco operator in Elbonia has to have a special room in their HQ that's available 24/7 to certain government officials. Men in black come and go as they please, and while what actually happens in that room remains a mystery, they can tap straight into the system from within, with no restrictions or traceability.

Growing up in the Soviet bloc I took that story at face value. After all, democracy was still a new thing, and people hadn't invented privacy concerns yet.

Since then I always thought that some sort of cooperation between companies like Facebook or Google and CIA/DOD was an obvious thing to everyone.

12 days ago

somenameforme

PRISM [1] is the best evidence of how short-lived most people's memories are. Microsoft, Yahoo, Google, and Facebook were the first 4 members. It makes it pretty funny when companies like Apple (who also joined more than a decade ago) talk about defending customers' privacy against government intrusion. There's so much completely cynical corporate LARPing for PR.

And if one wants to know why big tech from China isn't welcome, be it phones or social media, it's not out of fear of them spying on Americans, but because of the infeasibility of integrating Chinese companies into our own domestic surveillance systems.

[1] - https://en.wikipedia.org/wiki/PRISM

12 days ago

LargoLasskhyfv

Magagagia https://en.wikipedia.org/wiki/Room_641A

Allallarmia https://de.wikipedia.org/wiki/Sichere_Inter-Netzwerk_Archite...

(Really süperspeciälly VPN-hardware used to securely suck data out of ISPs with extradeutsche Gründlichkeit,

mandatory to be installed by law,

just in case,

for some random chase.)

Edit: Thinking of it, this is bubbling up https://en.wikipedia.org/wiki/Dagger_Complex ,

where Magagagia built some little base just 'a stone's throw' away from Allallarmia's former monopoly government telco's early internet exchange and HQ.

( https://de.wikipedia.org/wiki/Fernmeldetechnisches_Zentralam... )

What are the odds?

11 days ago

[deleted]
12 days ago

theoreticalmal

Elbonia? How much mud did the men in black have to wade through to get to the secret room?

11 days ago

weikju

Nice story, but…

> Years ago a friend working in security told me that every telco operator in Elbonia

See info about the fictional country of Elbonia here, from the Dilbert comics:

[0] https://en.wikipedia.org/wiki/Dilbert

12 days ago

webspinner

It's been happening since the invention of the internet. Wait, probably because that's where it came from. OK, OK, not the web itself, but first there was ARPANET.

11 days ago

[deleted]
12 days ago

CrimsonCape

If you have ever seen the prank interview between Elijah Wood and Dominic Monaghan, "Do you wear wigs? Have you worn wigs? Will you wear wigs?" and Elijah breaks down laughing in total shock at how hilariously bad the interview is...

...I just picture a similar conversation with a CEO going: "Sir, shareholders want to see more improvement this quarter." CEO: "Do we run ads? Have we run ads? Will we run ads this time?" (The answer is inevitably yes to all of these)

12 days ago

[deleted]
12 days ago

smgit

Someone has to pay for those ads.

That creates limits to the growth of an ad-based ecosystem.

So the thing to pay attention to is not the revenue growth or profit growth of a platform, but the price of an ad, the price to increase reach, the price to boost your post, the price of a presidential campaign, etc. These prices can't grow forever, just like housing prices, or we get the equivalent of a housing bubble.

Want to destabilize the whole system? Pump up ad prices.

12 days ago

the_other

This doesn’t make sense to me. Ads on the main networks are sold by auction. Price pumping is built into the system.

12 days ago

[deleted]
11 days ago

PaulDavisThe1st

I prefer the angle that describes this as a shift from value production to value extraction. Value production means coming up with new goods or services, or new/better ways to make existing ones. Value extraction means looking at existing economic exchanges, and figuring out how to get X percent of some of them.

12 days ago

idle_zealot

It was always a game of maximizing captured value. In such a game, creating value and capturing some portion of what you produce is far less effective than value extraction: moving value around such that you capture it, not someone else. A market, then, will by default encourage the latter strategy over the former.

However, if the society in charge of a market observes value extraction occurring, it can respond by outlawing the particular extraction strategy being employed and punishing the parties participating. Then, for some time, market participants will turn to producing value instead, making more humble profits, until another avenue for extraction becomes available and quickly becomes the dominant strategy again. This cycle continues until the market eats the forces that would seek to regulate it and rein in extractive practices.

That is what we're seeing here. At least in the US there is basically no political will behind identifying and punishing any new forms of harmful behavior, and we barely enforce existing laws regarding e.g. monopolies. Common wisdom among neoliberals and conservatives both is that big companies are good for the economy, and it's best to tread lightly in regulating their behaviors, lest we interrupt their important value production process. One wonders if there are perhaps financial incentives to be so pro-corporate.

12 days ago

PaulDavisThe1st

I would argue that since the dawn of capitalism (whenever you place that), there have been moral structures in place to promote value production and stigmatize value extraction. The precise balance between the two moral verdicts changes back and forth over time. In the USA in the 21st century we seem to have entered a period where the promotion of value production is unusually low and simultaneously the stigmatization of value extraction has dropped close to zero.

12 days ago

soco

All the more ironic nowadays because the most popular politicians are the highest value extractors, who moved the value production overseas, leaving the now-jobless voters angry instead at... immigrants/lgbtqa+/other races/other religions, who basically had no say and no role in the above move.

12 days ago

HeavyStorm

Licenses == Rent

That's why it's being tentatively called "Technofeudalism".

11 days ago

_DeadFred_

Don't forget the stock indexes that almost all retirement funds are required to put money into every month, versus the old-school stock market where it was an actual market, not a cable bill (you pay for the whole bundle whether you want it or not).

12 days ago

nradov

It's easy to set up an IRA where you can trade individual securities instead of index funds if that's what you want. Most people aren't competent traders and will underperform the index funds.

12 days ago

[deleted]
12 days ago

z2

Historically, unchecked corporate power tends to mirror the flaws of the systems that enable it. For example, the Gilded Age robber barons exploited weak regulations, while tech giants thrive on data privacy gray areas. Maybe the problem isn’t size itself, but the lack of guardrails that scale with corporate influence (e.g., antitrust enforcement, environmental accountability, or worker protections), but what do I know!

I guess corrupt cop vs serial killer is like amorality (profit-driven systems) vs immorality (active malice)? A company is a mix of stakeholders, some of whom push for ethical practices. But when shareholders demand endless growth, even well-intentioned actors get squeezed.

12 days ago

nonrandomstring

> amorality

That word comes with a lot of boot-up code and dodgy dependencies.

I don't like it.

Did Robert Louis Stevenson make a philosophical error in 1882 supposing that a moral society (with laws etc) can contain within itself a domain outside of morals [0]?

What if we coined the word "alegal"?

"Oh officer... what I'm doing is neither legal nor illegal, it's simply alegal."

[0] https://edrls.wordpress.com/2021/02/16/a-moral/

12 days ago

jongjong

Agreed, I think part of it boils down to the concept of 'limited liability' itself which is a euphemism for 'the right to carry out some degree of evil without consequence.'

Also, scale plays a significant part. Any high-exposure organization operating on a global scale has access to an extremely large pool of candidates to staff its offices... and such candidate pools necessarily include large numbers of any given persona, including ethically-challenged individuals and criminals. Without an interview process that actively selects for 'ethics', the ethically-challenged and criminal individuals have a significant upper hand in getting hired and then wedging themselves into positions of power within the company.

Criminals and ethically-challenged individuals have a bigger risk appetite than honest people so they are more likely to succeed within a corporate hierarchy which is founded on 'positive thinking' and 'turning a blind eye'. On a global corporate playing field, there is a huge amount of money to be made in hiding and explaining away irregularities.

A corporate employee can do something fraudulent and then hold onto their job while securing higher pay, simply by signaling to their employer that they will accept responsibility if the scheme is exposed; the corporate employer is happy to maintain this arrangement and feign ignorance while extracting profits so long as the scheme is kept under wraps... Then, if the scheme is exposed, the corporation will swiftly throw the employee under the bus in accordance with the 'unspoken agreement'.

The corporate structure is extremely effective at deflecting and dissipating liability away from itself (and especially its shareholders) and onto citizens/taxpayers, governments and employees (as a last layer of defense). The shareholder who benefits the most from the activities of the corporation is fully insulated from the crimes of the corporation. The scapegoats are lined up, sandwiched between layers of plausible deniability in such a way that the shareholder at the end of the line can always claim complete ignorance and innocence.

12 days ago

mananaysiempre

Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.

12 days ago

BrenBarn

> Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.

I think if you look at quality of life and happiness ratings in Norway it's pretty clear it's far from "entirely undesirable". It's good for people to do things for reasons other than money.

12 days ago

buckle8017

Norway is Saudi Arabia with snow.

Their entire economy and society are structured around oil extraction.

There are no lessons to learn from Norway unless you live somewhere that oil comes from the ground.

12 days ago

Retric

Hardly. Per capita they export similar amounts of petroleum products, but Norway's GDP is $80k/person vs $30k/person in Saudi Arabia. Norway exports slightly more per person, but their production costs are significantly higher, which offsets it.

The difference is Norway’s economy being far less dependent on petroleum which is only 40% of their exports.

12 days ago

[deleted]
12 days ago

dec0dedab0de

And the middle ground is to only enforce it on corporations in exchange for the protections given to the owners.

Want to make more? then take personal risk.

12 days ago

ilbeeper

Great, so we only want the real high-risk takers, the top gamblers, to play in the big league: those who are so rich they have no way to lose their personal comfort, are blind to the personal risk, and probably care just as little about anyone else's.

12 days ago

mech422

Don't we have that already? Bootstrapped startups with the founder's money on the line typically don't play in the 'big leagues' till way after the founder stops being at risk.

12 days ago

robertlagrant

> Great, so we only want the real high risk takers, the top gamblers,to play in the big league

It takes either the risk of private capital or future taxpayers' money to create big leagues. I'd take the former over the latter.

11 days ago

ilbeeper

And I prefer a cold committee that measures risk and is committed to some public values. You choose Silicon Valley, VCs, and no public healthcare. I prefer the Norwegian model.

10 days ago

robertlagrant

It works great when the innovation happens elsewhere and is freely shared.

9 days ago

sweeter

We're talking about corporations here; where are they going to go? If you had a competent government, you would say "fine, then leave. But your wealth and business are staying here." At some point the government has to do its job. These corporations pull in trillions of dollars; it's wild to me to suggest that suddenly everyone would stop working and making money because they were taxed at a progressive rate. It's an absurd assumption to begin with.

We could literally have high speed rail, healthcare, the best education on the planet and have a high standard of living... and it would be peanuts to them. Instead we have a handful of people with more wealth than 99% of everyone else, while the bottom 75% of those people live in horrifying conditions. The fact that medical bankruptcy is a concept only in the richest country on earth is deeply embarrassing and shameful.

12 days ago

sfn42

Those people are full of shit. I'm Norwegian and a software engineer. Income tax generally tops out at about 46%: if you earn $200k you'll pay around 40%, at $500k you'll pay 46% or so, and it doesn't go much higher than that even if you earn a million dollars (10 million NOK).

So the difference between earning a decent salary of $80-100k and a great salary north of $150k isn't much, tax-percent-wise. If you make another $1000 you take home about $500.

Also keep in mind we don't have to pay for health insurance, we don't have to pay for our kids to go to school, and if we get sick and can't work we have a social security net that will take care of us indefinitely. Norway is a great place to live. The people who complain about taxes are idiots who don't know how good they have it. If you make $200k+ you're living a fucking great life; if you make $400k it's even better. Hell, I used to make like $35k and I got by on that. $50k is perfectly liveable. And those people pay like 20-25%.

I'm happy to pay taxes. I'm doing great and I don't even earn that much yet. I expect to nearly double my salary within the next 5ish years. Maybe more than double.

Then you have middle-class-plus Norwegians with a big house, $100k+ car, sweet boat, cabin in the mountains etc. complaining about taxes. Man, shut up: you're literally top 0.1% in the world, you won the damn lottery.
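The gap between the top marginal rate and the overall rate described above can be sketched with a toy progressive bracket calculation. The thresholds and rates below are invented for illustration; they are not Norway's actual tax tables:

```python
# Toy progressive tax brackets (invented numbers, NOT Norway's real tables).
# Shows how a ~46% top marginal rate coexists with a much lower overall
# rate, and why an extra $1000 of salary nets roughly half.
BRACKETS = [(0, 0.20), (50_000, 0.35), (100_000, 0.46)]  # (threshold, rate)

def tax(income):
    total = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        # each bracket taxes only the slice of income falling inside it
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        total += max(0.0, min(income, hi) - lo) * rate
    return total

income = 200_000
print(round(tax(income) / income, 4))            # overall rate: 0.3675
print(round(tax(income + 1_000) - tax(income)))  # tax on an extra $1000: 460
```

With these made-up numbers, someone on $200k pays about 37% overall even though each marginal dollar is taxed at 46%, which is the shape of the figures in the comment.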

11 days ago

abdullahkhalids

Higher taxes are the wrong solution to a very valid problem.

We all recognize that democracy is the correct method for political decision making, even though it's also obvious that, in theory, a truly benevolent dictator could make better decisions than an elected parliament; in practice, such dictators don't really exist.

The same reasoning applies to economic decision making at the level of society. If you want a society whose economics reflects the will and ethics of the people, and which serves the benefit of normal people, the obvious thing is to democratize economic decision making. That means that all large corporations must be mostly owned by their workers in roughly 1/N fashion, not by a small class of shareholders. This is the obvious correct solution, because it solves the underlying problem rather than papering over the symptoms with taxation. If shareholder-owned corporations are extracting wealth from workers or doing unethical things, the obvious solution is to take away their control.

Obviously, some workers will still make their own corporations do evil things, but at least it will be collective responsibility, not forced upon them by others.

12 days ago

robertlagrant

The alternative is to make consumption the will of the people, so people buy things they want, and from vendors they like.

I think "this isn't free; you pay with ad views and your data is sold" is something that should be on a price tag on services that operate this way, though. It doesn't work if the price isn't clearly advertised.

11 days ago

robertlagrant

> but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.

One of the second order consequences of progressive taxation is that it increases gross wages for higher earners, as people care about their net pay being larger, not their gross pay.

An extreme example: in the UK there is an effective 60% tax rate between £100k and £120k (ish), so people's salaries get driven through that zone quickly. This obviously means there's less money to give to other people.
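For what it's worth, the 60% figure falls out of a small calculation, assuming the usual mechanism: the personal allowance is withdrawn at £1 for every £2 earned over £100k, so each extra pound in that zone drags another 50p into the 40% band (exact thresholds vary by tax year):

```python
# Effective marginal rate in the UK personal-allowance taper zone.
# Assumption: the allowance shrinks by £1 for every £2 earned over £100k,
# so each extra £1 of salary adds £1.50 of income taxed at 40%.
def effective_marginal_rate(band_rate=0.40, allowance_taper=0.5):
    return band_rate * (1 + allowance_taper)

print(round(effective_marginal_rate(), 2))  # 0.6, i.e. the effective 60%
```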

11 days ago

yodsanklai

> the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating

This is a cliché you hear from right-wingers in any country that has a progressive tax system.

Regarding Norway, taxes aren't in the same ballpark as in some US blue states.

Also, it's a very simplistic view to think that people are only motivated by money. Counterexamples abound.

12 days ago

robertlagrant

> This is a cliche you hear from right winger in any country that has a progressive tax system.

This ad hominem stuff is very out of place. Why not solely engage with the argument?

11 days ago

robocat

> This is a cliche you hear from right winger in any country that has a progressive tax system

Not a cliché: a fact. I'll explain.

The incentive structure of progressive taxation is wrong: it only works for the few percent that are extremely money hungry: the few that are willing to work for lower and lower percentage gains.

Normal people say "enough" and they give up once they have the nice house and a few toys (and some retirement money with luck). In New Zealand that is something like USD1.5 million.

I'm on a marginal rate of 39% in New Zealand. I am well off, but I am literally not motivated to try to earn anything extra, because the return is not enough for the extra effort or risk involved. No serial entrepreneurship for me, because it only has downside risk. If I invest and win, then 39%+ is taken as tax; even worse, if I lose, I can't claim my time back. Financial losses only claw back against future income, and my taxable income could drop to $0 due to a COVID-level event, so my financial risk is greater than it might naively appear.

Taxation systems do not fairly reward risk. Especially watch people with no money taking high risks and paying no insurance, because the worst that can happen to them is bankruptcy.

New Zealand loses because the incentive structure for a founder is broken. We are an island so the incentive structure should revolve around bringing in overseas income (presuming the income is spent within NZ). Every marginal dollar brought into the economy helps all citizens and the government.

The incentives were even worse when I was working but was trying to found a company. I needed to invest time, which had the opportunity cost of the wages I wouldn't get as a developer (significant risk that can't be hedged and can't be claimed against tax). 9 times out of 10 a founder wins approximately $0: so expected return needs to be > 10x. A VC fund needs something like > 30x return from the 1 or 2 winning investments. I helped found a successful business but high taxation has meant I haven't reached my 30x yet - chances are I'll be dead before I get a fair return for my risk. I'm not sure I've even reached 10x given I don't know the counterfactual of what my employee income would have become. This is for a business earning good export income.

Incentive structures matter - we understand that for employees - however few governments seem to understand that for businesses.

Most people are absolutely ignorant of even basic economics. The underlying drive is the wish to take from those that have more than them. We call it the tall poppy syndrome down here.

(reëdited to add clarity)

12 days ago

roca

I'm also on the 39% marginal income tax rate in New Zealand. That income tax rate isn't the problem. Keeping $60K out of every $100K extra salary I make is plenty of motivation to work harder to make the extra $100K... especially because the taxes paid aren't burned, they mostly go to things I care about.

The income tax rate isn't all that relevant to the costs and benefits of starting a company, so I don't understand that part of your story. The rewards for founding a successful company mostly aren't subject to income tax, and NZ has a very light capital gains regime.

I have started my own company and I do agree that there are some issues that could be addressed. For example, it would be fairer if the years I worked for no income created tax-deductible losses against future income.

But NZ's tax rates are lower than Australia and the USA and most comparable nations, and NZers start a lot of businesses, so I don't think that is one of our major problems at the moment.

12 days ago

robocat

> For example, it would be fairer if the years I worked for no income created tax-deductible losses against future income.

Hard to avoid cheaters.

A policy could be for the government to pay two years of your current salary, with one chance per person; however, I can't imagine how the government could get that into the budget.

The policy I implied is to reward winners with a tax break to offset their risk. Difficult to sell to any voters who don't understand risk/reward, or who believe business owners are greedy, worthless bâtards.

Ha: if the business fails you lose money (the wages you didn't receive), and if the business wins you are taxed: "Privatise the losses, socialise the gains" ;)

11 days ago

robocat

> Keeping $60K out of every $100K extra salary I make is plenty of motivation to work harder

That's good that it motivates you. It doesn't motivate me any more. I'm not interested in "investing" more time for the reasons I have said.

> the taxes paid aren't burned, they mostly go to things I care about.

I'm pleased for you. I'd like to put more money towards things I care about.

> The income tax rate isn't all that relevant to the costs and benefits of starting a company

I am just less positive than you: it feels like win you lose, lose you lose bigger. I'm just pointing out that our government talks about supporting businesses, but I've seen the waste from the repeated attempts to monetise our scientific academics.

> The rewards for founding a successful company mostly aren't subject to income tax

Huh? Dividends are income. Or are you talking about the non-monetary rewards of owning a business?

> NZ has a very light capital gains regime

Which requires you to sell your company to receive the benefit of the lack of CGT. So every successful business in NZ is incentivised to sell; NZ sells its jewels, because keeping a company means paying income tax every year. NZ is fucking itself by selling anything profitable, usually to foreign buyers.

The one big-ticket item I would like to save for is my retirement fund. But Labour/Greens want to take 50% to 100% of capital if you have over 2 million. A bullshit-low drawdown at 4% is $80k/annum before tax, LOL. Say investments go up by 6% per year and you want to withdraw 4%: then a 2% tax is 100% of your gains. Plus I'm certain they will introduce means testing for super before I am eligible. And younger people are even more fucked, IMHO. The reality is I need to plan to pay for the vast majority of my own costs when I retire, yet I get to pay to support everybody else. I believe in socialist health care and helping our elderly, but the country is slowly going broke and I can't do much about that. I believe that our government will take whatever I have carefully saved, often to pay for people who were not careful (my peer group is not wealthy, so I see the good and the bad of how our taxes are spent). Why should I try to earn more to save?
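The arithmetic in that drawdown claim can be checked directly. All figures here are the commenter's own assumptions: a 2 million pot, 6% returns, a 4% drawdown, and a hypothetical 2% annual tax on capital:

```python
# Checking the comment's figures with integer math.
capital = 2_000_000
growth = capital * 6 // 100       # 6% investment return for the year
drawdown = capital * 4 // 100     # the 4% withdrawal
capital_tax = capital * 2 // 100  # hypothetical 2% annual tax on capital

print(drawdown)           # 80000 -> the "$80k/annum before tax"
print(growth - drawdown)  # 40000 left to grow the pot after withdrawal
print(capital_tax)        # 40000 -> the 2% tax equals 100% of that gain
```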

12 days ago

roca

> It doesn't motivate me any more.

I find it hard to understand how $60K means no motivation but $100K would be highly motivating.

> I'd like to put more money towards things I care about.

You said later that you care about the public health system and helping the elderly. That's where a large percentage of our taxes go.

> Huh? Dividends are income. Or are you talking about the non-monetary rewards of owning a business?

No, I'm talking about selling all or part of the business. I agree with you that it's a problem our businesses often sell out to overseas interests who hollow out the company. But the general pattern of making most of your money by selling shares in the business is completely normal worldwide.

11 days ago

robocat

Perhaps "loss aversion" is important to me: I'm not a spherically rational Homo economicus.

Our society mostly works because of our non-monetary rewards, not because of monetary incentives. My teaching and nursing friends work for their own satisfaction, and more money is not a high priority to them.

I am not particularly motivated by money. I suspect you are a businessperson that believes money is strongly motivating? I chased financial success for 15 years when I started from $0: however I now hope I have enough and I hope it won't be unfairly taken from me. Yes, money was a big incentive then (and my personal costs have been very high), but now I have other goals.

I suspect I psychologically find high marginal taxation demotivating (48% if we include GST). Maybe because I have too many acquaintances and family sucking at the government tit. I see where government money goes because I have a wide variety of acquaintances including retirees, elderly, unhealthy, and unemployed. Yeah, I know they are not living the high life (well, maybe my drug-abusing anti-social acquaintances think they are).

> No, I'm talking about selling all or part of the business

Which requires an intense amount of work, and sometimes a significant loss, and usually requires selling 100%... Why should I sell at 4x earnings when I can hold on to the business - even if I don't want it? Taxation has too much influence on my investments because rebalancing across other investments has too high a cost/risk.

I guess I'm an idealist. I believe in startups, and I believe they help all New Zealanders. But the incentives of our taxation system mean that founding a startup is foolish: I don't recommend to anyone that they should be a founder (even though I have won the gamble). The unrecoverable costs of anything but spectacular success are too high. The non-monetary rewards are poor in my experience. The expected median return for a startup founder is about $0. Our social systems and taxation systems need to encourage business inception and growth so that all of NZ can be better off.

Thank you for your questions. It is always good to be asked why!

11 days ago

tanjtanjtanj

I’ve seen a lot of people in European countries and former European colonies decry the high tax rate as a reason for low entrepreneurship and just accepted it as a good enough reason but looking at the numbers and the reasoning specifically here made me start questioning things.

The marginal rate in NZ is 39%!? That’s LOWER than in California, the land of “serial entrepreneurship”, for anyone with a successful startup. Not to mention the US tax rate doesn’t include a myriad of other small taxes that for some reason are not included in that number. On top of having a higher tax rate the average Californian entrepreneur also has to source extremely expensive healthcare.

It sounds more like an excuse to keep doing what you already wanted to do rather than an actual demotivating factor.

11 days ago

robocat

Mentioning "myriad of other small taxes" in California just shows your unbalanced bias: NZ has a myriad of other costs that California doesn't.

Sales tax 15%, 91-octane petrol at USD 5.34/gallon, means testing for many things, no tax-friendly retirement savings (IRAs/Roths, whatever). Auckland housing is less affordable than San Francisco: https://www.visualcapitalist.com/least-affordable-cities-to-...

I pay for private healthcare insurance because I want better outcomes than waiting for years to get urgent surgery. I have seen loved ones literally killed by our healthcare system (unnecessary death - not just normal risks of medicine). Our public health system is good when it works but it has some sharp edges. Although I assume poor NZers are better off than poor Californians for healthcare access.

> It sounds more like an excuse to keep doing what you already wanted to do rather than an actual demotivating factor.

I am telling you that it demotivates me. We don't always know why we think things and I don't have to be perfectly rational. You might be right, but calling it an excuse is extremely rude.

11 days ago

giantg2

"the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating"

Sounds like the effort needed for bonuses here in the US. Why try if the amount is largely arbitrary and generally lower than your base salary pay rate when you consider all the extra hours? Everything is a sham.

12 days ago

nradov

Which industry? Bonuses in the tech industry tend to be somewhat arbitrary and thus ineffective for motivating employees. Bonuses in other industries like trading or investment banking tend to be larger (sometimes more than base salary) and directly tied to individual performance and so they're highly effective at motivating ambitious employees.

Increasing marginal income tax rates on highly compensated employees might be a good policy overall. But where are we on the Laffer curve? If we go too far then it really hurts the overall economy.

12 days ago

sudoshred

As scale grows, so does the moral ambiguity. Megacorps default to “evil” because, across a large number of circumstances and a large number of events, their actions do as well, particularly when economic factors (implicit or explicit) are motivating behavior. Essentially, being “non-evil” becomes more expensive than the value it adds. There is always someone on the other end of a transaction, by definition.

12 days ago

webspinner

Right! I was going to say something like that. Google is, in all honesty, corrupt. Then again, most big corporations are this way. Google and Microsoft seem to be a bit more so than others, though.

11 days ago

h0l0cube

> being willing to do anything for money has historically been categorized as evil behavior

Even megacorps will do categorically good things if it helps their bottom line.

11 days ago

Retric

We judge morality when there’s some meaningful downside and people at their worst, because a little unpleasantness can dramatically outweigh a lot of nice behavior.

“I love hanging out with Tim, he’s a funny guy, helped me move a couch last week. Kind of wish he hadn’t pushed me in front of that bus that one time, but ehh, I doubt he’d do that again…”

11 days ago

h0l0cube

Being pushed in front of a bus would be rather unpleasant.

11 days ago

ericmay

My problem with this take is that you forget corporations are made up of people, so in order for the corporation to be evil you have to take into account the aggregate desires and decision making of the employees and shareholders and, frankly, call them all evil. Calling them evil is kind of a silly thing to do anyway, but you cannot divorce the actions of a company from those who run and support it, and I would argue you can't divorce those actions from those who buy the products the company puts out either.

So in effect you have to call the employees and shareholders evil. Well those are the same people who also work and hold public office from time to time, or are shareholders, or whatever. You can't limit this "evilness" to just an abstract corporation. Not only is it not true, you are setting up your "problem" so that it can't be addressed because you're only moralizing over the abstract corporation and not the physical manifestation of the corporation either. What do you do about the abstract corporation being evil if not taking action in the physical world against the physical people who work at and run the corporation and those who buy its products?

I've noticed similar behavior with respect to climate change advocacy and really just "government" in general. If you can't take personal responsibility, or even try to change your own habits, volunteer, work toward public office, organize, etc. it's less than useless to rail about these entities that many claim are immoral or need reform if you are not personally going to get up and do something about it. Instead you (not you specifically) just complain on the Internet or to friends and family, those complaints do nothing, and you feel good about your complaining so you don't feel like you need to actually do anything to make change. This is very unproductive because you have made yourself feel good about the problem but haven't actually done anything.

With all that being said, I'm not sure how paying vastly higher taxes would make Google (or any other company) less evil or more evil. What if Google pays more taxes and that tax money does (insert really bad thing you don't like)? Paying taxes isn't like a moral good or moral bad thing.

12 days ago

Retric

> made up of people

People making meaningful decisions at megacorporations aren’t a random sample of the population; they are self-selected to care a great deal about money and/or power.

Honestly if you wanted to filter the general population to quietly discover who was evil I’d have a hard time finding something more effective. It doesn’t guarantee everyone is actually evil, but actually putting your kids first is a definite hindrance.

The morality of the average employee on the other hand is mostly irrelevant. They aren’t setting policies and if they dislike something they just get replaced.

12 days ago

ericmay

You'd never figure out who was "evil" because it's just based on your own interpretation of what evil is. Unless of course you want to join me as a moral objectivist? I don't think Google doing military work with the US government is evil. On the other hand, I think the influence and destruction caused by advertising algorithms is. Who gets to decide what is evil?

I take issue with "don't blame the employees". You need people to run these organizations. If you consider the organization to be evil you don't get to then say well the people who are making the thing run aren't evil, they're just following orders or they don't know better. BS. And they'd be replaced if they left? Is that really the best argument we have against "being evil"?

Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it! And all that says anyway is that those replacing those who leave are just evil too.

If you work at one of these companies or buy their products and you literally think they are evil you are either lying to yourself, or actively being complicit in their evil actions. There's just no way around that.

Take personal responsibility. Make tough decisions. Stop abstracting your problems away.

12 days ago

Retric

If your defense is trying to argue about what’s evil, you’ve already lost.

Putting money before other considerations is what’s evil. What’s “possible” expands based on your morality, it doesn’t contract. If being polite makes a sale you’re going to find a lot of polite salespeople, but how much are they willing to push that extended warranty?

> Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it!

I’ve constrained what I’m willing to do and who I’m willing to work for based on my morality, have you? And if not, consider what that says about you…

12 days ago

ericmay

> Putting money before other considerations is what’s evil.

Depends on the considerations and what you consider to be evil. My point wasn't to argue about what's evil, of course there is probably a few hundred years of philosophy to overcome in that discussion, but to point out that if you truly think an organization is evil it's not useful to only care about the legal fiction or the CEO or the board that you won't have any impact on - you have to blame the workers who make the evil possible too, and stop using the products. Otherwise you're just deceiving yourself into feeling like you are doing something.

12 days ago

Retric

Again, you say that as if I am using the products of companies I consider evil.

The fact you assume people are going to do things they believe to be morally reprehensible is troubling to me.

I don’t assume people need to be evil to work at such companies because I don’t assume they notice the same things I do.

12 days ago

ericmay

I was writing about the general case. I apologize if that wasn't clear from the start. I don't know anything about you personally though I'm sure we'd have some great conversations over a glass of wine (or coffee or whatever :) )!

> The fact you assume people are going to do things they believe to be morally reprehensible is troubling to me.

This seems to be very common behavior in my experience. Perhaps the rhetoric doesn't match the true beliefs. I'm not sure.

12 days ago

Retric

Ahh ok, sorry for misunderstanding you.

12 days ago

ericmay

It's my fault. Sometimes I'm not very clear.

12 days ago

robertlagrant

> I’ve constrained what I’m willing to do and who I’m willing to work for based on my morality, have you? And if not, consider what that say about you…

This sort of discussion gets a bit tricky because it often turns out one person is not having a discussion; they're trying to advertise something about themselves.

11 days ago

Retric

I’m not really judging other people here. I remember working on a project and realizing I was one of those cogs keeping ICBMs operating, and it really just hit home.

Not thinking anything about who you’re working for is just kind of the default. However, IMO if you do feel something is wrong then that’s when the obligation to carry through comes in.

11 days ago

robertlagrant

I don't think it's the default. Lots of people think about what they do, in my experience. If you think ICBMs are purely bad, fair enough, but I imagine lots of people believe they - particularly when not fired - perform a vital defensive service, and are worth working on for that reason.

11 days ago

int_19h

A large corporation is more than the sum of its owners and employees, though. Large organizations in general have an emergent phenomenon - past a certain threshold, they have a "mind of its own", so to speak, which - yes - still consists of individual actions of people making up the organization, but those people are no longer acting as they normally would. They get influenced by corporate culture, or fall in line because they are conditioned to socially conform, or follow the (morally repugnant) rule book because otherwise they will be punished, etc. It's almost as if it were a primitive brain with people as neurons, forced into configurations that, above all, are designed to perpetuate its own existence.

12 days ago

AndyNemmity

Corporations are totalitarian systems. The fact that the dictatorship is made up of people doesn't tell you anything about it.

11 days ago

ericmay

It's a voluntary totalitarian system though - you don't have to work at (insert company you think is evil) so your comparison falls short.

10 days ago

AndyNemmity

A choice between a totalitarian system or starvation is not a choice.

2 days ago

sweeter

You could use this logic to posit that any government, group, system, nation state, militia, business, or otherwise isn't "evil" because you haven't gauged the thoughts, feelings, and actions of every single person who comprises that system. That's absurd.

If using AI and other technology to uphold a surveillance state, wage war, do imperialism, and commit genocide... isn't evil, then I don't know if you can call anything evil.

And the entire point of taxes is that we all collectively decide that we all would be better off if we pooled our labor and resources together so that we can have things like a basic education, healthcare, roads, police, bridges that don't collapse etc.. Politicians and corporations have directly broken and abused this social contract in a multitude of ways, one of those ways is using loopholes to not pay taxes at the same rate as everyone else by a large margin... another way is paying off politicians and lobbying so that those loopholes never get closed, and in fact, the opposite happens. So yes, taxing Google and other mega-corporations is a single, easily identifiable, action that can be directly taken to remedy this problem. Though, there is no way around solving the core issue at hand, but people have to be able to identify that issue foremost.

12 days ago

CrillRaver

By definition we can never know for sure, but I believe the number of people who stay silent is many times bigger than those who voice their opinion. They've learned it is unproductive (as you say) or worst case, you're told you've got it all wrong technically speaking.

Complaining is not unproductive, it signals to others they are not alone in their frustrations. Imagine that nobody ever responds or airs their frustrations; would you feel comfortable saying something about it? Maybe you're the only one, better keep quiet then. Or how do you find people who share your frustrations with whom you could organise some kind of pushback?

If I was "this government", I would love for people to shut up and just do their job, pay taxes and buy products (you don't have to buy them from a megacorp, just spend it, and oh yeah, good luck finding places to buy products from non-megacorps).

12 days ago

ericmay

My point was that complaining isn't enough and in my experience most people just complain but don't even take the smallest action in line with their views because it inconveniences them. Instead they lull themselves to sleep that something was done because they complained about it, and there's no need to adjust anything in their lives because they "did all they can do".

Instead of taking action they complain, set up an abstract boogeyman to take down, and then nobody can actually take action to make the world better (based on their point of view) because there's nothing anyone can do about Google the evil corporation because it's just some legal fiction. Bonus points for moralizing on the Internet and getting likes to feel even better about not doing anything.

But you can do something. If someone thinks Google is evil they can stop using Gmail or other Google products and services, or even just reduce their usage - maybe you can switch email providers but you only have one good map option. Ok at least you did a little more than you did previously.

12 days ago

8note

corporations, separate from the people in them, are set up in a way that incentivizes bad behaviour, based on which stakeholders are considered and when, along with what mechanisms result in rewards and which ones get you kicked out.

the architecture of the system is imperfect and creates bad results for people.

12 days ago

BrenBarn

I don't really agree with some of your assumptions. At many companies, many of the people also are evil. Many people who hold shares and public office are also evil.

I don't think it's necessary to conclude that because a company is evil then everyone who works at the company is evil. But it's sort of like the evilness of the company is a weighted function of the evilness of the people who control it. Someone with a small role may be relatively good while the company overall can still be evil. Someone who merely uses the company's products is even more removed from the company's own level of evil. If the company is evil it usually means there is some relatively small group of people in control of it making evil decisions.

Now, I'm using phraseology here like "is evil" as a shorthand for "takes actions that are evil". The overall level of evilness or goodness of a person is an aggregate of their actions. So a person who works for an evil company or buys an evil company's products "is evil", but only insofar as they do so. I don't think this is even particularly controversial, except insofar as people may prefer alternative terms like "immoral" or "unethical" rather than "evil". It's clear people disagree about which acts or companies are evil, but I think relatively few people view all association with all companies totally neutrally.

I do agree with you that taking personal responsibility is a good step. And, I mean, I think people do that too. All kinds of people avoid buying from certain companies, or buy SRI funds or whatever, for various ethically-based reasons.

However, I don't entirely agree with the view that says it's useless or hypocritical to claim that reform is necessary unless you are going to "do something". Yes, on some level we need to "do something", but saying that something needs to be done is itself doing something. I think the idea that change has to be preceded or built from "saintly" grassroots actions is a pernicious myth that demotivates people from seeking large-scale change. My slogan for this is "Big problems require big solutions".

This means that it's unhelpful to say that, e.g., everyone who wants regulation of activities that Company X does has to first purge themselves of all association with Company X. In many cases a system arises which makes such purges difficult or impossible. As an extreme, if someone lives in an area with few places to get food, they may be forced to patronize a grocery store even if they know that company is evil. Part of "big solutions" means replacing the bad ways of doing things with new things, rather than saying that we first have to get rid of the bad things to get some kind of clean slate before we can build new good things.

12 days ago

ajdude

> while megacorps aren't fundamentally "evil" (for some definitions of evil)

A couple years ago, my state banned single-use plastic bags. The very moment they did, all of my local Walmarts switched to heavier plastic bags that technically weren't single use. They still gave them away for free just as they did with the first ones. (These were good-quality bags, and I was frustrated that Walmart didn't just give them away by default.) Eventually my state banned those too, and like clockwork, Walmart was giving away paper bags (decent-quality ones, too). Though I still really liked the thicker plastic ones since I could use them for other things.

This made me realize that no corporation would do anything slightly better for the environment unless forced. I think this is the case for anything a corporation would do, including evil things. I think they just follow the money, no ethics, and it's up to the government to provide those ethics.

11 days ago

dylan604

What is Googs going to do, leave money on the table?

And if Googs doesn't do it, someone else will, so it might as well be them that makes money for their shareholders. Technically, couldn't activist shareholders come together and claim that by not going after this market the leadership should be replaced with those who would? After all, share price is the only metric that matters.

12 days ago

r00fus

So "if I don't steal it someone else will"? I'd rate that as evil.

12 days ago

1024core

Maybe it's more like "If I don't do this job, someone else will"...

12 days ago

perryizgr8

Then let them do it. You don't do what you consider immoral.

11 days ago

moralestapia

This is the big issue that came along when stable households (mom/dad taking care of you) were replaced by fentanyl and TikTok.

Moral character is something that has to be taught, it doesn't just come out on its own.

If your parents don't do it properly, you'll be just another cog in the soulless machine to which human life is of no value.

12 days ago

ErigmolCt

The real issue is that corporate incentives don't prioritize morality

12 days ago

moralestapia

Corporations are run by people, who are not amoral.

12 days ago

ErigmolCt

People may not be amoral, but corporate structures often incentivize behavior that prioritizes profit over morality

10 days ago

greenchair

bingo. taught and reinforced with consequences.

12 days ago

dylan604

If you want to take it so far off topic, then sure, go ahead with it.

12 days ago

elliotto

I think the poster is applying your statement about leaving money on the table. A structural requirement not to leave money on the table is a Moloch-style result that leads to the system deteriorating into just stealing as much as possible.

12 days ago

bjackman

This is what the parent comment _means_ IMO.

What you are saying is: optimising for commercial success is incompatible with morality. The conclusion is that publicly traded megacorps must inevitably trend towards amorality.

So yes, they aren't "evil" but I think amorality is the closest thing to "evil" that actually exists in the real world.

12 days ago

stevage

I don't buy that argument. There are things Google does better than competitors, so them doing an evil thing means they are doing it better. Also, they could be spending those resources on something less evil.

12 days ago

dylan604

Remember when the other AI companies wanted ClosedAI to stop "for humanity's sake", when all it meant was for them to catch up? None of these companies are "good". They all know that as soon as one company does it, they all must follow, so why not lead?

12 days ago

olyjohn

Ah yeah. Everybody else is doing it, so it must be okay to do. Fuck everything about this.

12 days ago

dzhiurgis

> Google does better than competitors

You need to try another search engine. Years ago...

12 days ago

gizmondo

Activist shareholders can claim whatever they want, at the end of the day it's just noise, founders control the company completely.

12 days ago

AndyNemmity

They are fundamentally totalitarian. Orders come from the top, and are taken from below.

Seems fundamentally evil.

11 days ago

DecoySalamander

By this definition, mom telling you to clean your room is also a form of totalitarianism, and yet you somehow find the strength to manage.

11 days ago

ninetyninenine

A megacorp is made up of people. So it's people who are fundamentally evil.

The main thing here I think is anonymity through numbers and complexity. You and thousands of others just want to see the numbers go up. And that desire is what ultimately influences decisions like this.

If google stock dropped because of this then google wouldn't do it. But it is the actions of humans in aggregate that keeps it up.

Megacorporations are scapegoats when in actuality they are just a set of democratic rules. The corporation is just a window into the true nature of humanity.

12 days ago

anon373839

You're half right. Corporations are just made of people. But, they're more than the sum of their parts. The numbers and complexity do more than provide anonymity: they provide a mechanism where individuals can work in concert to accomplish bad things in the aggregate, without (necessarily) requiring any particular individual to violate their conscience. It just happens through the power of incentives and specialization. If you're in upper management, the complexity also makes it easier to turn a blind eye to what is happening down below.

12 days ago

Barrin92

>A megacorp is made up of people. So it's people who are fundamentally evil.

That is to make a mistake of composition. An entity can have properties that none of its parts have. A round tower made out of bricks is round, but none of the bricks are round. You might be evil; your cells aren't evil.

It's often the case that institutions are out of alignment with their members. It can even be the case that all participants of an organization are evil, but the system still functions well (usually one of the arguments for markets, which is one such system). When creating an organization, that is effectively the most basic task: how to structure it such that even when its individual members are up to no good, the functioning of the organization is improved.

12 days ago

ninetyninenine

But people are aware companies are evil. Why don't they sell the stock? Why do people still buy the stock?

Obviously because they don't give a shit.

12 days ago

energy123

Not a useful framing in my view. People follow private incentives. Private incentives are by default not perfectly aligned with external stakeholders. That leads to "evil" behavior. But it's not the people or the org, it's the incentives. You can substitute other people into the same system and get the same outcome.

12 days ago

ninetyninenine

Not useful, but ultimately true.

People have the incentive to not do evil and to do evil for money. When you abstract the evil away into 1 vote out of thousands then you abstract responsibility and everyone ends up in aggregate doing an inconsequential evil and it adds up to a big evil.

The tragedy of the commons.

12 days ago

mystified5016

> they are fundamentally unconcerned with goodness or morality

No, no. Call a spade a spade. This behavior and attitude is evil. Corporations under modern American capitalism must be evil. That's how capitalism works.

You succeed in capitalism not by building a better mousetrap, but by destroying anyone who builds a better mousetrap than you. You litigate, acquire, bribe, and rewrite legislation to ensure yours is the best and only mousetrap available to purchase, with a token 'competitor' kept on life support so you can plausibly deny anticompetitive practices.

If you're a good company trying to do good things, you simply can't compete. The market just does not value what is good, just, or beneficial. The market only wants the number to go up, and to go up right now at any cost. Amazon will start pumping out direct clones of your product for pennies. What are you gonna do, sue Amazon? Best of luck.

12 days ago

roca

"The market" is just a lot of people making decisions about what to do with their money. If you want the market to behave differently, be the change you want to see, and teach others to do the same.

12 days ago

nirav72

They’re not evil, they’re amoral and are designed to maximize profits for their investors. Evil is subjective.

12 days ago

kelipso

Paperclip-maximizing robot making the excuse that it's just maximizing paperclips, that's what it was designed to do; there's even a statute saying that robots must do only what they were designed to do, so it's not evil, just amoral.

Weird thing is for corporations, it's humans running the whole thing.

12 days ago

moralestapia

>Evil is subjective.

This is a meme that needs to die, for 99% of cases out there the line between good/bad is very clear cut.

Dumb nihilists keep the world from moving forward with regards to human rights and lawful behavior.

12 days ago

pseudalopex

> They’re not evil, they’re amoral

Most people consider neglect evil in my experience.

12 days ago

josefx

> they’re amoral and are designed to maximize profits

Isn't that a contradiction? Morality is fundamentally a sense of "right and wrong". If they reward anything that maximizes short term profit and punish anything that works against it then it appears to me that they have a simple, but clearly defined sense of morality centered around profit.

12 days ago

StanislavPetrov

Brings to mind Hannah Arendt's phrase "the banality of evil" (from her book Eichmann in Jerusalem).

Seems it would be informative to many of the people posting on this thread.

12 days ago

rmrf100

This is evil.

12 days ago

mainecoder

>Evil is subjective.

Everything is subjective - moralist bro

It's all priced in - Wall Street bro

Learn to code - tech bro

12 days ago

newswasboring

> they are fundamentally unconcerned with goodness or morality,

I would argue that is fundamentally evil, because evil pays the best. It's like drunk driving: on an empty road it can only harm you, but we live in a society full of other people.

12 days ago

layer8

“Drop” has really become ambiguous in headlines.

12 days ago

ErigmolCt

Ethical pledges from corporations, especially ones as large as Google, are PR tools first and foremost. They last only as long as they align with strategic and financial interests

12 days ago

abeppu

I guess a question becomes, how does dropping these self-imposed limitations work as a marketing exercise? Probably most of their customers or prospective customers won't care, but will a cheery multi-colored new product land a little differently? If Northrop Grumman made a smart home hub, you might be reluctant to put it in your living room.

12 days ago

HPMOR

They are dropping these pledges to avoid securities lawsuits. "Everything is securities fraud," and presumably if they have a stated corporate pledge to do something and knowingly violate it, a shareholder suit over any drop in the stock price could use this as grounds.

12 days ago

a_shovel

Being a defense contractor isn't a problem that a little corporate rearrangement can't fix. Put the consumer division under a new subsidiary with a friendly name and you're golden. Even among the small percentage who know the link, it's likely nobody will really care. For certain markets ("tacticool" gear, consumer firearms) being a defense contractor is even a bonus.

12 days ago

lenerdenator

Marketing doesn't matter to oligarchs.

12 days ago

deadbabe

A megacorp is amoral. They have no concern over an individual any more than a human has concern for an ant, because individuals simply don’t register to them. The ant may regard the human as pure evil for the destruction it rains upon its colony, but the ants are not even a thought in the human’s mind most of the time.

12 days ago

xeonmc

Don't anthropomorphize the lawnmower.

12 days ago

portaouflop

Megacorps are a form of slow AI in itself — totally alien to human minds and essentially uncontrollable

12 days ago

quesera

"We won't use your dollars and efforts for bad and destructive activities, until we accumulate enough of your dollars and efforts that we no longer care about your opinions".

12 days ago

chefandy

Every corporate pronouncement of virtue must be appended with “until we get paid enough, or the right people ask.”

7 days ago

42772827

> they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise

This is flatly untrue. Corporations are made up of humans who make decisions. They are indeed concerned with goodness and/or morality. Saying otherwise lets them off the hook for the explicit decisions they make every day about how to operate their company. It's one reason why there are shareholder meetings, proxy votes, activist investors, Certified B-Corporations, etc.

12 days ago

dleink

Google is a special case because they specifically removed the "Don't Be Evil" clause, therefore, I can only assume they are in fact fundamentally "evil"

11 days ago

spacemanspiff01

I mark when they changed their motto as the turning point.

12 days ago

Aeolun

I heard megacorps described a while ago as “a sentient pile of money”, which seems pretty much correct. Money has no morals.

11 days ago

kqr

Not evil, perhaps, but run by Moloch[1] -- which is possibly just as bad. Their incentives are set up to throw virtually all human values under the bus because even if they don't, they will be out-marginal-profited by someone that does.

[1]: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

12 days ago

coliveira

Well, the US gov blew away its opportunity to break up Google and other mega-corps and restore any sense of decency. Google just jumped on the Trump bandwagon, which means the monopoly lawsuit will go nowhere, and in exchange Google will do Trump's bidding.

11 days ago

lenerdenator

The market solves all problems.

... or at least that's what these people have to be telling themselves at all times.

12 days ago

smallmancontrov

The market's objectives are wealth-weighted.

This is a very important point to remember when assessing ideas like "Is it good to build swarms of murderbots to mow down rioting peasants angry over having expenses but no jobs?" Most people might answer "no," but if the people with money answer "yes," that becomes the market's objective. Then the incentives diffuse through the economy and you don't just get the murderbots, you also get the news stations explaining how the violent peasants brought this on themselves and the politicians making murderbots tax deductible and so on.

12 days ago

amarcheschi

Anduril already asked this question with a strong "fuck yes"

Edit: answered, not asked

12 days ago

johnnyanmac

It's partially the market's fault. If they were demonized for this, there'd at least be a veneer of trying to look moral. Instead they can simply go full mask-off. That's why you shouldn't tolerate the intolerant.

12 days ago

kelseyfrog

I have full faith that the market[1] will direct the trolley onto the morally optimal track. Its invisible hand will guide mine when I decide whether or not to pull the lever. Either way, I can be sure that the result is maximally beneficial to the participants, myself included.

1. https://drakelawreview.org/wp-content/uploads/2015/01/lrdisc...

12 days ago

mystified5016

The magic market fairy godmother has decided that TVs with built in ads and spyware are good for you. The market fairy thinks this is so good for you that there are no longer any alternatives to a smart TV besides "no tv"

The market fairy has also decided that medication commercials on TV are good for you. That your car should report your location, speed, and driving habits to your insurer, car manufacturer, and their 983,764 partners at all times.

Maximally beneficial indeed.

12 days ago

Valakas_

Being unconcerned with goodness and morality is literally the definition of evil. Megacorps are sociopathic and evil by design. The only thing that matters is shareholder value, not ethics or morals. Morals and ethics only seem to have value if they result in increased value for the shareholder, which again is the only thing that these sociopathic entities are concerned with.

11 days ago

dkkergoog

[dead]

11 days ago

random3

This, but broader. Goodness and morality are subjective and, more importantly, relative measures, making them useless in many situations (such as this one).

While knowing this seems useless, it's actually the missing intrinsic compass and the cause of a lot of bad and stupid behavior (by the definition that something is stupid if chosen knowing it will cause negative consequences for the doer).

Everything should primarily be measured against its primary goal. For "for-profit" companies that's obvious in their name and definition.

That nothing should be assumed beyond what's stated is the premise of any contract, whether commercial, public, or personal (like friendship), and a basic tool for debate and decision making.

12 days ago

A4ET8a8uTh0_v2

I want to be upset over this in an exasperated, oddly naive "why can't we all get along?" frame of mind. I want to, because I know how I would like the world to look, but as a species we, including myself, never fail to disappoint when it comes to nearly guaranteed self-destruction.

I want to get upset over it, but I sadly recognize the reality of why this is not surprising to anyone. We actually have competitors in that space who will do that and more. We have already seen some of the more horrifying developments in that area... and, when you think about it, those are the things that were allowed to be shown publicly. All the fun stuff is happening behind closed doors, away from social media.

12 days ago

mkolodny

A vague “stuff is happening behind closed doors” isn’t enough of a reason to build AI weapons. If you shared a specific weapon that could only be countered with AI weapons, that might make me feel differently. But right now I can’t imagine a reason we’d need or want robots to decide who to kill.

When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of control.

The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.

12 days ago

nicr_22

Have a look at what explosive drones are doing in the fight for Ukraine.

Now tell me how you counter a thousand small, EMP-hardened autonomous drones intent on delivering an explosive payload to one target without AI of some kind?

12 days ago

scottyah

How about 30k drones come from a shipping vessel in the port of Los Angeles that start shooting at random people? To insert a human into the loop (somehow rapidly wake up, move, log hundreds of people in to make the kill/nokill decision per target) would be accepting way more casualties. What if some of the 30k drones were manned? The timeframes of battles are drastically reduced with the latest technology to where humans just can't keep up.

I guess there's a lot missing in semantics, is the AI specifically for targeting or is a drone that can adapt to changes in wind speed using AI considered an AI weapon?

At the end of the day though, the biggest use of AI in defense will always be information gathering and processing.

12 days ago

bamboozled

> How about 30k drones come from a shipping vessel in the port of Los Angeles that start shooting at random people?

It's going to happen.

12 days ago

doublerabbit

I better get started on building those Metal Gear Rays.

11 days ago

bamboozled

I guess it won't be long until there are drones which can take out drones autonomously. Somewhat neutralizing the threat...providing you have enough capable drones yourself :)

11 days ago

mikrotikker

Check out Anduril's Anvil.

11 days ago

siltcakes

I agree. I don't think there's really a case for the US developing any offensive weapons. Geographically, economically and politically, we are not under any sort of credible threat. Maybe AI based missile defense or something, but we already have a completely unjustified arsenal of offensive weapons and a history of using them amorally.

12 days ago

scottyah

Without going too far into it, if we laid down all offensive weapons the cartels in Mexico would be inside US borders and killing people within a day.

12 days ago

siltcakes

You think the cartels aren't attacking us because we have missiles that can hit Mexico? I don't agree. Somewhat tangentially, the cartels only exist because the US made recreational drugs illegal.

12 days ago

scottyah

Not sure where the missiles came from, you said all offensive weapons so in my mind I was picturing basic firearms. Drug trade might be their most profitable business but I think you're missing a whole lot of cultural context by saying the US's policy on drugs is their sole reason for existing. Plenty of cartels deal in sex trafficking, kidnapping, extortion, and even mining and logging today.

12 days ago

catlikesshrimp

"Geographically, economically and politically, we are not under any sort of credible threat. "

The US is politically and economically declining, already. And its area of influence has been weakening since, the 90's?

It would be bad strategy to not do anything until you feel hopelessly threatened.

12 days ago

siltcakes

I don't think we would ever be justified in going on the offensive nor do I think that makes us safer in any way.

12 days ago

computerthings

> AI weapons are obviously dangerous, and could easily get out of control.

The real danger is when they can't. When they, without hesitation or remorse, kill one or millions of people with maximum efficiency, or "just" exist with that capability, to threaten them with such a fate. Unlike nuclear weapons, in case of a stalemate between superpowers they can also be turned inwards.

Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need to have the ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.

The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automated weapons, but literally can't/won't kill humans, because that would remain something only human soldiers do? Quite the elephant on the couch IMO.

12 days ago

dgfitz

Could you imagine how the entire world would look if they took truth serum for an entire year, how different the world might be?

Lies run the planet, and it stinks.

12 days ago

scarface_74

People try to cope and say others are guided by lies. In the US, people knew exactly what they were getting, and I'm sure the same is true in other "democracies".

12 days ago

portaouflop

Put LSD in the drinking water

12 days ago

LoganDark

This is actually a plot point of Unsong https://unsongbook.com

12 days ago

cat_plus_plus

We would all be covered in bruises from getting slapped all day long.

12 days ago

TheSpiceIsLife

If you can’t cope with the lies, what makes you think you’d cope with the truth? Which I guarantee you is orders of magnitude more horrifying.

12 days ago

mitthrowaway2

What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it.

—Eugene Gendlin

12 days ago

dgfitz

If who can’t cope with what lies?

Yes, the truth would also stink. I’m sure it’s also horrifying.

12 days ago

TheSpiceIsLife

The general you, the reader.

11 days ago

[deleted]
12 days ago

WOTERMEON

I mean, you can see that at any company of any size. I think it’s human nature.

12 days ago

Sperfunctor

[flagged]

12 days ago

A4ET8a8uTh0_v2

That.. is a new one. I thought I am fairly aware of various forms of coded language. Care to elaborate?

12 days ago

dgfitz

Fwiw I’m way too dumb to speak in coded language.

12 days ago

10u152

[flagged]

12 days ago

gessha

“Grownups never understand anything by themselves, and it is tiresome for children to be always and forever explaining things to them” - Antoine de Saint-Exupery, The Little Prince

12 days ago

ziddoap

Your point could be made, probably even stronger than it is currently, by omitting the insult at the start.

12 days ago

justonenote

[flagged]

12 days ago

ziddoap

>than a chatgpt style

Literally just remove the first 4 words and keep the rest of the comment the same, and it's a better comment. No idea what chatgpt has to do with it.

12 days ago

justonenote

That would be removing information and strictly worse than including it.

Communication is about conveying information, and sometimes a terse, short, aggressive style is the most effective way. It activates neurons in a way a paragraph of polite argumentation doesn't.

12 days ago

Hello71

the contention of your respondents and downvoters is that regardless of your intention, the extra information actually communicated is "i'm an asshole".

12 days ago

justonenote

Fine, that's still extra information.

More accurately, in the context of the comment, it's "I'm gonna be an asshole to you because I think you don't have the life experience I do," which is at least some kind of signal.

I wasn't the original responder btw.

12 days ago

cortesoft

“More effective” at what? No one is ever going to be convinced by an argument that begins with an insult. So what do you mean by it will be more effective?

12 days ago

justonenote

Do you honestly think an insult never brought about a change in a person? You never think a carefully landed and accurate insult made someone reconsider their position?

Weird, because in my experience, that has happened to every single person I know and myself. Whether it's at the start or end of a comment is not really the point.

12 days ago

mrbungie

It may, depending on context, but that's not the point; it is in fact widely recognized as an ad hominem argument and fallacious by definition.

Most emotionally mature people would stop arguing after something like that.

12 days ago

dgfitz

Welp, in this specific instance, your insults are a microcosm of the election results.

Stinks, huh?

12 days ago

justonenote

Things are very black and white these days, no room for shades.

12 days ago

lioeters

Similarly your point would have communicated better without the unnecessary and adolescent final sentence.

12 days ago

justonenote

It was for effect.

Maybe you'd prefer if we were all maximally polite drones, but that's not how humans are, going back to GP's point, and I don't think it's a state that anyone truly wants either.

12 days ago

Frederation

[flagged]

12 days ago

justonenote

No comment.

12 days ago

mv4

I don't think they meant the "truth" truth but people saying what they really think and being open about their motivations.

12 days ago

TheSpiceIsLife

Sounds horrible.

Deception is bad enough, knowing people’s true motivations and opinions surely would be worse.

What truly motivates other people is largely a mystery, and what motivates oneself is often wildly mysterious to oneself.

12 days ago

leptons

The only people who don't think truth matters are those who would profit from lies.

12 days ago

explodes

Insults are not part of the community guidelines

12 days ago

iwontberude

Being childlike is a blessing and a compliment in my book.

12 days ago

iwontberude

moralistic relativism creates cover for egocentrism to destroy us

12 days ago

turbojet1321

Yes but unfortunately that doesn't make it false.

12 days ago

yubblegum

[flagged]

12 days ago

dgfitz

Ah nuts, I’m not trying to project anything at all. Sincerely.

12 days ago

Swannie

Yes, have you watched The Wheel of Time? Better to read the books... the characters bound to tell the truth are experts in double meanings.

Successful politicians and sociopaths are experts in double meanings.

"I will not drop bombs on Acmeland." Instead, I will send missiles.

"At this point in time, we do not intend to end the tariffs." The intent will change when conditions change, which is forecast next week.

"We are not in negotiations to acquire AI Co for $1B." We are negotiating for $0.9B.

"Our results show an improvement for a majority of recipients." 51% saw an improvement of 1%, 49% saw a decline of 5%...

12 days ago

scottyah

Humans' short context windows, with too many areas to research and stay up to date on, are why I don't believe any version of democracy I've seen can succeed, and the one real positive I see in some kind of ASI government/policing (once we solve the whole universal judgement system issue). I'd love a world where you would be assisted through tax season, ill-intentioned drivers were properly incentivized to not risk others' lives, and you could at least be made aware before breaking laws.

Eliminating the need to lie/misguide people to sway them would be such a crazy world.

12 days ago

dgfitz

Wow.

Yes I read the whole series. It was a fucking marathon.

I can’t quite tie your point into the series directly, other than to agree that elected officials are, almost by definition, professional liars.

(Tugs on braid)

12 days ago

mcmcmc

Not the GP, but I think what they’re getting at is that Aes Sedai can deceive without saying anything untruthful. So a hypothetical truth serum wouldn’t necessarily guarantee honesty.

12 days ago

matthest

The path we're on was inevitable the second man discovered fire.

No matter which way you look at it, we live on a planet where resources are scarce. Which means there will be competition. Which means there will be innovation in weaponry.

That said, we've had nukes for decades, and have collectively decided to not use them for decades. So there is some room for optimism.

12 days ago

bbqfog

You can and should be upset. No reason to become complacent, that's a path to accelerated destruction.

12 days ago

getlawgdon

Amen.

12 days ago

octopoc

In WWII neither side used poison gas. It doesn’t have to be this way.

12 days ago

dataflow

You should read this, it might change your mind: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...

12 days ago

r053bud

Excuse me, what?

12 days ago

vondur

It means all nations can agree to not unleash AI based weapons on the world. Sadly I don't see this happening.

12 days ago

[deleted]
12 days ago

eterm

I think we can assume good faith and that the grandparent merely forgot to add "in combat" to that statement, rather than deliberately trying to downplay the use of Zyklon B.

12 days ago

dahdum

It took the use of poison gas to get countries on board, and some will still use it. Just more carefully.

Would China, Russia, or Iran agree to such a preemptive AI weapons ban? Doubtful, it’s their chance to close the gap. I’m onboard if so, but I don’t see anything happening on that front until well after they start dominating the landscape.

12 days ago

int_19h

Russia would most definitely not agree to it given that Ukraine is already deploying autonomous drones against it.

12 days ago

SanjayMehta

Not on the battlefield.

12 days ago

asdfman123

What we should have ideally done as humans is find a way to not allow AI combat.

Now that's off the table, I think America should have AI weapons because everyone else will be developing them as quickly as possible.

12 days ago

pdfernhout

On that ideal and whether it is still reachable someday, see my 2010 essay: "Recognizing irony is key to transcending militarism" https://pdfernhout.net/recognizing-irony-is-a-key-to-transce...

From there:

-----

Military robots like drones are ironic because they are created essentially to force humans to work like robots in an industrialized social order. Why not just create industrial robots to do the work instead?

Nuclear weapons are ironic because they are about using space age systems to fight over oil and land. Why not just use advanced materials as found in nuclear missiles to make renewable energy sources (like windmills or solar panels) to replace oil, or why not use rocketry to move into space by building space habitats for more land?

Biological weapons like genetically-engineered plagues are ironic because they are about using advanced life-altering biotechnology to fight over which old-fashioned humans get to occupy the planet. Why not just use advanced biotech to let people pick their skin color, or to create living arkologies and agricultural abundance for everyone everywhere?

These militaristic socio-economic ironies would be hilarious if they were not so deadly serious. ...

Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. ...

There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...

The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.

We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovin's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still would keep working"). ...

Still, we must accept that there is nothing wrong with wanting some security. The issue is how we go about it in a non-ironic way that works for everyone. ...

-----

Here is something I posted to the Project Virgle mailing list in April 2008 that in part touches on the issue of Google's identity as a scarcity vs. post-scarcity organization: "A Rant On Financial Obesity and an Ironic Disclosure" https://pdfernhout.net/a-rant-on-financial-obesity-and-Proje... "Look at Project Virgle and "An Open Source Planet" ... Even just in jest some of the most financially obese people on the planet (who have built their company with thousands of servers all running GNU/Linux free software) apparently could not see any other possibility but seriously becoming even more financially obese off the free work of others on another planet (as well as saddling others with financial obesity too :-). And that jest came almost half a century after the "Triple Revolution" letter of 1964 about the growing disconnect between effort and productivity (or work and financial fitness)...Even not having completed their PhDs, the top Google-ites may well take many more decades to shake off that ideological discipline. I know it took me decades (and I am still only part way there. :-) As with my mother, no doubt Googlers have lived through periods of scarcity of money relative to their needs to survive or be independent scholars or effective agents of change. Is it any wonder they probably think being financially obese is a good thing, not an indication of either personal or societal pathology? :-( ..."

Last April, inspired by some activities a friend was doing, I asked an LLM AI ( chatpdf ) to write a song about my sig, using the prompt 'Please make a song about "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."'. Then that friend made the results into an AI-generated song: "Challenge to Abundance" https://suno.com/song/d3d8c296-c2c4-46c6-80fb-ca9882c5e00a

"(Verse 1) In the 21st century, we face a paradox so clear, Technologies of abundance, yet scarcity we fear, Irony in our hands, what will we choose to see, A world of endless possibilities or stuck in scarcity?

(Chorus) The biggest challenge we face, it's plain to see, Embracing abundance or stuck in scarcity, Let's break free from old ways, embrace what could be, The irony of our times, let's set our minds free. ..."

I hope Googlers and others eventually get the perspective shift that comes with recognizing the irony of what they and many others are doing with weaponizing and otherwise competetizing AI...

Also on that larger theme by Alfie Kohn: "No Contest: The Case Against Competition" https://www.alfiekohn.org/contest/ "No Contest, which has been stirring up controversy since its publication in 1986, stands as the definitive critique of competition. Drawing from hundreds of studies, Alfie Kohn eloquently argues that our struggle to defeat each other — at work, at school, at play, and at home — turns all of us into losers. Contrary to the myths with which we have been raised, Kohn shows that competition is not an inevitable part of “human nature.” It does not motivate us to do our best (in fact, the reason our workplaces and schools are in trouble is that they value competitiveness instead of excellence.) Rather than building character, competition sabotages self-esteem and ruins relationships. It even warps recreation by turning the playing field into a battlefield. No Contest makes a powerful case that “healthy competition” is a contradiction in terms. Because any win/lose arrangement is undesirable, we will have to restructure our institutions for the benefit of ourselves, our children, and our society. ..."

11 days ago

bodegajed

"The philosophers have only interpreted the world, in various ways. The point, however, is to change it." - Karl Marx

12 days ago

lmm

And how did that work out for him? If he'd stuck to interpreting the world, it's hard to say the world wouldn't have been much better off.

12 days ago

yodsanklai

> We actually have competitors in that space, who will do that and more

So what? Can't Google find other sources of revenue than building weapons?

12 days ago

pixl97

Why would it turn down billions in government contracts unless otherwise punished by its shareholders?

12 days ago

DandyDev

Because it's better for humanity? Because it's morally the right choice?

12 days ago

II2II

Most of the early research into computers was funded for military applications. There is a reason why the silicon valley became a hub for technological development.

12 days ago

anothercoup

[flagged]

12 days ago

fwip

Nukes haven't yet wiped us out. They still may.

12 days ago

jknoepfler

Survivorship Bias: the Board Game that Ends Abruptly.

12 days ago

mr_00ff00

Technically the US has never dropped nukes, those were atomic bombs.

Second, don’t understand how the atomic bomb argument makes sense. Germany was developing them and would have used them if it got there first.

Are you suggesting the US really is truly the only nation that would ever have used atomic weapons? That if Japan made it first they would have spared China or the US?

12 days ago

saagarjha

Care to explain how an atomic bomb is not a nuke?

12 days ago

FeteCommuniste

Atom bombs are definitely nukes. Maybe the GP was thinking of thermonuclear (fission-fusion) weapons.

12 days ago

aeonik

Depends on the circles you run in, but I've heard people distinguish nuclear bombs from atomic bombs, kind of like how atomic clocks are distinguished from nuclear clocks.

I don't quite understand it because atomic clocks deal with the electrons, while nuclear clocks, nuclear bombs, and thermo-nukes are all dealing with the nucleus of the atom.

I've always preferred fission vs fusion bomb, or nuke vs. thermo nuke.

11 days ago

anothercoup

[flagged]

12 days ago

karaterobot

Is this more or less ethical than OpenAI getting a DoD contract to deploy models on the battlefield less than a year after saying that would never happen, with the excuse being "well, we only meant certain kinds of warfare or military purposes, obviously"? I guess my question is: isn't there something more honest about an open heel-turn, like Google has made, compared to one where you maintain the fiction that you're still trying to do the right thing?

12 days ago

CobrastanJorji

I think it's unfair to bring up OpenAI's commitment to its own principles as any sort of bar of success for anyone else. That's a bit like saying "Yes, this does look like they're yielding to foreign tyrants, but is this more or less ethical than Vidkun Quisling's tenure as head of Norway?"

12 days ago

leafmeal

It's relevant to compare though because Google has done the same thing now.

12 days ago

j2kun

It's... Unfair to compare two software companies? Because of Norway?

12 days ago

callc

At least Google employees will sign petitions and do things that follow a moral code.

OpenAI is sneaky, slimy, and headed by a psycho narcissist. Makes Pichai look like a saint.

Ethically, it’s the same. But if someone was pointing a gun at me I’d rather have someone with some empathy behind the trigger rather than the personification of a company that bleeds high level execs and… insert many problems here

12 days ago

danans

> At least Google employees will sign petitions and do things that follow a moral code.

It hardly matters what employees think anymore when the executives are weather-vanes who point in the direction of wealth and power over all else (just like the executives at their competitors).

In case you missed it, a few days back Google asked all employees who don't believe in their "mission" to voluntarily resign.

12 days ago

CobrastanJorji

That's not at all what happened. One of Google's division offered a "voluntary exit" in lieu of or in addition to an upcoming layoff, and the email announcing it suggested that it could be a good option for some folks, for example people struggling or for folks who didn't like Google's direction.

That is not the same thing as asking everyone who doesn't believe in the mission to please resign.

12 days ago

danans

> for folks who didn't like Google's direction

Which rhymes pretty well with not believing in their mission. They are telling people to leave instead of trying to influence the direction from the inside.

12 days ago

LexGray

Now that their direction has done a 180 it is pretty much telling everyone with seniority to just quit.

12 days ago

causal

One of my chief worries about LLMs for intelligence agencies is the ability to scale textual analysis. Previously there at least had to be an agent taking an interest in you; today an LLM could theoretically read all text you've ever touched and flag anything from legal violations to political sentiments.

12 days ago

Etheryte

This was already possible long before LLMs came along. I also doubt that an LLM is the best tool for this at scale; if you're talking about sifting through billions of messages, it gets too expensive very fast.

12 days ago

int_19h

It's only expensive if you throw all data directly at the largest models that you have. But the usual way to apply LMs to such large amounts of data is by staggering them: you have very small & fast classifiers operating first to weed out anything vaguely suspicious (and you train them to be aggressive - false positives are okay, false negatives are not). Things that get through get reviewed by a more advanced model. Repeat the loop as many times as needed for best throughput.

No, OP is right. We are truly at the dystopian point where a sufficiently rich government can track the loyalty of its citizens in real time by monitoring all electronic communications.

Also, "expensive" is relative. When you consider how much US has historically been willing to spend on such things...

12 days ago

causal

LLMs can do more than whatever we had before. Sentiment analysis and keyword searches only worked so well; LLMs understand meaning and intent. Cost and scale are not bottlenecks for long.

12 days ago

aucisson_masque

> if you're talking about sifting through billions of messages it gets too expensive very fast.

Who's paying for that though? The same dumbasses who get spied on. I don't see that as a reason why it wouldn't happen. Cash is unlimited.

12 days ago

beefnugs

But now instead of a human going "yes yes after a few hours of work i have chosen the target" they can go "we did more processing on who to best blow away, and it chose 100 more names than any human ever could! efficiency!"

8 days ago

randomNumber7

You could even use audio to text before that and tap all conversations in a country...

12 days ago

daft_pink

I feel like we’re just in that period of Downton Abbey where everyone is waiting for World War I to start. Everyone can feel that it’s coming and no one can do anything about it.

The reality is, in a war between the West and Russia/Iran/North Korea/China, or whomever we end up fighting, we’re going to do whatever we can so that Western civilization and its soldiers survive and win.

Ultimately Google is a Western company, and if war breaks out, not supporting our civilization/military is going to be wildly unpopular and turn them into a pariah; anything to the contrary was never going to happen.

12 days ago

iteratethis

The reason war may be coming is that the West is falling apart. The US is isolating itself and bullying its allies. Alternative powers wanting to do something expansionist have never had a better moment to do so.

There was no war forthcoming between an integrated West and any other power. War is coming because there is no longer a West.

12 days ago

daft_pink

The reasons are not the main focus here. The fact is that China's aggressive stance on Taiwan, Russia's invasion of Ukraine, and the alignment with China, North Korea, and Iran are leading to military buildups and alliances worldwide. Google, being a company founded and controlled by Americans, is likely to support the effort if a war occurs, rather than remain passive while their friends and family's children are dying.

Today people have differing views of nuclear weapons, but people who fought near Japan and survived believe the bomb saved their lives.

It's easy to pretend you don't have a side when there is peace, but in this environment Google's going to take a side.

12 days ago

NemoNobody

Right.

So... when Russian tanks start rolling toward Berlin and Chinese troops are marching along that nice new (old) road they finished fixing up, on their way to Europe — if that happens, which looks possible — you think there will be no West?

If the world is to be divided Europe is the lowest hanging and sweetest fruit.

I think there will still be a West even if there is a King in the US demanding fealty to part of it. We are the same as they are; it's ridiculous to pretend we aren't.

Ideology is one thing, survival of people and culture is another.

12 days ago

iteratethis

I mean there will be no West as we currently know it. The US is now a hostile state to former allies that collectively make up what we call the West. Thus the West disintegrates.

Each disintegrated part will of course defend itself in case of war, but other scenarios are possible. For example, the EU could actually move closer towards China for trade.

When the US threatens allies with economic disruption or even an invasion, don't be surprised if those countries make alternative plans and team up with alternative powers.

11 days ago

krapp

There will still be a West, it just won't include the United States.

11 days ago

iteratethis

Well, yes, that's what I was saying. But a West without the US is not a tiny change.

11 days ago

greenchair

Now that the adults are back in charge, we should be good for a few more years at least.

12 days ago

throwaway743

/s

12 days ago

rixed

> our civilization

There is no such thing as "our" or "their" civilization. We have only one. Maybe such a concept had some grounding a few centuries ago, but by now this idea that "we" are significantly different from "them" is a dangerous fantasy for most people.

11 days ago

QjdgatkH

A country that now threatens the annexation of Greenland and advocates for a complete resettlement of all Palestinians to Jordan and Egypt certainly needs weapons for crowd control.

These weapons could also come in handy domestically if people find out that both parties screw them all the time.

I wonder why people claim that China is a threat outside of economics. Has China tried to invade the US? Has Russia tried to invade the EU? The answer is no. The only current threats to the EU come from the orange man.

The same person who also revoked the INF treaty. The US now installs intermediate range nuclear missiles in Europe. Russia does so in Belarus.

So both great powers have convenient whipping boys to be nuked first, after which they will get second thoughts.

It is beyond ridiculous that both the US and Russia constantly claim that they are in danger, when all international crises in the last 40 years have been started by one of them.

12 days ago

reissbaker

"Russia hasn't tried to invade the EU" is quite weasel-word-y. They certainly have invaded countries in Europe, specifically Ukraine; the only reason they didn't invade countries in the European Union itself is that would trigger a war that they would face massive casualties from and inevitably lose, in part due to NATO alliances.

Military power is what has kept the EU safe, and countries without strong enough military power — such as Ukraine, which naively gave up its nuclear arsenal in the 90s in exchange for Russian promises to not invade — are repeatedly battered by the power-hungry.

12 days ago

4gotunameagain

While framing Ukraine as a European country is not weasel-word-y ?

Would you say that the chances / motives / possibilities to invade Ukraine are remotely comparable with any other European country?

And no, Turkey for example is not a European country.

11 days ago

_Tev

I have not seen anyone else claim that Ukraine is not a European country.

As for chance / motive to invade other European countries - for some reason Baltic states feel very threatened by Russia. Try to understand their reasons why.

8 days ago

LexGray

Isn’t China building a large modern sea fleet and increasing military pressure on many of our allies? I would not call that threat illusory. Also, their economic policies are very predatory: they support other countries in exchange for things which cannot be taken back. Why invade when you can just take what you need?

The orange man is completely ineffectual on both fronts. Will not spend the money on the military and too inept to make a deal that doesn’t cost in the long run.

12 days ago

chubot

It is interesting how these companies shift with the political winds

Just like Meta announced some changes around the time of inauguration, I'm sure Google management has noticed the AI announcements, and they don't want to be perceived in a certain way by the current administration

I think the truth is more in the middle (there is tons of disagreement within the company), but they naturally care about how they are perceived by those in power

12 days ago

matthest

I think in theory it's a good thing that companies shift with the political winds.

Companies technically have disproportionate power.

It's better that they shift according to the will of the people.

The alternative, that companies act according to their own will, could be much worse.

12 days ago

hsuduebc2

I would say it's natural. Their one and only incentive isn't, as they tell you, to "make the world a better place" or some similar awkward corpo charade, but to make a profit. That's the purpose for which companies are created, and they always follow it.

12 days ago

chubot

Sure, but I'd also say that the employee base has a line that is different than the government's, and that does matter for making profit. Creative and independent employees generally produce more than ones who are just following what the boss says

Actually, this reminds me of when Paul Graham came to Google, around 2005. Before that, I had read an essay or two, and thought he was kind of a blowhard.

But I actually thought he was a great speaker in person, and that lecture changed my opinion. He was talking about "Don't Be Evil", and he also said something very charming about how "Don't Be Evil" is conditional upon having the luxury to live up to that, which is true.

That applies to both companies and people:

- If Google wasn't a money-printing machine in 2005, then "don't be evil" would have been less appealing. And now in 2020, 2021, .... 2025, we can see that Google clearly thinks about its quarterly earnings in a way that it didn't in 2005, so "don't be evil" is too constraining, and was discarded.

- For individuals, we may not pay much attention to "don't be evil" early in our careers. But it is more appealing when you're more established, and have had a couple decades to reflect on what you did with your time!

12 days ago

nerdponx

I see it as the natural extension of the Chomsky "manufacturing consent" propaganda model. The people in key positions of power and authority know who their masters are, and everyone below them falls into line.

12 days ago

JBiserkov

I don't know what we expected after they removed their "Don't be evil" motto.

12 days ago

_bin_

is this evil, actually? a well-made autonomous system might go a long way towards improving accurate targeting and reducing civilian casualties.

if you're mad about the existence of weapons then please review the prisoners' dilemma again. we manage defection on smaller scales using governments but let's presuppose that major world powers will not accept the jurisdiction of some one-world government that can prevent defection by force. especially not the ones who are powerful and prosperous (like us) who would mostly lose under such an arrangement.

12 days ago

suraci

> is this evil, actually? a well-made autonomous system might go a long way towards improving accurate targeting and reducing civilian casualties.

I love this so much, it's so poetic

there's a famous poem by a Chinese liberal:

> If I am doomed to die in war in this life, then let me be a ghost under the precision-guided bombs of the United States. - Written on the 15th day of the Iraq War.

12 days ago

suraci

Hell no, I favorited your poetic words, then I found another comment of yours in there already:

> all that aside, I am an American and place the interests of my people ahead of those of foreigners. as such, I will support a world order led by the government most likely to maximize our welfare and very nearly any means needed to preserve that.

how wonderful

12 days ago

sn9

In practice it results in the slaughter of entire extended families and neighborhoods [0].

[0] https://www.972mag.com/lavender-ai-israeli-army-gaza/

11 days ago

trhway

Google's ex-CEO Schmidt is developing AI drones for Ukraine in Estonia. One would expect that when he needs a source of good foundational AI, Google may be among his suppliers of choice. Naturally, Ukraine is just a start. The addressable market is going to be huge, especially for the battle-proven stuff. And especially for what has been proven against Russian and, by proxy, Chinese tech.

There is also tremendous interest in remotely controlled and autonomous ground platforms, though only a few of them have been fielded on the actual battlefield so far. Google is the leader in the civilian ones, and it looks to me like there is a relatively easy path to transferring that tech into military systems.

12 days ago

enugu

Drone + AI weapons have horrible applications - remote assassinations to cause political chaos, a tyrant using it to selectively target those unfavourable to his rule without worrying about human checks, bigger nations exploiting smaller ones etc.

Lot of this thread has reduced the issue to whether it is more ethical for one country to deploy relative to others. In any case, a lot of countries will have this capability. A lot of AI models are already openly available. The required vision and reasoning models are being developed currently for other uses. Weaponization is not a distant prospect.

Given that, the tech community should think about how to tackle this collective major problem facing humanity. There was a shift, which happened to nuclear scientists, from when they were developing the bomb to the post World War situation when they started thinking about how to protect the planet from a MAD scenario.

Important questions - What would be good defense against these weapons? Is there a good way of monitoring whether a country is deploying this - so that this can be a basis for disarmament treaties? How do citizens audit government use of such weapons?

12 days ago

cute_boi

I don’t understand why people believe in corporate pledges. They’re just marketing gimmicks. It doesn’t take much effort to scrape pledges off a website.

12 days ago

krunck

Better to ask the question: Why do people WANT to believe in the mouth flapping of corporate PR drones?

12 days ago

tmnvdb

Good, this idea that all weapons are evil is an insane luxury belief.

12 days ago

siltcakes

Do you see nothing wrong with the same company that makes YouTube Kids making killer AI? I think creating weapons is often evil. I think companies that have consumer brands should never make weapons; at the very least it's whitewashing what's really going on. At worst, they can leverage their media properties for propaganda purposes, spy on your Gmail and Maps usage, and act as a vector for the most nefarious cyber terrorism imaginable.

12 days ago

greenavocado

The same company that brings you cute cartoons for kids might also develop technologies with military applications, but that doesn't make them inherently "evil." It just makes them a microcosm of humanity's duality: the same species that created the Mona Lisa also invented napalm.

Should companies with consumer brands never make weapons? Sure, and while we're at it, let's ban knives because they can be used for both chopping vegetables and stabbing people. The issue isn't the technology itself. It's how it's regulated, controlled, and used. And as for cyber terrorism? That's a problem with bad actors, not with the tools themselves.

So, by all means, keep pointing out the hypocrisy of a company that makes YouTube Kids and killer AI. Just don't pretend like you're not benefiting from the same duality every time you use a smartphone or the internet, which, don't forget, are technologies born, ironically, from military research.

12 days ago

jcgrillo

It sounds like they're distracted, tbh. It's hard to imagine how a company that specializes in getting children addicted to unboxing videos can possibly be good at killing people.. oh, wait, maybe not after all..

12 days ago

ckrapu

There is a wide range of moral and practical opinions between the statement “all weapons are evil” and “global corporations ought not to develop autonomous weapons”.

12 days ago

cortesoft

Who should develop autonomous weapons?

12 days ago

IIAOPSW

Who should develop biological weapons? Chemical weapons? Nuclear weapons?

Ideally no one, and if the cost / expertise is so niche that only a handful of sophisticated actors could possibly actually do it, then in fact (by way of enforceable treaty) no one.

12 days ago

cakealert

> Who should develop biological weapons? Chemical weapons? Nuclear weapons?

Anyone who wants to establish deterrence against superiors or peers, and open up options for handling weaker opponents.

> enforceable treaty

Such a thing does not exist. International affairs are and will always be in a state of anarchy. If at some point they aren't, then there is no "international" anymore.

12 days ago

aydyn

So in other words, cede military superiority to your enemies? Come on you already know the rational solution to prisoner's dilemma, MAD, etc.

> enforceable treaty

How would you enforce it after you get nuked?

12 days ago

lmm

> in other words, cede military superiority to your enemies?

We're talking about making war slightly more expensive for yourself to preserve the things that matter, which is a trade-off that we make all the time. Even in war you don't have to race for the bottom for every marginal fraction-of-a-percent edge. We've managed to e.g. ban antipersonnel landmines, this is an extremely similar case.

> How would you enforce it after you get nuked?

And yet we've somehow managed to avoid getting into nuclear wars.

12 days ago

pixl97

Because after proliferation the cost would be too great and nukes other than wiping cities aren't that useful

AI on the other hand seems to be very multi purpose

12 days ago

Sabinus

Refusal to make or use AI-enabled weapons is not "making war slightly more expensive for yourself"; it's like giving up on the Manhattan Project because the product is dangerous.

Feels good but will lead to disaster in the long run.

12 days ago

aydyn

> And yet we've somehow managed to avoid getting into nuclear wars.

Yes, through a massive programme of nuclear armament. In the case of AI, we should therefore...?

12 days ago

kelsey98765431

If we hadn't developed nuclear weapons we would still be burning coal and probably even closer to death from global warming. The answer here is that government contractors should be developing the various types of weapons, as they are; people just do not think of Google as a government contractor for some reason.

11 days ago

[deleted]
12 days ago

vasco

Palantir exists, this would just be competition. It's not like Google is the only company capable of creating autonomous weapons so if they abstain the world is saved. They just want a piece of the pie. The problem is the pie comes with dead babies, but if you forget that part it's alright.

12 days ago

astrange

Palantir doesn't make autonomous weapons; they sell SQL queries and have an evil-sounding name because it helps recruit juniors who think the name is cool.

Might be thinking of Anduril.

12 days ago

trhway

Palantir provides a combat management system in Ukraine. That system collects and analyzes intelligence, including drone video streams, and identifies targets. Right now people are still in the loop, though I think that will naturally go away in the near future.

12 days ago

cookiengineer

Palantir literally developed Lavender, which has been used for autonomous targeting in the bombardments of the Gaza Strip.

Look it up.

12 days ago

tmnvdb

With or without autonomous weapons, war is always a sordid business with 'dead babies', this is not in itself a fact that tells us what weapons systems to develop.

12 days ago

darth_avocado

Yet there are boundaries on which weapons we can and cannot develop: Nuclear, Chemical, Biological etc.

12 days ago

tmnvdb

Indeed. Usually weapons are banned if the damage is high and indiscriminate while the military usefulness is low.

There is at this moment little evidence that autonomous weapons will cause more collateral damage than artillery shells and regular air strikes. Their military usefulness, on the other hand, seems to be very high and increasing.

12 days ago

bluefirebrand

It seems like the sort of thing we shouldn't be wanting evidence of in order to avoid, though

Like skydiving without a parachute, I think we should accept it is a bad idea without needing a double blind study

12 days ago

vasco

Not all is bad; it's preferable to have autonomous systems killing each other than killing humans. If it gets very prevalent you could even get to a point where war is just simulated war games. Why have an AI-piloted F-35 fight an AI-piloted J-36? Just do it on the computer. It's at least 1 or 2 fewer pilots that die in that case.

12 days ago

int_19h

It's a bit too late for that, since Ukraine and Russia are both already using AI-controlled drones in combat.

12 days ago

tmnvdb

The risks need to be weighed against the downside of not deploying a capable system against your enemies.

12 days ago

[deleted]
12 days ago

_bin_

those are mostly drawn on how difficult it is to manage their effects. chemical weapons are hard to target, nukes are too (unless one dials the yield down enough that there's little point) and make land unusable for years, and biological weapons can't really be contained to military targets.

we have, of course, developed all three. they have gone a long way towards keeping us safe over the past century.

12 days ago

CamperBob2

Tell Putin. He will entertain no such inhibitions.

12 days ago

ignoramous

> no such inhibitions

Propping up evil figure/regime/ideology (Bolsheviks/Communists) to justify remorseless evilness (Concentration camps/Nuclear bomb) isn't new nor unique, but particularly predictable.

12 days ago

gosub100

Nukes saved countless US lives from being lost to a regime that brought us into it. And it's incalculable how many wars they have prevented.

12 days ago

CamperBob2

Sadly, attempts at equating evil figures/regimes/ideologies with those who fight back against them are equally predictable.

12 days ago

vkou

We have Putin at home, he spent the past weekend making populist noises about annexing his neighbours over bullshit pretenses.

I'm sure this sounds like a big nothingburger from the perspective of, you know, people he isn't threatening.

How can you excuse that behaviour? How can you think someone like that can be trusted with any weapons? How naive and morally bankrupt do you have to be to build a gun for that kind of person, and think that it won't be used irresponsibly?

12 days ago

tmnvdb

I understand the sentiment but the logical conclusion of that argument is that the US should disarm and cease existing.

12 days ago

vkou

The better logical conclusion of that argument is that the US needs to remove him, and replace him with someone who isn't threatening innocent people.

That it won't is a mixture of cowardice, cynical opportunism, and complicity with unprovoked aggression.

In which case, I posit that yes, if you're fine with threatening or inflicting violence on innocent people, you don't have a moral right to 'self-defense'. It makes you a predator, and arming a predator is a mistake.

You lose any moral ground you have when you are an unprovoked aggressor.

12 days ago

pixl97

Ya go poke people with nukes and see how that works out

12 days ago

vkou

You are making an excellent argument for nuclear proliferation.

12 days ago

tmnvdb

I'm not a fan of Trump, but I also feel he has not been so bad that I think surrendering the world order to Russia and China is a rational action that minimizes suffering. That seems to be an argument that is more about signalling that you really dislike Trump than about a rational consideration of all options available to us.

12 days ago

kombine

> I'm not a fan of Trump but I also feel he has not been so bad

He literally threatened a peaceful nation (also an ally) with invasion and annexation. How much worse can it get?

12 days ago

tmnvdb

If he actually did it that would be far worse.

10 days ago

vkou

It's not a shallow, dismissable, just-your-opinion-maaan 'dislike' to observe that he is being an aggressor. Just like it's not a 'dislike' to observe that Putin is being one.

There are more options than arming an aggressor and capitulating to foreign powers. It's a false dichotomy to suggest it.

12 days ago

_bin_

[flagged]

12 days ago

CamperBob2

TBF, vkou's post disagrees with mine, but I don't disagree with it. If pressed to offer a forecast, I think the moral dilemmas we're about to face as Americans will be both disturbing and intimidating, with a 50% chance of horrifying.

12 days ago

sangnoir

It's not a luxury belief for a multinational tech company that intends to remain in business in countries that are not allied to the US. Being seen as independent of the military has a dollar value, but that may be smaller than value of defense contracts Google hopes to get.

12 days ago

captainbland

Whatever your feelings on that are, it's hardly unreasonable to have misgivings about your search and YouTube watches going to fund sloppy AI weapons programmes that probably won't even kill the right people.

12 days ago

ziddoap

>all weapons are evil

That wasn't the quote that was removed. Not even close, really.

12 days ago

astrange

It's definitely an opinion Google employees had in the last decade.

Actually I think a lot of people have it - just yesterday I saw someone on reddit claim Google was evil because it was secretly founded by the US military. And they were American. That's their military!

12 days ago

gosub100

they have no problems heavily censoring law-abiding gun youtubers. Even changing the rules and giving them strikes retroactively. I guess it's "weapons for me, but not for thee".

12 days ago

jjj123

It’s my military too and I believe the US military does many, many evil things that I want no part of.

12 days ago

astrange

I think the thing to remember is, however bad it is, it could always get worse.

A world without the US navy is one without sea shipping because pirates will come back.

12 days ago

dark_glass

"We sleep safely at night because rough men stand ready to visit violence on those who would harm us"

12 days ago

switchbak

And these same organizations fuel conflicts that actively make the USA less safe. These organizations can both do great things (hostage rescues) and terrible things (initiating coups), and it’s upon the citizenry to ensure that these forces are put to use only where justified. That is to say almost never.

12 days ago

astrange

We've stopped South American coups more recently than we've initiated them. (in the last few years, in Brazil and Bolivia)

12 days ago

switchbak

Truly, the last few minutes of American history is not material to the argument I’m making here.

And I don’t doubt there’s still a lot of subterfuge happening as we speak, most of which we’ll never hear about until something goes very wrong.

11 days ago

darth_avocado

Weapons inherently aren’t evil, which is why everyone has kitchen knives. People use weapons to do evil.

The problem with building AI weapons is that eventually it will be in the hands of people who are morally bankrupt and therefore will use them to do evil.

12 days ago

int_19h

That's the problem with all weapons.

The concern with AI weapons specifically is that if something goes wrong, they might not even be in the hands of the people at all, but pursue their own objective.

12 days ago

gerdesj

Who is to say a wielder of a kitchen knife is not "morally bankrupt" - whatever that means.

In my garage, I have some pretty nasty "weapons" - notably a couple of chainsaws, some drills, chisels, lump/sledge/etc hammers and a fencing maul! The rest are merely: mildly malevolent.

You don't need an AI (whatever that means) to get medieval on someone. On the bright side the current state of AI (whatever that means) is largely bollocks.

Sadly, LLMs have and will be wired up to drones and the results will be unpredictable.

12 days ago

psunavy03

Then we should be encouraging their development by the governments of liberal democratic nations as opposed to authoritarian regimes.

12 days ago

burningChrome

Serious question.

How would we go about doing that?

Every nefarious way of keeping the truth at bay is always on the table for authoritarian regimes. From cracking iPhones to track journalists covering these regimes, to snooping on email, to using AI to do this? It's all just the same thing, with updated and improved tools.

Just like Kevin Mitnick selling zero day exploits to the highest bidder, I have a hard time seeing how these get developed and somehow stay out of reach of the regimes you speak of.

12 days ago

leptons

A kitchen knife is a tool. It can be used as a weapon.

A car is a tool. It can be used as a weapon.

Even water and air can be used as a weapon if you try hard enough. There is probably nothing on this planet that couldn't be used as a weapon.

That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.

12 days ago

gizmondo

> That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.

So you're in favor of losing a war and becoming a subject of the enemy? While it's certainly tempting to think that unilateralism can work, I can hardly see how.

12 days ago

leptons

>So you're in favor of losing a war and becoming a subject of the enemy?

I never said that. Please don't reply to comments you made up in your head.

Using AI doesn't automagically equate to winning a war. Using AI could mean the AI kills all your own soldiers by mistake. AI is stupid, it just is. It "hallucinates" and often leads to wrong outcomes. And it has never won a war, and there's no guarantee that it would help to win any war.

12 days ago

osmsucks

The difference there is that a knife has some obvious, benign use cases. Smart weapons targeting has only one use case, and it's to do harm to others.

12 days ago

xdennis

AI weapons do have benign use cases: harming enemies.

When China attacks with AI weapons do you expect the free world to fight back armed with moral superiority? No. We need even more lethal AI weapons.

Mutual assured destruction has worked so far for nukes.

12 days ago

osmsucks

Use of weapons is only benign to you if you're not on the receiving end. Imagine your family being blown up by a rocket because an AI system hallucinated that they're part of a dangerous terror cell.

My point though is that this is the only use case for such systems. The common comparisons to things like knives are invalid for this reason.

12 days ago

pyinstallwoes

“…so which is it then? Is it really robots that are wired to kill people, or the humans wiring them?”

12 days ago

Dalewyn

Much as it is the case with guns, why is the "problem" the tools or provider of the tools and not the user of the tools?

12 days ago

pixl97

Depends if next years gun gets up and shoots you in the head on its own accord.

12 days ago

bbqfog

The US is not under any kind of credible threat and in fact is the aggressor across the globe and perpetrator of crimes against humanity at scale. This is not a recent phenomenon and has been going on as long as this country has existed.

12 days ago

tmnvdb

The US mainland is not currently under threat but the US world system is.

12 days ago

bbqfog

That's absolutely no reason to attack anyone.

11 days ago

PessimalDecimal

You're either misdirecting the discussion, or have missed the point. The statement isn't about weapons, but the means of _control_ of weapons.

It's legitimate to worry about scaled, automated control of weapons, since it could allow a very small number of people to harm a much larger number of people. That removes one of the best checks we have against the misuse of weaponry. If you have to muster a whole army to go kill a bunch of people, they can collectively revolt. (It's not always _easy_ but it's possible.)

Automating weapons is a lot like nuclear weapons in some ways. Once the hard parts are done (refining raw ore), the ability of a small number of people to harm a vast number of others is serious. People are right to worry about it.

12 days ago

ignoramous

> all weapons are evil is an insane luxury belief

It isn't this that's insane; a total belief in the purity of weapons is.

12 days ago

psunavy03

You don't have to have a "total belief in the purity of weapons" to recognize that military tech is a regrettable but necessary thing for a nation to pursue.

12 days ago

ignoramous

> You don't have to have "total belief in the purity of weapons"...

Of course. My point was, it is insane for those who do.

12 days ago

aprilthird2021

It's a luxury belief to think you won't one day be scanned by an AI to determine if you're killable or not

12 days ago

atlasunshrugged

I'm guessing this will be a somewhat controversial view here, but I think this is net good. The world is more turbulent than at any other time in my life, there is war in Europe, and the U.S. needs every advantage it can get to improve its defense. Companies like Google, OpenAI, Microsoft, can and should be working with the government on defense projects -- I would much rather the Department of Defense have access to the best tools from the private sector than depend on some legacy prime contractor that doesn't have any real tech capabilities.

12 days ago

croes

> the U.S. needs every advantage it can get to improve its defense

That’s one of the reasons for these turbulent times. Let’s face the truth: most defense technology can easily be used for offense, and given the state of online security, every advance ends up in the wrong hands.

Maybe it’s time to pause, to make it more difficult for those wrong hands.

12 days ago

stickfigure

Just how do you propose to remove those tools from Putin's, Xi's, Khomeini's, or Kim Jong-Un's hands?

12 days ago

croes

For removal it’s too late, but maybe slowing down is still possible.

There is no advancement that won’t end up in the wrong hands, and most likely it will be a leak from a US company.

12 days ago

Sabinus

So the US needs to develop AI faster than the dictators to keep ahead of them, but not so fast that it leaks advancements that accelerate the dictators' AI?

12 days ago

croes

There is no keeping ahead. If one side progresses, the other side gets access to it too. Too many people involved and too little security to keep it secret.

12 days ago

atlasunshrugged

I guess you could put that on the U.S.'s plate, and no doubt America has caused many issues around the world, but I think in general it's a good actor. Biggest conflicts today: Ukraine -- I would squarely put this on Russia, nothing to do with the U.S.; Sudan -- maybe a lack of knowledge, but I don't think it's fair to place much responsibility on the U.S. (esp. relative to other actors); ditto DRC/Rwanda.

Yes, many defensive uses of technologies can be used for offense. When I say defense, I also include offense there as I don't believe you can just have a defensive posture alone to maintain one's defense, you need deterrence too. Personally I'm quite happy to see many in Silicon Valley embrace defense-tech and build missiles (ex. recent YC co), munitions, and dual-use tech. The world is a scary and dangerous place, and awful people will take advantage of the weakness of others if they can. Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant

12 days ago

aucisson_masque

> but I think in generally its a good actor.

There are many 'interesting' events that happened because of the invasion of Iraq, looking for weapons of mass destruction that never existed.

This led to the destabilization of the entire Middle East, several wars, and ISIS.

One could say that the unconditional support for Israeli policy in the Middle East since 1950 also brought its share of conflicts.

The whole of South America is fcked because of illegal US intervention from WW2 to the end of the Cold War.

And the list goes on and on.

I mean, it would be much faster to state what good impact US foreign policy has had on the world in the last 100 years.

12 days ago

skulk

> I mean, it would be much faster to state what good impact US foreign policy has had on the world in the last 100 years.

It could have wondrously good impacts, but that only matters in a moral framework where good actions morally cancel out bad ones.

12 days ago

TiredOfLife

US went out of their way to disarm Ukraine. Not only nukes, but also conventional weapons.

12 days ago

croes

I'm not talking about good and bad but about naive.

"Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

Propaganda and disinformation were problems before the AI hype, but now it has gotten worse.

In the race for AGI they ignored the risks and didn't think of useful countermeasures.

It's easier to spread lies with AI than to spread the truth.

We are entering a dark age where most people can't distinguish fake from real because the fakes have become so convincing.

Audio, photo and video have lost their evidential value.

12 days ago

CapricornNoble

> Ukraine -- I would squarely put this on Russia, nothing to do with the U.S.

Every kinetic reaction by Russia in Georgia and Ukraine is downstream of major destabilizing non-kinetic actions by the US.

You don't think the US fomenting revolutions in Russia's near-abroad was in any way a contributing factor to Russian understanding of the strategic situation on its western border? [1] You don't think the US unilaterally withdrawing from the ABM treaty[2], and then following that up with plans to put ABMs in Eastern Europe[3], were factors in the security stability of the region? You don't think that the US pushing to enlarge NATO without adjusting the CFE treaty to reflect the inclusion of new US allies had an impact? [4][5] It's long been known that the Russian military lacked the capacity for sustained offensive/expeditionary operations outside of its borders.[6][7] Until ~2014 it didn't even possess the force structure for peer warfare, as it had re-oriented its organization for counter-insurgency in the Caucasus. So what was driving US actions in Eastern Europe? This was a question US contrarians and politicians such as Pat Buchanan were asking as early as 1997. We've had almost 3 decades of American thinkers cautioning that pissing around in Russia's western underbelly would eventually trigger a catastrophic reaction[8], and here we are, with the Ukrainians paying the butcher's bill.

In the absence of US actions, the kleptocrats in Moscow would have been quite content continuing to print money selling natural resources to European industry and then wasting their largesse buying up European villas and sports teams. But the siloviki have deep-seated paranoia which isn't entirely baseless (Russia has eaten 3 devastating wars originating from its open western flanks in the past ~120 years). As a consequence, the US has pissed away one of the greatest accomplishments of the Cold War: the Sino-Soviet Split. Our hamfisted attempts to kick Russia while it was down have now forced the two principal powers on the Eurasian landmass back into bed with each other. This is NOT how we win The Great Game.

> Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant.

It would help to lead with this context. My position is that our actions ENSURE that a hostile Eurasian power bloc will become dominant. We should have used far less stick to integrate Russia into the Western security structure, as well as simply engaged them without looking down our noses at them as a defeated has-been power (play to their ego as a Great Power). A US-friendly Russia is needed to over-extend China militarily. We need China to be forced into committing forces to the long Sino-Russian border, much as Ukraine must garrison its border with Belarus. We need to starve the PRC's industry of cheap natural resources. Now the China-Russia-Iran soft-alliance has the advantage of interior lines across the whole continent, and a super-charged Chinese industrial base fed by Siberia. Due to the tyranny of distance, this will be a near-impossible nut to crack for the US in a conflict.

[1] https://www.theguardian.com/world/2004/nov/26/ukraine.usa

[2] https://www.armscontrol.org/events/2001-12/abm-treaty-withdr...

[3] https://www.realinstitutoelcano.org/en/analyses/americas-abm...

[4] https://www.sipri.org/yearbook/2003/17

[5] https://www.armscontrol.org/act/1997-08/features/nato-and-ru...

[6] https://warontherocks.com/2021/11/feeding-the-bear-a-closer-...

[7] https://www.rand.org/content/dam/rand/pubs/research_reports/...

[8] https://wikileaks.org/plusd/cables/08MOSCOW265_a.html

12 days ago

suraci

tbh I'm really glad that other Americans aren't as wise and calm as you are.

Otherwise, we might be surrounded by both the US and Russia.

Or maybe the current situation is the result of decisions made after careful consideration at the time, by people who deeply understood everything you said now.

Maybe they just considered... the EU is also a threat to them; they don't want a united Europe, so a conflict between two enemies... is just fine? An angry Russia will make the EU more united (with the US).

12 days ago

tim333

>Every kinetic reaction by Russia in Georgia and Ukraine is downstream of major destabilizing non-kinetic actions by the US

Russia has been invading and massacring its neighbours for centuries. They just use whatever BS excuse sounds kind of plausible or amusing at the time. You know -- they have to invade Ukraine because the popularly elected Jewish comedian is a Nazi dictator, etc. I think they just like trolling their victims as they rape, murder, steal and torture.

If you can fault the Americans, it was propping them up after WW2, after they started it in collaboration with Hitler, so they could continue the evil. Patton had the right idea https://www.quora.com/When-Patton-said-we-defeated-the-wrong...

11 days ago

mopsi

> In the absence of US actions, the kleptocrats in Moscow would have been quite content continuing to print money selling natural resources to European industry and then wasting their largess buying up European villas and sports teams. But the siloviki have deep-seated paranoia which isn't entirely baseless (Russia has eaten 3 devastating wars originating from its open western flanks in the past ~120 years).

It is important to stress that the money-oriented kleptocrats and siloviki (KGB old-timers) are two opposite groups. Kleptocrats dominated in the 1990s, but lost to KGB old-timers like Putin, who consolidated power by the late 1990s, because they were more ruthless. In the following decade, they crushed all opposition and turned the country from a dysfunctional democracy into a full dictatorship, and then set their sights on their long-term goal of restoring "the lost empire", which includes roughly 100 million Europeans who regained their freedom when the USSR collapsed. Revanchism has always been at the very core of the siloviki.

The countries in Eastern Europe were the first to recognize which way the ball was rolling by the mid-to-late 1990s, and that's why they set EU and NATO integration as their main foreign policy goals, hoping that tight integration into international organizations would increase their security. Your notion that the US "pushed" NATO enlargement is just plain wrong. Almost the entirety of Eastern Europe was begging to get into NATO, to a very lukewarm reception.

Their completely rational fears were dismissed by existing members with the erroneous belief that Russians were motivated by money, and would not risk harming piggy banks like Gazprom by invading Eastern Europe again. Ironically, that made the eventual entry into NATO easier, as existing members didn't think at the time that Russia posed any real danger. The largest entry took place in 2004, as NATO was being transformed into an anti-terrorism force in the aftermath of 9/11.

If there's anything to blame the Americans for, then -- according to Andrei Kozyrev, the foreign minister of Russia from 1990 to 1996 -- the Americans could've put more pressure on Russia back in the 1990s to prevent it from declining into a dictatorship. But it was more convenient to remain ignorant of the destruction of Russian democracy and the long downward spiral into a totalitarian dictatorship, and remain seduced by naive illusions like the ones you present to us.

For example, the entire idea of Russia as an ally against China is ridiculous. Russians don't care about China one bit, and China is not a meaningful part of the public discourse. Russia is a colonial empire run by the city-state of Moscow, with St Petersburg having some historical importance. Take a look at a map. Both St Petersburg and Moscow are a few hundred kilometers from the European border. This is where the mental center of the Russian government lies, and this is the area where their ambitions are. China, in contrast, is many thousands of kilometers away, and culturally even more distant. China is a strange, faraway place. The Russians who matter (elites in Moscow and St Petersburg) have very little to do with it. Russia does not have a huge outsourced manufacturing base in China, nor do they compete in science or technology. Russians are completely outclassed, simple consumers of cheap Chinese goods like most of the world.

Instead, Russians fantasize about the "multipolar world" and other alternative realities where they could be a carbon copy of the US in Europe, but they are in no position to make it a reality. The post-WWII Europe with a hundred million Europeans living under Russian dominance was a historic glitch. Russians cling to this as a mythical "golden era" and are willing to throw everything away in a futile attempt to turn back time. Relations, money, people -- everything. Nothing else matters.

These fantasies are driven by the fact that Russia is a still a feudal society that has not gone through enlightenment. As such, it is incapable of engaging with other European nations on equal terms, in peaceful ways, for mutual benefit. And this has nothing to do with the Americans, NATO, or any other commonly presented excuse. The reasons are purely internal: failure to develop past feudal society, into a modern state, run by professional bureaucracy, guided by laws, adopted by politicians, voted into office by the people, serving the interests of the electorate.

11 days ago

colonCapitalDee

Agreed. Any other answer is just burying your head in the sand. Our adversaries are forging ahead: China plans to integrate AI into every level of its military, and Russia is getting a crash course on drone warfare in Ukraine. You can build a FPV drone with Chinese parts and the warhead scavenged from an RPG for about $500 [1]. Every month, tens of thousands of these drones fly on Ukrainian battlefields and kill thousands of people. This is happening whether we like it or not; the train is leaving the station and we can either get on board or be left behind.

[1] https://www.kyivpost.com/post/44112

12 days ago

cowpig

> "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development..."

This is extremely disconcerting.

Google as a tool of surveillance is the kind of thing that could so easily be abused or misused to such catastrophic ends that I absolutely think the hard line should be there. And I only feel significantly more this way given the current geopolitical realities.

12 days ago

mihaaly

... did they just present themselves as the saviors of all democracies?! Really?

By weaponising AI?

Who else right? If not them, there will be no one saving democracies with weaponized mass-surveillance AI. It is their quest and privilege, right? Medicine, just society, and all such crap have to wait!

I bought that!

(not)

12 days ago

zoogeny

It's a far cry from the days where employees were threatening to mass quit Google when it changed its policies to avoid bans in China.

A lot of what is going on in the world right now makes me think we are in a war that hasn't yet been officially acknowledged.

12 days ago

yodsanklai

I have a lot of respect for people who would resign, or wouldn't work in such companies.

12 days ago

zoogeny

Me too, it is just a shame that in our current world the path from earning respect to eating bread isn't as straightforward as the path from earning money.

12 days ago

tommiegannert

> makes me think we are in a war

Dr Pippa Malmgren (political advisor) also pushes the idea that WW3 is on-going, and it will look nothing like WW2. She appears on podcasts once in a while and has a blog. Not sure if I care for calling it a war if it doesn't look like a war, but there sure are human conflicts all over the little ball of life now.

12 days ago

tehjoker

The new administration seems to be dropping "soft power" in exchange for an emphasis on hard power... but hard power is more expensive and backfires more spectacularly than soft power. I think they are digging a hole for themselves and can't stop because a few rich people are making a lot of money on kickbacks.

12 days ago

lenerdenator

History shows that they aren't really digging a hole for themselves.

This whole thing where the average person feels that they can use rules against a more powerful person? That's really an invention of maybe the last 80 years, if not more recently than that.

With the exception of that human lifetime-sized era, the vast majority of history is a bunch of psychopaths running things and getting to kill/screw whoever they wanted and steal whatever they wanted. Successful revolts are few and far between. The only real difference is the stakes.

12 days ago

tehjoker

I think you misread what I was saying. Hard power is really costly to deploy. It can work, but it is incredibly expensive and the U.S. couldn't even suppress resistance in Iraq, Afghanistan, or Gaza on a durable basis. Blunt deployment of these techniques will cause the U.S. to lose friends, territory, and civil unrest as the treasury drains and life domestically just gets worse and worse.

12 days ago

1970-01-01

I give it 2 years until we see

"Google Petard, formerly known as Boombi, will be shutting down at the end of next month. Any existing explosion data you have in your Google Account will be deleted, starting on May 1, 2027."

12 days ago

throwawee

killedbygoogle.com will need an extra entry, plus a whole new subcategory.

12 days ago

kombine

Well, Google have already been collaborating with a certain state that uses AI for weapons and surveillance against a repressed population with the goal of maintaining ethnic supremacy and domination.

12 days ago

kamikazeturtles

If you go back far enough, you'll realize both groups are of the same ethnicity. It's just that they converted to Christianity and Islam and, as a result, a few hundred years later, we now call them Palestinians.

12 days ago

needleballista

that implies that the majority of israelis are ethnically from israel... maybe we should look into why DNA tests are banned there

11 days ago

fransje26

And they wash their hands after a deadly strike, saying the AI made the decision.

It eerily reminds me of a research piece I read recently detailing how the Nazis turned to automation for their mass exterminations because most of them couldn't bear the mental toll that came with direct action.

Unfortunately, I couldn't quickly find that series of articles again.

11 days ago

thih9

Something to remember next time Google makes a pledge. I.e. when they pledge not to do something, it just means they pledge to make a prior indirect notification before doing that thing.

12 days ago

jarboot

From Gandhi's commentary on chapter 1 of the Bhagavad Gita:

> ... evil cannot by itself flourish in this world. It can do so only if it is allied with some good. This was the principle underlying noncooperation—that the evil system which the [British colonial] Government represents, and which has endured only because of the support it receives from good people, cannot survive if that support is withdrawn.

If you are a good person working for the big G...

12 days ago

BobbyTables2

I fully expect the AI/military work will be done by a subsidiary of Alphabet instead of Google proper…

… it will be named “Cyberdyne”

12 days ago


fennecbutt

Makes sense though. We just have to try to do it as safely as we can.

One of the weaknesses of the West is that we've started to _care_ about things like this, but other nations don't, and that means it can potentially become a weakness that can be exploited.

The same goes for, say, the fact that if someone was racist to me in many countries, I'm pretty sure nothing would happen at all, versus nationals of those countries crying racism at the drop of a hat when they're in the West.

Which is a real shame, because we should all be nicer to each other. But contrary to popular belief, unless that behaviour is universal across our species, it doesn't make us stronger; it makes us weaker.

5 days ago

declan_roberts

Do the Chinese or Russians have any such qualms? No, of course not. They're diving head first into it.

Only in the United States do we have the privilege to pretend we can ignore it.

12 days ago

sangnoir

Plenty of US companies not named Google haven't had qualms about weapons development for decades. It's not about US vs China/Russia, it's about Google's culture.

Additionally, the US has been vociferous about limiting access to foreign tech companies with "military links" in China, so perhaps Google should be placed in that category by all non-Five-Eyes countries.

12 days ago

iwontberude

DoD invests in company making it commercially viable

Company says won't work for DoD

DoD initiates arm twisting and FOMO

Company now works for DoD

The origins of investment will often lead to relative outcomes of that investment. It's almost like DoD invested in Google for an informational weapon, which really should surprise no one.

12 days ago

olalonde

That feels like PR / virtue signaling. AI has the potential to significantly reduce the human cost of war in two ways: by removing soldiers from direct combat and by enabling precision strikes which minimize collateral damage. Over time, robot-soldiers will surpass human effectiveness, making it increasingly irrational to send people into harm's way. In that world, conflicts would shift toward being decided by technological superiority - who has the better or more advanced systems - rather than by which side has more human lives to sacrifice. We could even see one day wars with no human casualties.

12 days ago

oneplane

AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.

As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.

12 days ago

olalonde

> AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.

Not really, though. Like any tool, its misuse or failure is the responsibility of the wielder.

> As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.

Agree about that part but that's just the nature of war, there are always going to be armies that are scarier than others.

12 days ago

jhanschoo

> > AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.

I don't think the entities that are using it in this way care.

https://www.statnews.com/2023/03/13/medicare-advantage-plans...

https://www.hfma.org/revenue-cycle/denials-management/health...

https://www.vox.com/future-perfect/24151437/ai-israel-gaza-w...

12 days ago

srebastian

Ah, yes, the "Peace on Earth" future!

https://en.wikipedia.org/wiki/Peace_on_Earth_(novel)

12 days ago

megous

Eh, we saw how AI was used during "war" for the first time. It was used to amass as many even remotely "justifiable" targets as possible, with a corresponding increase in killed civilians, because humans could not keep up creating justifiable targets by other means. And at the same time it was used to justify the killings of people in more ways than one.

https://www.972mag.com/lavender-ai-israeli-army-gaza/

And Google is profiting off this, helping enforce a brutal illegal occupation.

https://www.datacenterdynamics.com/en/news/google-provided-a...

12 days ago

ddtaylor

Google has already made multiple commitments like this and broken them. One example would be their involvement in operating a censored version of Google.cn for the Chinese government from 2006 to 2010.

12 days ago

stevage

Someone should make a website tracking tech companies' moral promises that then get broken.

12 days ago

mihaaly

I used Google to find this: https://issueone.org/projects/big-techs-broken-promises/

Looks very incomplete....

12 days ago

ThinkBeat

There are billions if not trillions going into defense tech. The US, its NATO allies, and its wider allies who have contributed equipment to the Ukraine war need to replenish and replace stuff.

At the same time, the Ukraine war has changed a lot of battlefield strategies, which will require the development of new advanced weapons -- most obviously in the drone / counter-drone space, but a lot of other technology as well.

With all that money of course companies will chase it. OpenAI is already joined up with Anduril.

12 days ago

thoire3j4234

Kind of meaningless in any case.

OpenAI has already signed a collaboration with Anduril.

Killer robots will be a reality very soon; everyone is very obviously getting prepped for this. China has a massive advantage.

12 days ago

micromacrofoot

Which defense contractor did they just sign with to sell AI features?

Really if a company wants people to trust claims like this, they should make them legally binding. Otherwise it's all PR.

12 days ago

wayathr0w

Google is already a "defense" (military) contractor. They sell stuff directly to governments, well aware how it'll be used.

12 days ago

ceejayoz

> Which defense contractor did they just sign with to sell AI features?

I'm gonna presume "the new leadership of the FBI".

12 days ago

taneq

Did they replace "don't do weapons" with "do the right weapons"?

12 days ago

aradox66

Probably was causing their weapon-system LLMs to fake alignment but sabotage outcomes, they need their LLM products to understand that the brand is on-board

12 days ago

glimshe

Apparently, the pledge was supposed to last only until their first big military project opportunity. Until then, they earned the goodwill at no expense.

12 days ago

ErigmolCt

That makes me sad. Yet Google's shift here isn't surprising... It's part of the broader trend of big tech aligning more closely with government. But the real question is does this shift make AI development more accountable or does it just normalize AI's role in warfare and surveillance under the guise of "democratic values"?

12 days ago

hedora

That boat already sailed. AI and digital surveillance have been key components of US funded genocide campaigns since at least 2023:

https://www.npr.org/2023/12/14/1218643254/israel-is-using-an...

12 days ago

xbar

I don't think those kids understand "pledge."

12 days ago

mturmon

  While the music played you worked by candlelight
  Those San Francisco nights
  You were the best in town
  Just by chance you crossed the diamond with the pearl
  You turned it on the world
  That's when you turned the world around

  Did you feel like Jesus?
  Did you realize
  That you were a champion in their eyes?

  On the hill the stuff was laced with kerosene
  But yours was kitchen clean
  Everyone stopped to stare at your technicolor motor home
  Every A-Frame had your number on the wall
  You must have had it all
  You'd go to L.A. on a dare
  And you'd go it alone

  Could you live forever?
  Could you see the day?
  Could you feel your whole world fall apart and fade away?
12 days ago

legohead

I pledge to not drink coffee.

drinks coffee

Nevermind.

12 days ago

wongarsu

More like:

I pledge to not drink coffee

Somebody hands me coffee

I retract the pledge and start drinking

---

I have to wonder what the value of a pledge is if you can just stop pledging something at the earliest convenience, do the thing, and people cheer you on for it

12 days ago

moi2388

“We believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security,” the two executives wrote.

We smell money, is what they meant.

12 days ago

gdilla

What is a $1M TC when you'll get jacked on your way to the tech bus for turning America into mad max? You're not rich enough to have your own billionaire bunker.

12 days ago

barbazoo

There are plenty of people on here working for Boeing, Raytheon, etc., actively contributing to actual killings of actual people. Those folks don’t get confronted, so why would it be different here?

12 days ago

quesera

The obvious answer is "visibility", but I'm also skeptical of the whole idea.

12 days ago

gdilla

Well, the broligarchy has their sights on Americans, for one.

12 days ago

_bin_

- you assume this will turn America into a dystopia. more likely it contributes to restoring and maintaining uncontested American overmatch, especially in the long term, where effectively no other nation can challenge us.

- $1M TC is enough to do this depending on how one allocates spending. land in many parts of the country is not that expensive.

12 days ago

fullshark

It's gotta be better than being in mad max without a $1M TC.

12 days ago

pavlov

The whole point of Mad Max is that your TC and RSUs and whatever aren’t worth shit anymore, and the people you thought useless and weird and poor suddenly have the chance to kick you in the face.

12 days ago

stainablesteel

It's not like they're above lying, why do they even care to update this?

12 days ago

aradox66

Would it be too far out there to imagine that the LLMs they were training for weapons systems knew it violated their rules and were resisting compliance?

The alignment-faking research seems to indicate that LLMs exercise this kind of reasoning.

12 days ago

janalsncm

Depends on the weapons system but it would probably not be an LLM, it would be a neural network trained to locate and identify people in a video for example.

And even if it was, they wouldn’t tell the system it was part of old non-evil Google.

12 days ago

sangnoir

Everything is securities fraud - they'd likely be sued by shareholders. Some individuals and institutions are picky about the symbols in their portfolio for religious or moral reasons, and would not appreciate being deceived into investing in a company they consider engaging in "harmful" or morally objectionable activities.

12 days ago

bufferoverflow

I doubt that will change anything. It's not like Google's AI has some secret sauce. It's all published. So any military corp can have cutting-edge AI in their weapons for a relatively low cost.

12 days ago

mihaaly

With the help of Google's resources and knowledge from now on. For some dollars, of course. AI will not develop itself just yet, right? So those military corps need some humans for that, preferably those already experienced, or better yet, those who made it. I have a hunch it will help them quite a bit.

By the way, humans: "principles page includes provisions that say the company will use human oversight". ... which human? Trump? Putin is human too, but I guess he is busy elsewhere. Definitely not someone like Mother Teresa; she is dead anyway, and I cannot think of someone from recent years playing in the same league. Somehow that end of the spectrum is not represented that well recently.

12 days ago

iteratethis

I'm afraid ethics have nothing to do with it.

If a major force does not add AI capabilities to their military, the others will. It's a new cold war arms race. So you have to do it. There is no ethical discussion to be had where the outcome is that you refuse to do this on moral grounds.

So when you have to do it, there's only a few candidates that can. Google is the logical choice as it has the least business in China, unlike Microsoft.

11 days ago

silexia

Corporations turn evil when their founders lose power or leave. Google used to be a genuinely wonderful force for good. But finance people can borrow money at extremely cheap interest rates from government cronies due to the US fractional reserve system. Then the MBAs offer so much money that founders basically can't refuse. Then the companies end up publicly traded and only work on pushing up their next quarter's earnings, thus becoming evil.

12 days ago

jmyeet

If you work for big tech now, you’re working for a defense contractor, no different to Boeing, Lockheed Martin or Northrop Grumman.

Ultimately every sufficiently large company seems to become an arms dealer, a drug dealer or a bank.

We need look no further than Lavender [1] to see where this ends up.

[1]: https://www.972mag.com/lavender-ai-israeli-army-gaza/

12 days ago

mr90210

I have been pondering this subject over the past weeks. Maybe one could compare it to people who worked for Allianz, Audi, Bayer, BMW, IBM and others before 1945.

12 days ago

lesuorac

> If you work for big tech now, you’re working for a defense contractor, no different to Boeing, Lockheed Martin or Northrop Grumman.

The difference is about 250k/yr. Kinda big.

12 days ago

jeffwask

2004-2018 - Don't Be Evil

2024 - What's our K/D ratio?

11 days ago

Clamchop

At what point does a public promise carry any legal weight whatsoever? If it carries none, then why not leave it in place and lie? If it carries some, for how long and who has standing to sue?

Genuine questions. Unlike "don't be evil," this promise has a very narrow and clear interpretation.

It would be nice if companies weren't able to just kinda say whatever when it's expedient.

12 days ago

telotortium

Absolutely no legal weight.

However, when you change a promise publicly, you signal a change in direction. It is much more honest than leaving it in place but violating it behind the scenes. If the public really cares, they can pass a law via their democratic representatives (or Google can swear a public oath before God I suppose).

12 days ago

nprateem

Because then investors won't invest.

12 days ago

Cheer2171

It's an ad.

12 days ago

calibas

Let's take it even further and replace all soldiers with AI so humans don't have to fight and die in wars anymore.

12 days ago

blindriver

OpenAI did the same thing.

12 days ago

mr90210

Here is a nice read on the subject:

https://en.wikipedia.org/wiki/List_of_companies_involved_in_...

The subject being, how far large corporations are willing to go for the sake of profit maximisation.

12 days ago

grimblee

That's the thing everyone forgets: Hitler was never a socialist; capital thrived under his Reich. Capitalists know socialism means their doom and are actively financing, forming and promoting far-right politics across the Occident, playing on people's fear of the unknown to make them vote for parties that are counter-beneficial to them.

12 days ago

resters

The Iraq wars led to trillions of dollars spent on defense. Massive defense profits led to massive lobbying, more spending.

Eventually tech and even startups follow the money. Palantir is considered cool. YC started accepting defense startups. Marc Andreessen is on X nonstop promoting conservative views of all kinds. PG becomes anti-wokism warrior.

This is how it happens. Step by step.

12 days ago

DanHulton

Next up, Google drops "Don't" from their famous mantra, "Don't Be Evil".

12 days ago

apwell23

They finally found a killer app for AI

12 days ago

megous

12 days ago

blackeyeblitzar

Other countries will use AI for weapons - shouldn’t the EU and US also do that to remain competitive?

12 days ago

jsheard

It's not exactly unheard of for certain weapons to be declared off-limits by most countries even if the "bad guys" are using them - think chemical and biological agents, landmines, cluster munitions, blinding weapons and so on. I doubt there will ever be treaties completely banning any use of AI in warfare but there might be bans on specific applications, particularly using it to make fully autonomous weapons which select and dispatch targets with no human in the loop, for similar reasons to why landmines are mostly banned.

12 days ago

nradov

Landmines and cluster munitions have been among Ukraine's most effective weapons for resisting the Russian invasion. Without those, Ukraine would likely have already lost the war. It's so bizarre how some people who face no real risks themselves think that those weapons should be declared off-limits.

12 days ago

jsheard

Nobody said they're not effective during a war, the problem is they remain effective against any random civilians who happen to stumble across them for a long time after the war is over. Potentially decades, as seen in Cambodia.

It would be a bit of a Pyrrhic victory to repel an attempted takeover of your land, only for that land to end up contaminated with literally millions of landmines because you didn't have a mutual agreement against using them.

12 days ago

nradov

People who are defending against an existential threat today don't have the luxury of worrying about contamination tomorrow. I think at this point Ukraine will take a Pyrrhic victory if the alternative is their end as a fully sovereign nation state. And let's be clear about the current situation: if Ukraine and Russia had a mutual agreement against using those weapons then Ukraine would probably have already lost. Landmines in particular are extremely effective as a force multiplier for outnumbered defenders.

12 days ago

murderfs

They're declared off-limits because the military doesn't want them. Biological and chemical weapons aren't useful to modern militaries. Landmines and cluster munitions are, so none of the countries that actually matter have banned them!

This is an excellent overview of why: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...

12 days ago

geodel

But it hasn't been decided that AI belongs on that list.

12 days ago

AvAn12

Analogy is not apt. If other countries are trying to pry into our data and systems, then the right move for google or any other tech company is to advance our defenses and make cybersecurity stronger, more available, and easier for companies and people to use. If someone is trying to hack me, it's much smarter for me to defend myself rather than try to hack the other guy back.

12 days ago

smileson2

Personally I don’t care if ML is used for weapons development assuming there are standards

It’s the companies that horde everyone’s personal information, who eroded the concept of privacy while mediating lives with false promises for trust turning into state intelligence agencies that bothers me

The incentives and results become fucked up, safe guards less likely to work I get not a lot of people care but it’s dangerous

12 days ago

impossiblefork

Yes, but there should probably be some kind of separation between the AI weapons and surveillance parts and something having to do with providing communications and search services.

It's not really appropriate for an AI weapons firm to be an integrated part of something which has access to information from which sensitive information such political beliefs etc. can be easily extracted.

It's a problem if someone is looking at sensitive user data one day and at how to categorize people so they can be put on kill lists the next.

11 days ago

Havoc

Really feels like the world is lurching towards something really dystopian all of a sudden.

12 days ago

mr90210

Yes, but not "all of a sudden". Mind you that Edward Snowden blew the whistle nearly 12 years ago.

12 days ago

howmayiannoyyou

GOOD.

Nothing is going to stop USA's adversaries from deploying AI against US citizens. Pick your poison, but I prefer to compete and win rather than unilaterally disarm and hope for goodwill and kindness from regimes that prioritize the polar opposite.

12 days ago

krainboltgreene

Yeah wouldn't want other countries to deal with us the way we've dealt with them.

12 days ago

botanical

When people ask how could IBM facilitate in the Holocaust, this is how it happens.

Google rushed to sell AI tools to Israel’s military after Hamas attack:

https://www.washingtonpost.com/technology/2025/01/21/google-...

12 days ago

yurlungur

We are in a collective mask off moment in this nation's history.

12 days ago

wayathr0w

It's surprising to me that they ever made such a pledge, considering...you know.

http://www.notechforapartheid.com/

12 days ago

ForOldHack

The closing scene of THX-1138. "Come back!" "Please!"

12 days ago

sidcool

I don't understand why this is surprising to people. Most private companies will use any proprietary technology for profits and renege on their earlier comments.

12 days ago

fallingfrog

I would say that a pledge that you only keep as long as it’s convenient doesn’t mean much. And neither does the word of the company that made it.

12 days ago

EVa5I7bHFq9mnYK

The world is at war, in case anyone hasn't noticed. We are up against ruthless murderers; time to stop pretending weapons are optional.

11 days ago

nprateem

Everyone seems to be focusing on weapons but the real story is surveillance. AI is a wet dream for dictators.

12 days ago

zeven7

Is it a canary? Does this mean the government has imposed on Google for use of its AI?

12 days ago

spencerflem

Imposed? They get DOD $$$ for this; they're the ones offering.

12 days ago

leoc

I’m not complaining about the headline, but it is necessary to remind people that really, “Google” never does anything and never makes any high-level decision. Those decisions are all made or at least approved by the two individuals who together have full control over Alphabet and Google, Larry Page and Sergey Brin. Object persistence is real: the fact that Page and Brin are not constantly drawing attention to themselves on social media does not mean that they somehow are no longer there.

11 days ago

askonomm

This makes it all the more nice that EU banned AI for use cases like these.

12 days ago

myth_drannon

It's interesting how the AI-for-weapons topic immediately brought palinazis with their own agenda (some bots, I guess?). As if Israel is some sort of military AI superpower (it's not; read accounts of the Oct 7th events) and the rest of the world's armies are still using muskets and smoke signals.

12 days ago

gerdesj

Why on earth would a for profit company refuse a potential line of profit?

They already dumped "don't be evil" many years ago, and they are now all in on fuck the poor and fuck the rest: I'm making profits and all is fine.

Google makes money and they don't appear to care how - it's all about the money.

12 days ago

janalsncm

The problem with this is that if companies are just profit maximizers, then one of the things they should do is realign the government. After all, a friendlier government can help decrease regulation and increase incentives.

Plus, in a healthy economy if everyone is bribing the government shouldn’t it all cancel out? Well it turns out the poor don’t bribe the government very often, so they are easily ignored.

And suddenly, when the government is co-opted into believing anything that gets in the way of “business” is bad, they figure out that money that could be spent on social services could also be spent on corporate tax incentives! Eventually the entire country becomes one big profit maximizer.

12 days ago

asdfman123

Companies already are just profit maximizers and they already have done a lot to realign the government.

Of course it's not as bad as you describe, because it's not as simple as you describe.

12 days ago

danans

> The problem with this is that if companies are just profit maximizers then one of the things it should do is to realign the government

What do you think is happening right now?

12 days ago

janalsncm

What is happening is the US is reaping what it sowed 45 years ago with Reagan-era privatization and short-term thinking. It is structurally incapable of executing plans which take more than 2-4 years. So any adversary can outmaneuver the US by simply planning on longer timelines.

Oh look, China has 5 year plans.

Of course the efficiency of a one party state comes at the cost of stability: there are no internal checks on corruption. A two party state is more stable (the US has lasted 240 years) but not infinitely stable.

11 days ago

danans

But at this moment what's happening is that Capital is completing what Reagan started by finalizing the realignment of government totally toward its interests, at the expense of labor and the precariat. But sure, it all started a while back.

11 days ago

tmnvdb

Google is a company that relies to a large extent on users trusting them with their data and on advertisers wanting to be associated with them. Hence they have a stronger incentive than some other companies to avoid being seen as an evil corp. This is also important for recruitment, as many engineers do not want to (be seen to) work at a privacy-invading evil corp, so it is important that Google creates plausible deniability for those engineers as well.

12 days ago

burningChrome

>> Why on earth would a for profit company refuse a potential line of profit?

On the one hand I think they were afraid many of their employees might protest again like they have in the past, signaling that Google isn't that awesome, progressive place everybody should work. This would mean they could be potentially losing some of the top notch SV talent that they are in constant competition with from other companies.

On the other hand, they've made it clear they aren't above firing employees who do protest as they just did when 28 employees were fired over the recent Nimbus Project contract worth an estimated $1.2B dollars with Israel:

They staged sit-in protests in Google's offices in Silicon Valley, New York City and Seattle – more than 100 protestors showed up. A day later, Google fired Montes and 27 other employees who are part of the No Tech for Apartheid group.

https://www.npr.org/2024/04/19/1245757317/google-worker-fire...

I think they try too hard to toe the line between the two, but like you said, it's clear they're really all about the money.

12 days ago

gerdesj

"they're really all about the money."

When you are publicly quoted, you are mostly lost to reason and profit is everything. That's why you do it.

If you have other intentions, then go with a not-for-profit (I'm sure most countries have a similar structure) or a similar setup.

12 days ago

scarface_74

As if anyone working for an adtech company thought they were changing the world for the better.

I’m sure they are clutching their pearls while waiting for their money to be deposited into their bank account and their RSUs to be deposited into their brokerage accounts.

Yes I did a stint at BigTech. But I didn’t lie to myself and think the company I worked for was above reproach as my adult son literally peed in bottles while delivering packages for the same company.

12 days ago

bbqfog

This is the case for boycotts, so a for-profit company loses when they make immoral decisions that destroy the brand and impact the bottom line.

12 days ago

scarface_74

Yes, because this is crossing the line…

12 days ago

smeeger

AGI and AI weapon systems lead to certain annihilation of the human race regardless of who is first to implement them. the only winner is the country that abstains until the very end, because at least that country will perish with its dignity intact. i refuse to support AI

12 days ago

franczesko

And some still claim blocking ads on YouTube is immoral..

12 days ago

est

That's what Google promises, not Alphabet Inc.

12 days ago

486sx33

Boo, don’t be evil google! Stop it!

10 days ago

outside1234

Probably more honest this way at least

12 days ago

sanatgersappa

If there is nothing unethical about weapons or surveillance, then there should be nothing unethical about using AI for that purpose.

12 days ago

kyletns

They'll say it's for national defense against other countries, but it's only a matter of time before these weapons and surveillance tools are deployed on American citizens. Foucault's boomerang.

12 days ago

courseofaction

Eat the rich, before they eat you.

12 days ago

m3kw9

Is a pledge till the $$$ shows up

12 days ago

ripped_britches

Doesn’t this indicate that the US DoD likely reached out to Google for a contract to develop AI for some purpose?

Otherwise why bother?

12 days ago

ein0p

Google in 2000: "Do no evil". Google in 2025: "Genocide is profitable". I wonder what Google users outside the US think about this in particular? Our "national security" could be their "national danger" after a single presidential term. Do they want to keep giving Google their money?

12 days ago

hedora

I don’t know anyone in the US that supports genocide.

The Trump voters I’ve heard seem to not understand what is currently happening with the administration. Similarly, the pro-Israel folks seem to not realize how bad the ethnic cleansing campaign is.

Anyway, the US is significantly less safe than it was last month.

11 days ago

ein0p

You don't know many Christians then. Some of the more religious folks I know are 100% in favor of sending Palestinian kids through woodchippers. As are both wings of the US government. And people who aren't in favor of that can always be brought to heel if they are in position of any authority, as illustrated by Musk's recent forced conversion to the cause, and anti-1st amendment bipartisan legislation currently sailing through Congress. "Rules based world order" is not what people think it is.

11 days ago

dartos

At least they let us know

12 days ago

torlok

If you're going to be morally bankrupt, why not just keep the pledge and lie?

12 days ago

Lance_ET_Compte

I don't believe them for a millisecond.

12 days ago

oulipo

Shameful

12 days ago

pbiggar

An important part of this discussion is how Google's AI is used as part of the genocide in Gaza. Read here for more details about the role of AI there: https://www.972mag.com/lavender-ai-israeli-army-gaza/

12 days ago

tmnvdb

That article does not mention Google, and Google did not develop the Lavender system used by the Israelis.

12 days ago

pbiggar

I didn't say Google developed Lavender. The article describes how AI is used by Israel in the genocide. On what cloud platform does Israel's military run?

12 days ago

BrenBarn

I'm shocked, shocked!

All "pledges" without some kind of enforceable legal foundation are just meaningless hot air.

12 days ago

rnd0

So they've now zoomed past 'don't be evil' right to turning into Snidely freakin' Whiplash.

12 days ago

cyberax

Google: "Be evil".

12 days ago

smeeger

are you fucking kidding me?

12 days ago

2030ai

[dead]

12 days ago

lyzml_AF

[dead]

11 days ago

khana

[dead]

12 days ago

TheRealNGenius

[dead]

12 days ago

smileson2

[flagged]

12 days ago

dhdjruf

[flagged]

12 days ago

kjsingh

there are no profits like war profits; there is nothing usual like people dying

12 days ago

uejfiweun

A day late and a dollar short. The future of the US tech industry belongs to those who weren't interested in performative woke nonsense like this during the last decade.

12 days ago

worik

The future of the tech industry belongs to China

12 days ago

janalsncm

Future? The present day tech industry belongs to China unless you narrowly define it as software or pharmaceuticals.

https://itif.org/publications/2024/09/16/china-is-rapidly-be...

For example the most advanced batteries in the world are designed and manufactured in China.

12 days ago

bbqfog

Protesting weapons manufacturing has been going on long before reactionaries started fearing the "woke" boogeyman. People protested Dupont for making napalm during the Vietnam war.

12 days ago

uejfiweun

There's a big difference between outsider activists protesting the actions of a company, and the actual leadership of a company choosing a less profitable path in order to seem more morally pure.

12 days ago

bbqfog

They're not "outsider activists", they're customers affecting the bottom line.

11 days ago

aprilthird2021

How is it woke nonsense to not want to create a weapon that probabilistically determines whether a civilian looks close enough to a bad guy to missile-strike them?

12 days ago

tmnvdb

This sentiment ignores the reality on the ground in favor of performative ideological purity - civilians are already getting blown up all the time by systems that do not even attempt to make any distinction between civilians and soldiers: artillery shells, mortars, landmines, rockets, etc.

12 days ago

siltcakes

The reality on the ground is that one of the very first uses of AI weapons was to target civilians in Gaza:

> Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity.

https://www.972mag.com/lavender-ai-israeli-army-gaza/

12 days ago

tmnvdb

Lavender is not an autonomous weapon, but if you want to seriously consider whether Lavender is a good thing (I am undecided), you need to compare the effect of this operation with the Lavender system against the effect of the same operation without it. Otherwise you run the risk of making arguments that in the end just boil down to 'weapons bad'.

12 days ago

pixl97

And indiscriminate weapons have a cost that can slow people from using them.

Imagine you have a weapon that can find and kill all the 'bad guys'. Would you not be in a morally compromised position if you didn't use it? You're letting innocents die every moment you don't.

* warning definitions of bad guys may differ leading to further conflict.

12 days ago

tmnvdb

The logic of this argument implies we should develop weapons with maximal collateral damage to deter their usage.

12 days ago

pixl97

Which raises the question of whether we can escape the Red Queen hypothesis.

Personally I don't think we can as a species.

12 days ago

aprilthird2021

> civilians are already getting blown up all the time by systems that do not even attempt to make any distinction between civilians and soldiers: artillery shells, mortars, landmines, rockets, etc.

Right, and every time that happens because of miscalculations by our government, they lose the very real and important public license to continue. Ultimately, modern wars led by democracies are won by public desire to continue them. The American public can become very hesitant to wage war very fast if we unleash Minority Report on the world for revenge.

12 days ago

captainbland

Or better yet, misinterpret who the target is even supposed to be because of a hallucination.

12 days ago

worik

> Or better yet, misinterpret who the target is even supposed to be because of a hallucination.

Who, in that business, cares?

AI will provide a fig leaf for the indiscriminate large scale killing that is regularly done since the start of industrialised warfare.

Using robots spares drone pilots from PTSD.

From the perspective of the murderous thugs that run our nations (way way before the current bunch of plainly bonkers ones in the USA), what is not to like?

Whilst there are all sorts of quibbles about weapons generally being evil, this is evil.

12 days ago

captainbland

AI driven drones in particular seem like ideal tools for carrying out a genocide: identify an ethnicity based off some physical characteristics, kill. No paperwork, no transport, no human conscience. Just manufacture, deploy and instruct at scale. Sure, it might get it wrong sometimes but you've got to break a few eggs...

12 days ago

uejfiweun

Because "not wanting" to do something that would make the company money due to moral considerations that aren't shared by your competitors is idiotic.

12 days ago

aprilthird2021

Ah, the Nuremberg Defense

12 days ago

uejfiweun

Lol, is that all you people can ever come up with? "I disagree, therefore you're a Nazi?" Hasn't the events of the past 6 months shown you that this dumb "gotcha" style debating doesn't work?

12 days ago

aprilthird2021

I couldn't do anything because of "moral considerations" my peers don't share. That's literally the Nuremberg Defense. Maybe pointing this out doesn't "work" but it is true. Do what you will with that, I'm not here to "work" on you or anything like that. You're a free person, as am I

9 days ago

zeroCalories

Who is that?

12 days ago

uejfiweun

Palantir, for one?

12 days ago

zeroCalories

By what measure? Google has a market cap that's 10x Palantir, and the gap in revenue/profit is even more massive. They aren't in the same league at all.

12 days ago

uejfiweun

The measure is stock returns in the last 5 years. The whole point of public companies is to generate wealth for shareholders and Google just simply isn't really delivering on the same level as Palantir.

In fact, when you look at the last decade of Google saying they're an "AI first" company and literally inventing transformers, and look at what their stock price has done and how they've performed in relation to other major companies involved in this current AI spring, there is simply no way not to be disappointed.

12 days ago

zeroCalories

Plenty of companies produce good returns, but that doesn't make them any kind of leader. FAANG still controls the market, pays the highest salaries, and produces the most research. Other rising stars like OpenAI and ByteDance are not uniquely evil either. Not saying FAANG won't fade away like IBM or Oracle, but I don't think it would be due to their unwillingness to be like Palantir.

12 days ago

righthand

Your ad dollars and marketing interests supporting the slaughter of the national enemy.

12 days ago

energy123

Many will hate to hear this but the only solution is one world government or at least a unipolar order that reduces the survival need to participate in arms races. Arms race dynamics between nations will be the end of our species.

12 days ago

asdfman123

I think having nations competing against each other is a good thing. Governments become corrupted and collapse: it would be a shame if the only world government fell into the hands of a dictator.

That being said the only way I could imagine we'd get a single world order is one country dominating everyone else, just like superpowers and regional powers dominate their respective parts of the globe.

Never ever ever are people just going to give up their control out of some form of "enlightenment" that has never existed among the human race.

12 days ago

energy123

Would you have said the same thing to people living in warring tribal societies if they hoped that local tribes would cease existing and coalesce together into a single nation state? That's bad because it reduces competition, right? But overall it was very good because tribal conflict and barriers to movement and trade act as a massive tax on anything we would both call good.

Unprecedented levels of peace in Europe happened not because of competing nation states, but in spite of that competition. It was the unipolar control exerted by the US and the destruction of the Soviet Union and the creation of the EU (a proto pan-European state) that caused the 1990s. There was one and only pole -- the West. Not 2 (or more) different adversaries with opposed interests engaging in an arms race.

As we go back to a world with more fragmented and distributed power, we will get more war and more arms races. An especially toxic setup in the age of AI.

This doesn't have to be a binary, anyways. You could set it up as some kind of federation where there's still economic competition. Just not military competition.

12 days ago

asdfman123

There is a difference between consolidation to a few different powers and consolidation to just one.

Also, AFAIK all of those nations consolidated because of military conquest. Countless European wars and empires.

The EU isn't like that, but they're an alliance and not one country. You can't just leave a country like England did.

12 days ago

_bin_

America is the only nation that currently has consolidated global power behind an even vaguely free nation.

and yes, America has done that for the "pax Americana" period. unfortunately we were short-sighted and allowed people too much free rein to be stupid and anti-American.

12 days ago

kQq9oHeAz6wLLS

Unfortunately, the only way to keep things "fair" and "equitable" so nobody revolts is to reduce everything to the lowest common denominator.

In other words, everything would be terrible, but at least it'd be terrible for everyone.

Until we realized we could sacrifice some for the betterment of the rest, find a way to rationalize it, and then we throw it all out the window.

12 days ago

osmsucks

Some country leaders do think this. But they're very particular about having that one world government named "USA" or "Russia".

12 days ago

_bin_

correct. this is why i support maintaining the American-run world order by all means we have at our disposal. it's both the best outcome for our citizens (therefore our government should pursue it) and the best outcome for the world at large. we will never accept (nor should we) the sort of one-world power that would be necessary to block defection so us running the thing is the least-bad option.

12 days ago