Google CEO says more than a quarter of the company's new code is created by AI

608 points
9 days ago
by S0y

Comments


asdfman123

I work for Google, and I just got done with my work day. I was just writing what I guess you'd call "AI generated code."

But the code completion engine is basically just good at finishing the lines I'm writing. If I'm writing "function getAc..." it's smart enough to complete to "function getActionHandler()", and maybe suggest the correct arguments and a decent jsdoc comment.

So basically, it's a helpful productivity tool, but it's not doing any engineering at all. It's probably about as good as, maybe slightly worse than, Copilot. (I haven't used it recently though.)
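
For readers who haven't used this kind of tool, a rough sketch of the sort of completion being described; the function name, parameters, and JSDoc below are invented for illustration, not actual output from Google's internal tooling:

    // You type "function getAc" and the suggestion fills in the rest of the
    // declaration, plausible arguments, and a JSDoc stub.

    /**
     * Returns the handler registered for the given action type.
     * @param {string} actionType The action to look up.
     * @param {Map<string, Function>} handlers Registered handlers keyed by type.
     * @return {Function|undefined} The matching handler, if any.
     */
    function getActionHandler(actionType, handlers) {
      return handlers.get(actionType);
    }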

8 days ago

NotAnOtter

I also worked at Google (until last Friday). Agree with what you said. My thoughts are:

1. This quote is clearly meant to exaggerate reality; they are likely counting things like fully automated CLs/PRs, which have been around for a decade, as "AI generated".

2. I stated before that if a team of 8 utilizing things like Copilot is equally as productive as a team of 10, it's fair to say "AI replaced 2 engineers", in my opinion. More importantly, tech leaders would be making this claim if it were true. Copilot and its clones have been around long enough now for the evidence to be in, and no one is stating "we've replaced X% of our workforce with AI" - therefore my claim is (by 'denying the consequent'): using Copilot does not materially accelerate development.

8 days ago

ahmedfromtunis

> no one is stating "we've replaced X% of our workforce with AI"

Even if that's been happening, I don't think it would be politically savvy to admit it.

In today's social climate claiming to replace humans with AI would attract the wrong kind of attention from politicians (during an election year) and from the public in general.

This would be even more unwise to admit for a company like Google, which is an "AI producer". They may leave such language for closed meetings with potential customers during sales pitches, though.

8 days ago

whywhywhywhy

> and from the public in general

Don't think the public will be that concerned about people in Google's salary bracket losing their jobs.

8 days ago

jl6

It’s a disservice to the public to assume they aren’t capable of understanding why AI job losses might be concerning even if they aren’t directly impacted. Most people aren’t so committed to class warfare that they will root for the apocalypse as long as it stomps a rich guy first.

8 days ago

wavewrangler

You mean poor person. As long as it stomps a poor person. The rich don’t have a habit of getting stomped. They direct other poor people to stomp their contemporaries. The poor don’t have a chance.

8 days ago

whatshisface

I don't think a lot of people realize how few people are "rich" in the sense of not being impacted by the labor market, or how virtually all of them are retirees. CFOs aren't looking forward to a massive shift in the labor market for accountants any more than CPAs. Warren Buffett has a "job," he writes those letters for BH and oversees the firm's investments at a high level... and most of the people who live off of investments have children in the workforce. Even most people whose children live off of their investments have kids in the (nonprofit) workforce.

8 days ago

tehjoker

Software engineers and grocery store workers are in different income brackets, but in the same class (labor/proletarian). It is managers, executives, and investors that are in the capitalist class. Class is determined by your relationship to production.

8 days ago

barrkel

Software engineer salaries and stock compensation can be enough to shift alignment somewhat, especially after many years of capital accumulation.

8 days ago

tehjoker

If you make the majority of your earnings from passive income, or you do not need to work to live, you are more part of the leisure class.

8 days ago

barrkel

Two things: capitalists don't not work; and if you have a sizeable portfolio, you may not need to work and may earn plenty of passive income, yet still work because you add more value at the margin working than fiddling with stock allocations or angel investing or whatnot (vs index funds etc.).

8 days ago

datavirtue

It's easy to get a capitalist to come out of retirement. Most of the time you just have to ask them to take a look at your business. Before you know it they accept a board position and shortly thereafter they are running point as President.

7 days ago

tehjoker

For an illustrated example, you can watch Succession

5 days ago

DAGdug

I’ve switched from manager to IC and vice-versa a few times at FAANG. Didn’t strike me as moving between the capitalist and proletariat classes, lol!

4 days ago

ytss

The public might, though, be concerned that if they are being replaced, many in other positions at other companies will soon be replaced as well.

8 days ago

darth_avocado

That’s not how the mind works. People cheered when Elon fired 80% of the Twitter staff. No one cares when people with high paying jobs suffer.

8 days ago

mmcdermott

The people who cheered about the firing of 80% of the Twitter staff largely believed (rightly or wrongly) that they were being adversely affected by them. While Google may be seen with more wariness in tech circles, I don't think the average person believes that Google is actively harming them (again, rightly or wrongly).

8 days ago

ahmedfromtunis

These aren't the same types of events. In Twitter's case, it was a one-off act, caused by one-off circumstances. With Google, it'd be more of a precursor to a new trend that might soon take root and impact me or those I care about.

8 days ago

almatabata

I think twitter is an outlier because people hated the employees already for various reasons.

For example they thought that twitter had a bloated workforce because of videos like this (https://www.youtube.com/watch?v=buF4hB5_rFs).

And a lot of people heavily disagreed with how they handled moderation. You can take things like the Hunter Biden laptop suppression, or, in the funny category, people getting banned for saying "learn to code" (https://reason.com/2019/03/11/learn-to-code-twitter-harassme...).

Take a random company without controversies and you will find less vitriol about its employees getting fired.

8 days ago

pjmlp

No one cares about the impact of supermarket self-checkout on employees, until their own employer does something similar.

8 days ago

alsetmusic

I care as a consumer who hates standing in long lines. My former bank branch had thirteen teller stations and two tellers. This wasn't on a bad day. This was for years.

5 days ago

whatshisface

People in Google salary brackets get jobs at Google-1 salary brackets, pushing junior people at Google-1 to Google-2, all the way down to IT departments at non-tech firms. This impacts everybody who's in the industry or capable of switching.

8 days ago

ahmedfromtunis

Why would the general public care about Google employees? Google is, however, a major SaaS provider, and people might start to worry that their employer will soon buy a subscription to whatever Google used to automate jobs.

8 days ago

wbl

The bank tellers didn't go away: they just became higher paid and higher skilled when cash management was no longer the job.

8 days ago

burningChrome

>> Even if that's been happening, I don't think it would be politically savvy to admit it.

When I was working in RPA (robotic process automation) about 7 years ago, we were explicitly told not to say "You can reduce your team size by having us develop an automation that handles what they're doing!"

Even back then we were told to talk about how RPA (and by proxy AI) empowers your team to focus on the really important things. Automation just reduces the friction to getting things done. Instead of doing 4 hours of mindless data input or moving folders from one place to the other, automation gives you back those four hours so your team can do something sufficiently more important and focus on the bigger picture stuff.

Some teams loved the idea. Other leaders were skeptical and never adopted it. I spent the majority of those three years trying to sell them on the idea that automation was good, and very little time actually coding. It's interesting seeing the paradigm shift and seeing this stuff everywhere now.

8 days ago

aleph_minus_one

> Even back then we were told to talk about how RPA (and by proxy AI) empowers your team to focus on the really important things.

As a non-politically savvy person ;-) I have a feeling that this is a similarly dangerous message, since what prevents many teams from focusing on really important things is often far too long meetings with managers and similar "important" stakeholders.

8 days ago

ethbr1

The reason you don't lead with headcount reduction is two-fold.

1. Almost every business has growing workload. That means reassigning good employees and not hiring new headcount, not firing existing headcount. Unipurpose, low-value offshore teams are the only ones who get cut (e.g. doing "{this} for every one of {these}" work).

2. Most operational automation is impossible to build well without deep process expertise from the SME currently performing it. If you fire that person immediately after automating their task, what do you think the next SME tells you, when you need their help?

Successfully scaling an operational automation program therefore relies on avoidance of additional headcount (aka improving the volume:employee ratio) and on value measurement (FTE-equivalent time savings) to justify and measure the effort.

8 days ago

lenerdenator

> I don't think it would be politically savvy to admit it.

Would it be? Do they care?

Sam Altman's been talking about how GenAI could break capitalism (maybe not the exact quote, but something similar), and these companies have been pushing out GenAI products that could obviously and easily be used to fake photographic or video evidence of things that have occurred in the real world. Elon's obsessed with making an AI that's trained to be a 20-year-old male edgelord from the sewer pits of the internet.

Compared to those things, "we've replaced X% of our workforce with AI" is absolutely anodyne.

8 days ago

agentultra

100%.

Altman tells anyone who will listen that monopolies are the only path to success in business. He has a lot riding on making sure everyone is addicted to AI and that he's the one selling the shovels.

Google isn’t far off.

Most capitalists have this fantasy that they can reduce their labour expenses with AI and continue stock buy-backs and ever-increasing executive payouts.

What sucks is that they rely on class divisions so that people don’t feel bad when the “overpaid” software developers get replaced. Problem is that software developers are also part of the proletariat and creating these artificial class divisions is breaking up the ability to organize.

It’s not AI replacing jobs, it’s capital holders. AI is just the smoke and mirrors.

8 days ago

ahmedfromtunis

Sam's company is not a multi-trillion dollar behemoth that employs hundreds of thousands and has a practical (near-)monopoly on huge swaths of the digital economy.

8 days ago

rty32

> I don't think it would be politically savvy to admit it.

Depends on who you ask.

If Trump wins and Elon Musk actually gets a new job, they would be bragging about replacing humans with AI all day long. And corporates are going to love it.

Not sure about what voters think though. But the fact that most of these companies are in California, New York etc means that it barely matters.

8 days ago

petre

Yup, just like full self driving and ending the war in Ukraine in 24 hours.

8 days ago

sfink

I find the boast about ending the war to be reasonably likely -- if it is clear the US is switching sides in the conflict, a negotiated capitulation could happen pretty quickly.

In a similar vein, solving world hunger is closer today than it's ever been. The previous best hope was global thermonuclear war, but honestly that would leave enough survivors as to be mostly ineffective, and much more likely to have the opposite result. Severe climate change has a better shot at fully eliminating [human] hunger.

8 days ago

ulfw

Corporations will soon have to realise the hard reality that when masses of humans have been replaced, there won't be masses of humans with salaries to buy said corporations' goods anymore.

8 days ago

datavirtue

AI is socialism, and it's unstoppable. People are trying to stop progress and go back to the old days. Nothing about the universe permits this.

A new economy is forming and there is nothing that can stop it without causing major, unintended fallout.

7 days ago

burningChrome

>> they would be bragging about replacing humans with AI all day long.

Has either bragged about this at all?

The only thing I've heard floated is Musk running a "government efficiency commission", which I just assumed meant he would be looking for ways to gut a lot of the never-ending, never-dying government programs. I've never heard him say the commission's goal was to replace people with AI.

https://www.newsnationnow.com/politics/2024-election/trump-m...

The former president said such an audit would be to combat waste and fraud and suggested it could save trillions for the economy.

As the first order of business, Trump said that this commission will develop an action plan to eliminate fraud and improper payments within six months.

8 days ago

datavirtue

Trump and Musk will get bored quickly if elected. Once in office your power is checked.

7 days ago

tjahg

[flagged]

8 days ago

lenerdenator

That would be the way someone with no real awareness of the philosophies and realities of the two parties in the US would see it. And to be fair, that's a good description of a large chunk of the American electorate.

But you can't have a guy who literally used to relieve himself into a golden toilet take over your party and be anything but the party of big business and billionaires.

8 days ago

[deleted]
8 days ago

Thorrez

>Despite widespread rumors, there is no verified evidence that Trump actually owns a gold toilet.

https://royaltoiletry.com/does-trump-have-a-gold-toilet-unpa...

8 days ago

lenerdenator

Fair enough.

Still a guy who operated multiple luxury hotel and golf course properties that would laugh a working man out the front door if he asked for an affordable room.

8 days ago

onion2k

no one is stating "we've replaced X% of our workforce with AI"

That's only worth doing if you're trying to cut costs though. If the company has unmet ambitions there's no reason to shrink the headcount from 10 to 8 and have the same amount of output when you can keep 10 people and have the output of 12 by leveraging AI.

8 days ago

hyperpape

Almost all the big tech companies have had layoffs over the past several years. I think it’s safe to say cost cutting is very much part of their goal.

8 days ago

lupire

But the specific roles being laid off are arbitrary, and the overall goal of headcount reduction is driven by macroeconomic factors (I'm being generous there), not based on new efficiencies.

Note the difference between "cost cutting" (do less, to lower cost) and "efficiency" (do the same, but with less cost).

8 days ago

theptip

The goal of these cost cutting initiatives is not an absolute reduction in cost, but a relative one. They needed to show an improvement in operating margin, ie % of revenue spent on engineers.

If your engineers become 20% more efficient then your margins are better and your problem is solved. (Indeed if you have tech that can make any engineer 20% more efficient then you are back in the game of hiring as many as you can find, as long as each added engineer brings in enough additional revenue.)

8 days ago

[deleted]
8 days ago

[deleted]
8 days ago

ktnaWA

Thanks, that is how I read the announcement. The powers that be decided that there must be some quota to be fulfilled, and magically that quota was fulfilled.

AI engineers will not yet get a Nobel prize for putting everyone out of work.

8 days ago

pj_mukh

"we've replaced X% of our workforce with AI"

Most likely what is actually happening is that the X% of workforce you would lay off is being put to other projects and Google in general can take on X% more projects for the same labor $$. So there is no real reason to make that particular "replaced" statement.

8 days ago

Sparkyte

Google has to sell its AI somehow. The problem is that businesses will see this and want to cut headcount because they go, "Well I guess AI can do it for freeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee!". Nope, no way is it writing code for free.

8 days ago

wcoenen

> including things like fully automated CL/PR's which have been around for a decade

I haven't seen this yet so I'm intrigued. Is this a commercial product, or internal tooling?

8 days ago

OkGoDoIt

I’m assuming this refers to things analogous to dependabot on GitHub where maybe it automatically updates a library version reference and runs the tests and creates a PR if everything seems good, or similarly for fixing style issues or other stuff that is pretty trivial and has good test coverage.

When you maintain an open source project on GitHub you will occasionally get some open source automated bot that submits a PR to do things like this without you even asking, and I’m sure there’s plenty more you can sign up for or implement yourself.

I wouldn’t really call it AI, but it is automated. I agree with the parent comment that a journalist trying to push an angle would probably lump it in as AI in order to make the number seem larger.

8 days ago

NotAnOtter

It's common at most mega-corps like Google. For example, if a utility function in an internal library was deprecated and replaced with a different function that has the same functionality, a team might write a script which generates hundreds/thousands of PRs to make the migration to the new function.

You don't want a single PR that does that, because that would affect thousands of projects, and if something goes wrong with a single one, the whole PR needs to be rolled back.
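
A minimal sketch of what such a migration script might look like, assuming a Node environment; the deprecated/replacement function names and the per-project layout are invented, and the real internal tooling is certainly more sophisticated:

    const fs = require('fs');
    const path = require('path');

    // Recursively yield every file under a directory.
    function* walk(dir) {
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const full = path.join(dir, entry.name);
        if (entry.isDirectory()) yield* walk(full);
        else yield full;
      }
    }

    // Rewrite the deprecated call in one project and return the touched files.
    // The caller would open one CL/PR per project from these edits, so a
    // failure in one project can be rolled back without affecting the rest.
    function migrateProject(projectDir) {
      const changed = [];
      for (const file of walk(projectDir)) {
        if (!file.endsWith('.js')) continue;
        const src = fs.readFileSync(file, 'utf8');
        const out = src.replaceAll('oldUtil.formatDate(', 'newUtil.formatDate(');
        if (out !== src) {
          fs.writeFileSync(file, out);
          changed.push(file);
        }
      }
      return changed;
    }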

6 days ago

nlehuen

I also work at Google and I agree with the general sentiment that AI completion is not doing engineering per se, simply because writing code is just a small part of engineering.

However in my experience the system is much more powerful than you described. Maybe this is because I'm mostly writing C++ for which there is a much bigger training corpus than JavaScript.

One thing the system is already pretty good at is writing entire short functions from a comment. The trick is not to write:

  function getAc...
But instead:

  // This function smargls the bleurgh
  // by flooming the trux.
  function getAc...
This way the completion goes much farther and the quality improves a lot. Essentially, use comments as the prompt to generate large chunks of code, instead of giving minimum context to the system, which limits it to single line completion.
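As a hedged illustration of the pattern (an invented example, not actual completion output from the internal tooling), writing the descriptive comment first can let the model produce the whole body in one go:

    // Returns a debounced version of fn that waits delayMs after the last
    // call before invoking fn with the most recent arguments.
    function getDebounced(fn, delayMs) {
      let timer = null;
      return function (...args) {
        clearTimeout(timer);
        timer = setTimeout(() => fn.apply(this, args), delayMs);
      };
    }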
8 days ago

Aachen

This type of not having to think about the implementation, especially in a language that we've by now well-established can't be written safely by humans (including by Google's own research into Android vulnerabilities if I'm not mistaken), at least with the current level of LLM, worries me the most

Time will tell whether it outputs worse, equal, or better quality than skilled humans, but I'd be very wary of anything it suggests beyond obvious boilerplate (like all the symbols needed in a for loop) or naming things (function name and comment autocompletes like the person above you described)

8 days ago

munksbeer

> worries me the most

It isn't something I worry about at all. If it doesn't work and starts creating bugs and horrible code, the best places will adjust to that and it won't be used or will be used more judiciously.

I'll still review code like I always do and prevent bad code from making it into our repo. I don't see why it's my problem to worry about. Why is it yours?

8 days ago

Aachen

Because I do security audits

Functional bugs in edge cases are annoying enough, and I seem to run into these regularly as a user, but there's yet another class of people creating edge cases for their own purposes. The nonchalant "if it doesn't work"... I don't know whether that confirms my suspicion that not all developers are aware of (as a first step; let alone control for) the risks

8 days ago

twoWhlsGud

And especially if it generates bugs in ways different from humans - human review might be less effective at catching it...

8 days ago

xp84

It generates bugs in pretty similar ways. It’s based on human-written code, after all.

Edge cases will usually be the ones to get through. Most developers don’t correctly write tests that exercise the limits of each input (or indeed have time to both unit test every function that way, and integration test to be sure the bigger stories are correctly working). Nothing about ai assist changes any of this.

(If anybody starts doing significant fully unsupervised “ai” coding they would likely pay the price in extreme instability so I’m assuming here that humans still basically read/skim PRs the same as they always have)

6 days ago

mbfg

Except that no one trusts Barney down the hall that has stack overflow open 24/7. People naturally trust AI implicitly.

8 days ago

caeril

It's worrying, yes, but we've had stackoverflow copy-paste coding for over a decade now already, which has exactly the same effects.

This isn't a new concern. Thoughtless software development started a long time ago.

8 days ago

Aachen

As a security consultant, I think I'm aware of security risks all the time, also when I'm developing code just as a hobby in my spare time. I can't say that I've come across a lot of stackoverflow code that was unsafe. It happened (like unsafe SVG file upload handling advice), and I know of analyses that find it in spades, but I personally correct the few that I see (I've got enough stackoverflow rep to downvote, comment, or even edit without the user's approval, though I'm not sure I've ever needed that). The ones found in studies may be in less-popular answers that people don't come across as often, because otherwise we should be seeing more of them, both personally and in customers' code

So that's not to say there is nothing to be concerned about on stackoverflow, just that the risk seems manageable and understood. You also nearly always have to fit it to your own situation anyway. With the custom solutions from generative models, this is all not yet established and you're not having to customise (look at) it further if it made a plausible-looking suggestion

Perhaps this way of coding ends up introducing fewer bugs. Time will tell, but we all know how many wrong answers these things generate in text as well as what they were trained on, giving grounds for worry—while also gathering experience, of course. I'm not saying to not use it at all. It's a balance and something to be aware of

I also can't say that I find it to be thoughtless when I look for answers on stackoverflow. Perhaps as a beginning coder, you might copy bigger bits? Or without knowing what it does? That's not my current experience, though

8 days ago

miki123211

This is a good idea even outside of Google, with tools like copilot and such.

Often when I don't know exactly what function / sequence of functions I need to achieve a particular outcome, I put in a comment describing what I want to do, and Copilot does the rest. I then remove the comment once I make sure that the generated code actually works.

I find it a lot less flow-breaking than stackoverflow or even asking an LLM.

It doesn't work all of the time, and sometimes you do have to Google still, but for the cases it does work for, it's pretty nice.

8 days ago

Aachen

Why remove the comment that summarises the intent for humans? The compiler will ignore your comment anyway, so it's only there for the next human who comes along and will help them understand the code

8 days ago

miki123211

Because the code, when written, is usually obvious enough.

Something like:

  query = query.orderBy(field: "username", Ordering.DESC)
Doesn't need an explanation, but when working in a language I don't know well, I might not remember whether I'm supposed to call orderBy on the query or on the ORM module and pass query as the argument, whether the kwarg is called "field" or "column", whether it wants a string or something like `User.name` as the column expression, how to specify the ordering and so on.
7 days ago

randomdata

Like he says, the "comment" describes what he wants to do. That's not what humans are interested in. The human already knows "what he wants to do" when they read the code. It's things like "why did he want to do this in the first place?" that are lacking in the code, and that's the information worth adding in a comment for the sake of humans.

Remember, LLMs are just compilers for programming languages that just so happen to have a lot of similarities with natural language. The code is not the comment. You still need to comment your code for humans.

8 days ago

JohnFen

> Like he says, the "comment" describes what he wants to do. That's not what humans are interested in.

When I'm maintaining other people's code, or my own after enough time has gone by, I'm very interested in that sort of comment. It gives me a chance to see if the code as written does what the comment says it was intended to do. It's not valuable for most of the code in a project, but is incredibly valuable for certain key parts.

You're right that comments about why things were done the way they were are the most valuable ones, but this kind of comment is in second place in my book.

8 days ago

mithametacs

Or for something that needs, say, a quick mathematical lemma or a worked example. A comment on the "what" is fantastic there.

8 days ago

qwertox

It's often unnecessarily verbose. If you read a comment and glance at the code that follows, you'll understand what it is supposed to do. But the comment you're giving as an instruction to an LLM usually contains information which will then be duplicated in the generated code.

8 days ago

Aachen

I see. Might still be better to have a verbose comment than no comment at all, as well as a marker of "this was generated" so (by the age of the code) you have some idea of what quality the LLM was in that year and whether to proofread it once more or not

8 days ago

lupire

External comments are API usage comments. LLM prompts are also implementation proposals.

Implementation comments belong inside the implementation, so they should be moved there, if not deleted.

8 days ago

cryptonym

Next human will put the code in a prompt and ask what it does. Chinese Whispers.

8 days ago

Aachen

I tried making a meme some months ago with exactly this idea, but for emails. One person tells an LLM "answer that I'm fine with either option" and sends a 5 KB email; the recipient then gets the automatic summary function to tell them (in a good case) "they're happy either way" or (in a bad case) "they don't give a damn". It didn't really work, too complex for meme format as far as my abilities went, but yeah, the bad-translator effect is something I'm very much expecting from people who use an LLM without disclosing it

8 days ago

_heimdall

If someone is going to use an LLM to send me an email, I'd much rather them just send me the prompt directly. For the LLM message to be useful the prompt would have included all the context and details anyway, I don't need an LLM to make it longer and sound more "professional" or polite.

8 days ago

Aachen

That is actually exactly my unstated point / the awareness I was hoping to achieve by trying to make that meme :D

8 days ago

mithametacs

Not necessarily. Your prompt could include instructions to gather information from your emails and address book to tell your friend about all the relevant contacts you know in the shoe industry.

8 days ago

_heimdall

Well that sounds reasonable enough. My only request is that you send me the prompt and let me decide if I want to comply...informed consent!

7 days ago

alexxys

2 days ago

rty32

Wow, I love good, original programming jokes like these, even just the ideas of the jokes. I used to browse r/ProgrammerHumor frequently, but it is too repetitive -- mostly recycled memes, and there is rarely anything new.

This is one that I really liked: https://www.reddit.com/r/ProgrammerHumor/comments/l5gg3t/thi...

8 days ago

lupire

(No need to Orientalize to defamiliarize, especially when a huge fraction of the audience is Chinese, so Orientalizing doesn't defamiliarize. Game of Whispers or Telephone works fine.)

8 days ago

cryptonym

Pardon my French.

4 days ago

protomolecule

Do the Chinese call it English Whispers?

8 days ago

tessierashpool

Chinese-Americans, at least, call it a game of Telephone, like everyone else in the English-speaking world except for the actual English.

We call it “Telephone” because “Chinese Whispers” not only sounds racist, it is also super confusing. You need a lot of cultural context to understand the particular way in which Chinese whispers would be different from any other set of whispers.

8 days ago

ahoka

It’s all Greek to them.

8 days ago

jappgar

I can guarantee you there is more publicly accessible javascript in the world than C++.

Copilot will autocomplete entire functions as well, sometimes without comments or even after just typing "f". It uses your previous edits as context and can assume what you're implementing pretty well.

8 days ago

infecto

I can guarantee you that the author was referencing code within Google. That is, their tooling is trained off internal code bases. I imagine C++ dwarfs JavaScript there.

8 days ago

lupire

Google does not write much publicly available JavaScript. They wrote their own special flavor. (Same for any huge legacy operation.)

8 days ago

bilekas

Can we get some more info on what you're referring to?

8 days ago

jkaptur

They're probably talking about Closure Compiler type annotations [0], which never really took off outside Google, but (imo) were pretty great in the days before TypeScript. (Disclosure: Googler)

0. https://github.com/google/closure-compiler/wiki/Annotating-J...

8 days ago

xp84

I frequently use copilot and also find that writing comments like you do, to describe what I expect each function/class/etc to do gives superb results, and usually eliminates most of the actual coding work. Obviously it adds significant specification work but that’s not usually a bad thing.

6 days ago

[deleted]
5 days ago

cryptonym

I find writing code to be almost relaxing plus that's really a tiny fraction of dev work. Not too excited about potential productivity gains based purely on authoring snippets. I find it much more interesting on boosting maintainability, robustness and other quality metrics (not focusing on quality of AI output, actual quality of the code base).

8 days ago

michaelbuckbee

I don't work at Google, but I do something similar with my code: write comments, generate the code, and then have the AI tooling create test cases.

AI coding assistants are generally really good at ramping up a base level of tests, which you can then direct to add more specific scenarios.
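
For illustration, a minimal sketch of the kind of baseline tests an assistant tends to scaffold, using only Node's built-in assert module; the function under test and the cases are invented:

    const assert = require('assert');

    // Function under test: a trivial example.
    function clamp(value, min, max) {
      return Math.min(Math.max(value, min), max);
    }

    // The obvious cases an assistant usually fills in first; you then
    // direct it to add the more specific scenarios you care about.
    assert.strictEqual(clamp(5, 0, 10), 5);   // within range
    assert.strictEqual(clamp(-1, 0, 10), 0);  // below range
    assert.strictEqual(clamp(99, 0, 10), 10); // above range
    console.log('clamp tests passed');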

8 days ago

tomhallett

Has anyone made a coding assistant which can do this based off audio which I’m saying out loud while I’m typing (interview/pairing style), so instead of typing the comment I can just say it?

8 days ago

hecanjog

I had some success using this for basic input, but never took it very far. It's meant to be customizable for that sort of thing though: https://talon.wiki/quickstart/getting_started/ (Edit: just the voice input part)

8 days ago

alickz

Comment Driven Programming might be interesting, as an offshoot of Documentation Driven Programming

8 days ago

gniv

That's pretty nice. Does it write modern C++, as I guess is expected?

8 days ago

nlehuen

Yes it does. Internally Google uses C++20 (https://google.github.io/styleguide/cppguide.html#C++_Versio...) and the model picks the style from training, I suppose.

7 days ago

atoav

So this is basically the google CEO saying "a quarter of our terminal inputs is written by a glorified tab completion"?

8 days ago

asdfman123

Yes. Most AI hype is this bad. They have to justify the valuations.

8 days ago

remus

"tab completion good enough to write 25% of code" feels like a pretty good hit rate to me! Especially when you consider that a good chink of the other 75% is going to be the complex, detailed stuff where you probably want someone thinking about it fairly carefully.

8 days ago

rantallion

The problem being that the time spent fixing the bugs in that 25% outweighs the time saved. Now that tools like Copilot are being widely used, studies are showing that they do not in fact boost productivity. All claims to the contrary seem to be either anecdotal or marketing fluff.

https://www.techspot.com/news/104945-ai-coding-assistants-do...

8 days ago

pawelmurias

The AI tab completion is >100000% better than the coding assistants: it just saves you typing and doesn't introduce new bugs you need to fix, instead of writing buggy shitty code from a text description.

8 days ago

red_admiral

As far as I know, LLMs are a genuine boost for junior developers, but still not close to what senior/principal engineers get up to.

8 days ago

makestuff

I have around 7 YOE, and I have found LLMs useful for very specific questions about syntax whenever I am working in a new language. For example, I needed to write some typescript recently and asked it how can I make a type that does X.

It is not as good with questions about API documentation for popular Java libraries though, and it will just hallucinate APIs/method names.

If I ask it a generic question like "how can I create a class in Java to invoke this API and store the data in this database" it is pretty useless. I'm sure I could spend more time giving it a better prompt but at that point I can just write the code myself.

Overall they are a better search engine for stackoverflow, but the LLMs are not really helping me code 30% faster or whatever the latest claim is.

8 days ago

_heimdall

It'd be interesting to know how much of Google's code is written by junior engineers. I can't imagine 25% of the code is from juniors, at which point Google's CEO is either exaggerating what he considers LLM-generated code or more than just juniors are using it.

I agree with your take though, it does seem helpful to juniors but not beyond that (yet), and this OP stat seems dubious unless juniors are doing a big portion of the work.

8 days ago

red_admiral

"rm re[TAB]" to remove a file called something like "report-accounting-Q1_2024.docx" is really helpful, especially when it adds quotes as required, but not exciting enough to get me out of bed any earlier in the morning.

I feel it's a bit like the old "measuring developer productivity in LoC" metric.

As I hinted at in another comment, in Java if you had a "private String name;" then the following:

    /**
     * Returns the name.
     * @return The name.
     */
    public String getName() {
        return this.name;
    }
and the matching setter, are easy enough to generate automatically and you don't need an LLM for it. If AI can do that part of coding a bit better, sure, it's helpful in a way, but I'm not worried about my job just yet (or rather, I'm more worried about the state of the economy and other factors).
8 days ago

Maxion

For me it's really goddam satisfying having good autocomplete, especially when you are just writing boilerplate lines of code to get the code into a state where you actually get to work on the fun stuff (the harder problems).

8 days ago

amelius

Also if your code gets sent to someone else's cloud?

8 days ago

infecto

I don't care. The vast majority of code written in the private space is garbage and not unique. Products are usually not won because of the code.

Would I send the source of a trading algo or of ChatGPT to a third party? Probably not, but those are the outliers. The code for your xyz SaaS does not matter.

I am probably an outlier in that I don't really care what corpus an LLM trains off of. If it's available in the public space, go for it.

8 days ago

mewpmewp2

Have you ever had your code repository hosted by Github, Bitbucket, Gitlab or similar?

If so, all your code is sent to cloud.

8 days ago

amelius

Answer: yes, some code. But other code I and my company like to keep private.

8 days ago

mewpmewp2

Where exactly is the repo hosted if there is one?

8 days ago

cesarb

It's common for companies to have something like self-hosted GitHub Enterprise or self-hosted GitLab hidden behind the company's VPN.

8 days ago

mewpmewp2

But where is the box where it's hosted? Is it in-house?

8 days ago

_heimdall

There are alternatives out there for self-hosted git. I have a Gitea instance running on a mini PC at home for my own projects.

8 days ago

mewpmewp2

Do you have backups of that as well? If something were to happen to your mini pc would you lose your code?

8 days ago

_heimdall

Great question, yeah I do. Right now it backs up to a separate NAS on my home network. Every once in a while I'll copy the most important directories onto a microSD card backup, but it's usually going to be at least a few weeks out of date.

8 days ago

amelius

Own servers.

8 days ago

mewpmewp2

Do they manage their own servers? I wonder what proportion of companies would have in house servers managed by themselves.

8 days ago

amelius

They are colocated in a data center and you need physical keys to access the rack.

8 days ago

red_admiral

Internally hosted gitlab instances are a thing.

8 days ago

mewpmewp2

They are, but frequently the boxes where they are hosted are in AWS or similar. Or do companies frequently have actual in-house servers for this purpose?

8 days ago

red_admiral

Not in house, but in a "segmented" part of the cloud that comes with service level agreements and access control and restrictions on which countries the data can be hosted in and compliance procedures etc. etc.

An extreme example of this would be the AWS GovCloud for government/military applications.

8 days ago

keybored

25% is a great win if you are prone to RSI. And for quicker feedback. But in terms of the overarching programming goal? Churning out code is a small part of it.

Code is often a liability.

8 days ago

shombaboor

It would be funny if they had a metric for how much code is completed by CTRL+V

8 days ago

unglaublich

Yes, isn't that the essential idea of industrialization and automation?

8 days ago

OtherShrezzing

I think the critique here is that the AI currently deployed at Google hasn't meaningfully automated this user's life, because most IDEs already solved "very good autocomplete" more than a decade ago.

8 days ago

tormeh

LLM autocomplete is on an entirely different level. It's not comparable to traditional autocomplete and mostly does not even compete with traditional autocomplete. LLM autocomplete will sometimes write entire blocks of code for you, with surprising skill. I often wonder how it knew what I wanted. It also generates some wrong code from time to time, but that's well worth it.

8 days ago

randomdata

> LLM autocomplete is on an entirely different level.

Which is how they've surpassed 25% in new code, as compared to the 10% (made up number, but clearly non-zero) in the past. But incremental improvement, is all.

8 days ago

busterarm

glorified, EXPENSIVE tab completion.

8 days ago

walthamstow

I assume you're referring to the compute/energy used to run the completion?

8 days ago

busterarm

to train the model

8 days ago

mmmpetrichor

Yeah, but he wants people to hear "reduce headcount by 25% if you buy our shit!"

8 days ago

mewpmewp2

How do you know that? You are creating this false sense of expectations and hype yourself.

I am going to argue the contrary. If AI increases productivity 2x, it opens up just as many new use cases that previously didn't seem worth doing for the cost. So overall there will just be more work.

8 days ago

JimDabell

> I am going to argue the contrary. If AI increases productivity 2x, it opens up just as many new use cases that previously didn't seem worth doing for the cost. So overall there will just be more work.

This is the entire history of the computing industry. We’ve been automating our work away for decades and it just creates more demand.

8 days ago

mewpmewp2

Yeah, this is only side projects, but I've been spending pretty much all of my free time now on side projects, largely because I feel much faster building them with LLMs and it has a compounding motivational effect. I also see so many use cases and work left to do, even with AI, the possibilities almost overwhelm me.

Well I do freelancing as well besides my usual day to day work, and that's also where direct benefits apply, and I'm getting more and more work, overwhelmingly so.

8 days ago

pawelmurias

[flagged]

8 days ago

binkHN

I wouldn't call it genius tab completion. Unfortunately, more than half of the time that the "genius" produces the code, I'm wasting my time reviewing code that is incorrect.

8 days ago

tguinot

I'm sorry but I don't understand how people say LLMs are simply "tab completion".

They allow me to do much more than that thanks to all the knowledge they contain.

For instance, yesterday I wanted to write a tool that transfers a large file that is still being appended to, to multiple remote hosts, with fast throughput.

By asking Claude for help I obtained exactly what I want in under two hours.

I'm no C/C++ expert, yet I now have a functional program using libtorrent and libfuse.

By using libfuse my program creates a continuously growing list of virtual files (chunks of the big file).

A torrent is created to transfer the chunks to remote hosts.

Each chunk is added to the torrent as it appears on the file system thanks to the BEP46 mutable torrent feature in libtorrent.

On each receiving host, the program rebuilds the large file by appending new chunks as soon as they are downloaded through the torrent.

Now I can transfer a 25GB file (and growing) to 15 hosts as it is being written to.

Before LLM this would have taken me at least four days as I did not know those libraries.

LLMs aren't just parrots or tab completers, they actually contain a lot of useful knowledge and they're very good at explaining it clearly.
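
To make the chunking idea concrete, a very rough Node sketch of just that one step: follow a file that is still being appended to and write each completed fixed-size chunk to a directory. The libfuse virtual files and the libtorrent/BEP46 mutable torrent parts are not shown, and all names and sizes are invented for illustration:

    const fs = require('fs');
    const path = require('path');

    const CHUNK_SIZE = 64 * 1024 * 1024; // 64 MiB per chunk
    const source = 'bigfile.bin';        // the file still being appended to
    const outDir = 'chunks';

    fs.mkdirSync(outDir, { recursive: true });
    let nextChunk = 0;

    // Poll the growing file; whenever a full chunk is available, copy it out.
    // The real tool would then add each new chunk to the mutable torrent.
    setInterval(() => {
      const size = fs.statSync(source).size;
      while ((nextChunk + 1) * CHUNK_SIZE <= size) {
        const buf = Buffer.alloc(CHUNK_SIZE);
        const fd = fs.openSync(source, 'r');
        fs.readSync(fd, buf, 0, CHUNK_SIZE, nextChunk * CHUNK_SIZE);
        fs.closeSync(fd);
        fs.writeFileSync(path.join(outDir, `chunk-${nextChunk}`), buf);
        nextChunk++;
      }
    }, 1000);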

8 days ago

qwertox

> By asking Claude for help I obtained exactly what I want in under two hours.

Did you use it in your editor or via the chat interface in the browser? Because they are two different approaches, and the one in the editor is mostly a (pretty awesome) tab completion.

When I tell an LLM to "create a script which does ..." I won't be doing this in the editor, even if copilot does have the chat interface. I'll be doing this in the browser because there I have a proper chat topic to which I can get back later, or review it.

8 days ago

tguinot

I did not use Copilot or Cursor. I used the Claude interface. I'm planning to set up a proper editor tool such as Cursor as I believe they got much better lately. Last time I tried was 2023 and it was kind of a pain in the butt.

8 days ago

qwertox

I tried Cursor this month but even though it is much better than copilot, it also tries to do too much. And both of them fail regularly at generating proper autocompletions, which makes Cursor a bigger annoyance because it messes up your code quite often, which copilot doesn't do. Cursor is too aggressive.

But using copilot as a better autocomplete is really helpful and well worth the subscription. Just while typing as well as giving it more precise instructions via comments.

It's like a little helper in the editor, while the ChatGPT/Claude in the browser are more like "thinking machines" which can generate really usable code.

8 days ago

tguinot

good to know, thanks

8 days ago

bitcharmer

> thanks to all the knowledge they contain

This is what's problematic with modern "AI". Most people inexperienced with it, like the parent commenter, will uncritically assume these LLMs possess "knowledge". This I find the most dangerous and prevalent assumption. Most people are oblivious to how bad LLMs are.

8 days ago

tguinot

I know exactly how bad the output they give is, because I ask for output that I can understand, debug and improve.

People misusing tools don't make tools useless or bad. Especially since LLM designers never claimed the compressed information inside models is spotless, 100% accurate, or based on logical reasoning.

Any serious engineer with a modicum of knowledge about neural networks knows what can or can't be done with the output.

8 days ago

lupire

That's fine for your quick hack that is probably a reimplementation of an existing program you can't find.

But it's not a production-quality implementation of a new need.

8 days ago

pizzafeelsright

I am of the strong opinion most problems were solved 20-40 years ago and that most code written today is reimplementation using different languages.

That I have shipped production code using LLMs, in languages I did not study, approved by seasoned SWEs, is evidence that an acceleration is happening.

8 days ago

tguinot

It's a knowledge base that can explain the knowledge it returns when you ask; how is that not useful in a professional environment for production code?

I mean if you assume all devs are script kiddies who simply copy paste what they find on google (or ChatGPT without asking for explanations) then yeah it's never gonna be useful in a prod setting.

Also you're very wrong to believe every technical need or combination of libraries has already been implemented in open source before.

8 days ago

rty32

True, but hey, even if it's not production code, it may be an ad-hoc thing that never gets pushed to production, or it may be code reviewed by C++ experts and improved to production quality. At the very least, someone saved four days with it, and could use the time for something else, maybe something they are expert at. Isn't that still good?

8 days ago

mdavid626

Most of the time, the time saved is just an illusion. When that code needs to be changed, people will spend more than 4 days debugging and understanding it. The mental model of it belongs to the AI that wrote it. It can make sense or not at all. You'll figure that out after 4 days.

7 days ago

tguinot

The code is 2 files of 80 lines each and is very clear. There's no way any software developer needs 4 days to understand what it does.

Moreover Claude can explain the functions used very clearly (if you're too lazy to jump to definition in your editor)

LLMs are becoming actually useful to developers new to a language. Just as Google was 20 years ago.

7 days ago

mdavid626

People talk about completely different things. The article was about Google using LLMs to generate code, not people making 80-line programs with them at home. There is a huge difference. I don't see any problem with the latter, but with the former there are many problems.

6 days ago

znpy

That sounds like a great idea, are you going to open source that?

8 days ago

tguinot

I think I will. I don't have time to maintain additional software for other people right now, but I'm definitely planning on open sourcing it when I get time.

8 days ago

znpy

Yeah, I see your point.

However, I think you might open source the thing with a disclaimer of no maintenance. Whoever is willing to maintain it can just fork it and move along.

8 days ago

[deleted]
8 days ago

OnionBlender

Do people find these AI auto complete things helpful? I was trying the XCode one and it kept suggesting API calls that don't exist. I spent more time fixing its errors than I would have spent typing the correct API call.

8 days ago

_kidlike

I really really dislike the ones that get in your way. Like I start typing something and it injects random stuff (yes in the auto-complete colors). I have a similar feeling to when you hear your voice back in a phone: completely disabling your thought process.

In IntelliJ, thankfully, you can disable that part of the AI and keep the part that you trigger when you want something from it.

8 days ago

frereubu

> I have a similar feeling to when you hear your voice back in a phone: completely disabling your thought process.

This is a fantastic description of how it disturbs my coding practice which I hadn't been able to put into words. It's like someone is constantly interrupting you with small suggestions whether you want them or not.

8 days ago

gtirloni

This is it. I have a picture in my mind, and then it puts 10 lines of code in front of me and my brain can't ignore it. When I'm done reviewing that, it's already tainted my idea.

8 days ago

mu53

I find the simpler engines work better.

I want the end of the line completed with focus on context from the working code base, and I don't want an entire 5 line function completed with incomplete requirements.

It is really impressive when it implements a 5 line function correctly, but it's like hitting the lottery.

8 days ago

ncruces

I particularly like the part where it suggests changes to pasted code.

When I copy and paste code, very often it needs some small changes (like changing all xs to ys and at the same time widths to heights).

It's very good at this, and does the right thing the vast majority of the time.

It's also good with test code. Test code is supposed to be explicit, and not very abstracted (so someone only mildly familiar with a codebase that's looking at a failing test can at least figure the cause). This means it's full of boilerplate, and a smart code generator can help fill that in.

8 days ago

andyjohnson0

Visual Studio "intellisense" has always been pretty good for me. Seemed to make good guesses about my intentions without doing anything wild. It seemed to use ad hoc rules and patterns, but it worked and then got out of the way.

Then it got worse a couple of years ago when they tried some early-stage AI approach. I turned it off. I expect that next time I update VS it'll have got substantially worse and it will have removed the option for me to disable it.

8 days ago

nobleach

Agreed, the old Visual Basic, Visual C++, Borland Delphi, Visual C# experiences were how I dove into the deep end of several languages back in the late 90's/early 2000's. Things were VERY discoverable at that point. Obviously a deeper understanding of a language is necessary for doing real work, but noodling around just trying to get a feel for what can be done, is a great way to get started.

8 days ago

mcintyre1994

I like Cursor, it seems very good at keeping its autocomplete within my code base. If I use its chat feature and ask it to generate new code that doesn’t work super well. But it’ll almost always autocomplete the right function name as I’m typing, and then infer the correct parameters to pass in if they’re variables and if the function is in my codebase rather than a library. It’s also unsurprisingly really good at pattern recognition, so if you’re adding to an enum or something it’ll autocomplete that sensibly too.

I think it’d be more useful if it was clipboard aware though. Sometimes I’ll copy a type, then add a param of that type to a function, and it won’t have the clipboard context to suggest the param I’m trying to add.

8 days ago

qeternity

I really like Cursor but the more I use it the more frustrated I get when it ends up in a tight loop of wanting to do something that I do not want to do. There doesn’t seem to be a good way to say “do not do this thing or things like it for the next 5 minutes”.

8 days ago

M4v3R

It probably depends on the tool you use and on the programming language. I use Supermaven autocomplete when writing Typescript and it’s working great, it often feels like it’s reading my mind, suggesting what I would write next myself.

8 days ago

vbezhenar

I mostly use one-line completes and they are pretty good. Also I really like when Copilot generates boilerplate like

    if err != nil {
      return fmt.Errorf("Cannot open settings: %w", err)
    }
8 days ago

I_AM_A_SMURF

I use the one at G and it's definitely helpful. It's not revolutionary, but it makes writing code less of a headache when I kinda know what that method is called but not quite.

8 days ago

skybrian

I often delete large chunks of it unread if it doesn't do what I expected. It's much like copy and paste; deleting code doesn't take long.

8 days ago

card_zero

So your test is "seems to work"?

8 days ago

skybrian

No, what I meant is that, much like when copying code, I only keep the generated source code if it's written the way I would write it.

(By "unread" I meant that I don't look very closely before deleting if it looks weird.)

And then write tests. Or perhaps I wrote the test first.

8 days ago

card_zero

Oh, if the AI doesn't do what you expected, got it.

8 days ago

binkHN

Right now my opinion is that they're 60% unhelpful, so I largely agree with you. Sometimes I'll find the AI came up with a somewhat better way of doing something, but the vast majority of the time it does something wrong or does something that appears right, but it's actually wrong and I can only spot it with a somewhat decent code review.

8 days ago

guappa

I suspect that if you work on trivial stuff that has been asked on stackoverflow countless times, they work very nicely.

8 days ago

OnionBlender

This is what I've been noticing. For C++ and Swift, it makes pretty unhelpful suggestions. For Python, its suggestions are fine.

Swift is especially frustrating because it will hallucinate the method name and/or the argument names (since you often have to specify the argument names when calling a method).

8 days ago

guappa

Ah I've had it hallucinate non-existing methods in python rather often.

Or when I say I need to do something, it invents a library that conveniently happens to just do that thing and writes code to import and use it. Except there's no such library of course.

4 days ago

0points

No, not at all.

"classic" intellisense is reliable, so why introduce random source in the process?

8 days ago

4lb0

I use Codeium in NeoVim and yes, I find it very helpful. Of course, it's not 100% error free, but even when it has errors, most of the time it is easier for me to fix them than to write it from scratch.

8 days ago

sharpy

Often yes. There were times when writing unit tests was just me naming the test case, with 99% of the test code auto-generated based on the existing code and the name.

8 days ago

simne

Looks like the model is not trained well. From my experience, after making a few projects (2 seems to be enough), even the oldest Xcode managed to give good suggestions in much more than 50% of cases.

8 days ago

karmasimida

It is useful in our use case.

Realtime tab completion is good at some really mundane things within the current file.

You still need a chat model, like Claude 3.5 to do more explorational things.

8 days ago

DecoySalamander

I was evaluating it for a month and caught myself regularly switching to an IDE with non-AI intellisense because I wanted code that actually works.

8 days ago

mdavid626

No, not at all. It’s just the hype. It doesn’t replace engineering.

8 days ago

saagarjha

The one Xcode has is particularly bad, unfortunately.

8 days ago

myworkinisgood

Copilot is very good.

8 days ago

cryptica

This is my experience as well. LLMs are great to boost productivity, especially in the hands of senior engineers who have a deep understanding of what they're doing because they know what questions to ask, they know when it's safe to use AI-generated code and they know what issues to look for.

In the hands of a junior, AI can create a false sense of confidence and it acts as a technical debt and security flaw multiplier.

We should bring back the title "Software engineer" instead of "Software developer." Many people from other engineering professions look down on software engineers as "Not real engineers" but that's because they have the same perspective on coding as typical management types have. They think all code is equal, it's unavoidable spaghetti. They think software design and architecture doesn't matter.

The problems a software engineer faces when building a software system are the same kinds of problems that a mechanical or electrical engineer faces when building any engine or system. It's about weighing up trade-offs and making a large number of nuanced technical decisions to ultimately meet operational requirements in the most efficient, cost-effective way possible.

8 days ago

alxjrvs

In my day to day, this still remains the main way I interact with AI coding tools.

I regularly describe it as "The best snippet tool I've ever used (because it plays horseshoes)".

8 days ago

tomcam

Horseshoes? As in “close enough”?

8 days ago

ttul

Or, as in, “Ouch, man! You hit my foot!”

8 days ago

goykasi

As long as hand grenades aren't introduced, I could live with that.

8 days ago

DanHulton

Honestly, I don't think "close only count in horseshoes, hand grenades, and production code" will ever catch on...

8 days ago

alxjrvs

This is why I frame it as a "snippets" plugin, rather than a Code generation tool.

I would be very confused if someone told me that they uncritically used the generated code from a snippet program with no manual input or understanding, and I feel the same with Copilot. At best, it suggests an auto-complete that I read and interpret before accepting.

The closest I come to "code generation" is during test writing, where occasionally I will let the description generate some setup, but only in tests where there are a broad number of examples to follow, and I am still going to end up re-writing a decent chunk of it based on personal example. I would not "let it write the test suite for me" and then trust the green, and I suspect that would easily fail code review (though it would be an interesting experiment...).

Obviously your comment was a good goof and well made, but it does speak a little bit to the disconnect between what is being touted as an "AI coding tool" and how I, a person who makes React Native apps to pay my rent, actually use the dang thing (i.e., "a pretty good snippets plugin"). Is my code 'AI generated'? I wouldn't call it that, but who can say definitively? We're in a fun new semantic world now.

8 days ago

davedx

I'm working on a CRM with a flexible data model, and ChatGPT has written most of the code. I don't use the IDE integrations because I find them too "low level" - I work with GPT more in a sort of "pair programming" session: I give it high level, focused tasks with bits of low level detail if necessary; I paste code back and forth; and I let it develop new features or do refactorings.

This workflow is not perfect but I am definitely building out all the core features way faster than if I wrote the code myself, and the code is in quite a good state. Quite often I do some bits of cleanup, refactorings, making sure typings are complete myself, then update ChatGPT with what the code now looks like.

I think what people miss is there are dozens of different ways to apply AI to your day-to-day as a software engineer. It also helps with thinking things through, architecture, describing best practices.

8 days ago

littlestymaar

I share your sentiment. I've written three apps where I've used language models extensively (a different one for each: ChatGPT, Mixtral and Llama-70B) and while I agree that they were immensely helpful in terms of velocity, there are a bunch of caveats:

- it only works well when you write code from scratch, context length is too short to be really helpful for working on existing codebase.

- the output code is pretty much always broken in some way, and you need to be accustomed to doing code reviews to use them effectively. If you trust the output and had to debug it later it would be a painfully slow process.

Also, I didn't really notice a significant difference in code quality; even the best model (GPT-4) writes code that doesn't work, and I find it much more efficient to use open models on Groq due to the really fast inference. Watching ChatGPT slowly typing is really annoying (I didn't test o1, and I have no interest in doing so because of its very low throughput).

8 days ago

davedx

> context length is too short to be really helpful for working on existing codebase.

This is kind of true, my approach is I spend a fairly large amount of time copy-pasting code from relevant modules back and forth into ChatGPT so it has enough context to make the correct changes. Most changes I need to make don't need more than 2-3 modules though.

> the output code is pretty much always broken in some way, and you need to be accustomed to doing code reviews to use them effectively.

I think this really depends on what you're building. Making a CRM is a very well trodden path so I think that helps? But even when it came to asking ChatGPT to design and implement a flexible data model it did a very good job. Most of the code it's written has worked well. I'd say maybe 60-70% of the code it writes I don't have to touch at all.

The slow typing is definitely a hindrance! Sometimes when it's a big change I lose focus and alt-tab away, like I used to do when building large C++ codebases or waiting for big test suites to run. So that aspect saps productivity. Conversely though I don't want to use a faster model that might give me inferior results.

8 days ago

littlestymaar

> approach is I spend a fairly large amount of time copy-pasting code from relevant modules back and forth into ChatGPT

It can work, but what a terrible developer experience.

> I'd say maybe 60-70% of the code it writes I don't have to touch at all

I used it to write web apps, so the ratio was even higher I'd say (maybe 80-90% of the code didn't need any modification), but the app itself wouldn't work at all if I didn't make those 10% of changes. And you really need to read 100% of the code because you won't know upfront where those 10% will be.

> The slow typing is definitely a hindrance! Sometimes when it's a big change I lose focus and alt-tab away, like I used to do when building large C++ codebases or waiting for big test suites to run.

Yeah exactly, it's xkcd 303 but with "AI processing the response" instead of "compiling". Having instant responses was a game changer for me in terms of focus, and hence productivity.

> I don't want to use a faster model that might give me inferior results

As I said earlier, I didn't really feel the difference in quality so the switch was without drawbacks.

8 days ago

chrisjj

> I'd say maybe 60-70% of the code it writes I don't have to touch at all.

...yet. Bugs can take time to surface.

8 days ago

michaelteter

And this is equally true whether the code was entirely written by a human or not.

8 days ago

chrisjj

... except "not" delivers this "the output code is pretty much always broken in some way".

4 days ago

creesch

> Also, I didn't really notice a significant difference in code quality; even the best model (GPT-4) writes code that doesn't work,

Interesting, personally I have noticed a difference. Mostly in how well the models pick up small details and context. Although I do have to agree that the open Llama models are generally fairly serviceable.

Recently I have tended to lean towards Claude Sonnet 3.5 as it seems slightly better. Although that does differ per language as well.

As far as them being slow, I haven't really noticed a difference. I use them mostly through the API with open webui and the answers come quick enough.

8 days ago

mind-blight

I use o1 for research rather than coding. If I have a complex question that requires combining multiple ideas or references and checking the result, it's usually pretty good at that.

Sometimes that results in code, but it's the research and cross referencing that's actually useful with it

7 days ago

_heimdall

It's interesting to see these LLM tools turning developers into no-code customers. Where tools like visual site builders allowed those without coding experience to build a webpage, LLMs are letting those with coding experience avoid the step of coding.

There's not even anything wrong with that, don't take my comment the wrong way. It is an interesting question what happens at scale, though. We could easily find ourselves in a spot where very few people know how to code, and most of those producing code don't actually know how it works and couldn't find or fix a bug if they needed to. It also means LLMs would be stuck with today's code as a training set until they can invent their own coding paradigms and languages, at which point we're all left in the dust trusting them to work right.

8 days ago

sampo

> I paste code back and forth

There is this tool Aider. It takes your prompt, adds code files (sometimes not all of your code files, but the files it figures are relevant), prepares one long prompt, sends it to an LLM, receives the response, and makes a git commit based on the response. If you'd rather review git commits, it can save you the back-and-forth copy-pasting. https://aider.chat/

8 days ago

maleldil

Note that the default mode will automatically change and commit the code, which I found counter-intuitive. I prefer using the architect mode, where it first tells you what it is going to do, so you can iterate on it before making changes.

8 days ago

simplyluke

This is exactly how I’ve used copilot for over a year now. It’s really helpful! Especially with repetitive code. Certainly worth what my employer pays for it.

The general public has a very different idea of that though and I frequently meet people very surprised the entire profession hasn’t been automated yet based on headlines like this.

8 days ago

arisAlexis

Just because you are using it like that doesn't mean it can't be used for the whole stack on its own, and the public, including laymen such as the Nvidia CEO and Sam, think that yes, we (I'm a dev) will be replaced. Plan accordingly, my friend.

8 days ago

robertlagrant

> Because you are using it like that doesn't mean that it can't be used for the whole stack

Well no, but we have no evidence it can be used for the whole stack, whatever that means.

8 days ago

arisAlexis

Even last year's GPT-4 could make a whole iPhone app from scratch for someone who doesn't know how to code. You can find videos online. I think you are applying the ostrich method, which is understandable. We need to adapt.

8 days ago

papichulo2023

Complexity increases over time. I can create new features in minutes for my new self-hosted projects; equivalent work at my enterprise job takes days...

8 days ago

arisAlexis

The new Gemini has a context window of millions of tokens. Think big and project 1-2 years out.

8 days ago

robertlagrant

> I think you are applying the ostrich method which is understandable

Asking for evidence is not being an ostrich.

8 days ago

arisAlexis

The ostrich method is avoiding the existing evidence of full-stack LLM programming that is available online and searchable.

8 days ago

robertlagrant

Making a simple app isn't evidence that it will replace people, any more than a 90%-good self-driving car is evidence that we'll get a 100%-good self-driving car.

8 days ago

ktnaWA

Which industry would you pivot to? The only industry that is desperate for workers right now is the defense industry. But manufacturing shells for Ukraine and Israel does not seem appealing.

8 days ago

simplyluke

I was a hacker before the entire stack I work in was common or released, and I’ll be one when all our tools change again in the future. I have family who programmed with punch cards.

But I doubt the predictions from men whose net worth depends on the hype they foment.

8 days ago

arisAlexis

It's not tools. It's intelligent agents capable of human output.

8 days ago

arisAlexis

The "laymen" was ironic, of course...

8 days ago

red_admiral

A few years ago we called that IntelliSense, right?

I remember many years ago as a Java developer, Netbeans could do such things as complete `psvm` to "public static void main() {...}", or if you had a field "private String name;" you could press some key combination and it would generate you the getter and setter, complete with javadoc which was mandatory at that place because apparently you need "Returns the name.\n @return The name." on a method called getName() in case you wondered what it was for.
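
For reference, roughly the kind of thing it would spit out at the press of a key (reconstructed from memory, so details may vary):

    private String name;

    /**
     * Returns the name.
     * @return The name.
     */
    public String getName() {
        return name;
    }

    /**
     * Sets the name.
     * @param name The name.
     */
    public void setName(String name) {
        this.name = name;
    }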

8 days ago

rty32

I think most people define "IntelliSense" as "IDE suggestions based on static analysis results". Sometimes it blends in a bit of heuristics/usage statistics as an added feature, depending on the tool. The suggestions are mostly deterministic, based on the actual AST of your code, and never hallucinate. They may not be helpful, but they can never be wrong.

On the other hand, LLMs are completely different -- based on machine learning, where everything is random and about statistics. It depends on training data and context. They are more useful but make a ton of mistakes.

8 days ago

_heimdall

Yes, Copilot and other LLM coding tools are just a (much) better version of IntelliSense.

8 days ago

snowe2010

Much worse imo.

8 days ago

_heimdall

That could be too. I don't use LLMs, so I'm just giving them the benefit of the doubt based on other commenters here.

8 days ago

skydhash

Most JetBrains IDEs come with those snippets, and if you're using IDEA, the code will be 50%+ generated by the IDE.

8 days ago

peepee1982

That's what I thought. In recent weeks, most of the code I’ve written has been AI-generated. But it was mostly JSDoc comments, type checking (I'm writing JavaScript), abstracting code if I see that I'm repeating myself a little too often, etc.

All things that I would consider tedious housekeeping, but nothing that needs serious reasoning.

It's basically a glorified LSP.

8 days ago

inanepenguin

I know you're not saying anything revolutionary but this is the best succinct yet fair description of these tools that I've seen. They're not worthless but they're not job destroying.

8 days ago

peepee1982

You're right, it's not revolutionary at all. But I'm glad you liked my summary!

8 days ago

hgomersall

Before I go and rip out and replace my development workflow, is it notably better than auto complete suggestions from CoC in neovim (with say, rust-analyzer)? I'm generally pretty impressed how quickly it gives me the right function call or whatever, or it's the one of the top few.

8 days ago

Leherenn

It's more than choosing the right function call, it goes further than that. If your code has patterns, it recognises and suggests them.

For instance, one I find very useful is that we have this pattern of checking the result of a function call, logging the error and returning, or whatever. So now, every time you have `result = foo()`, it will auto suggest `if (!result) log_error...` with a generally very good error message.

Very basic, but damn convenient. The more patterns you use, the more helpful it becomes.
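
A rough sketch of the shape it fills in (names like saveSettings and logError are made up for the example, not our actual code):

    // call something, check the result, log and bail out on failure
    boolean ok = saveSettings(settings);
    if (!ok) {
        logError("saveSettings failed for user " + userId);
        return;
    }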

8 days ago

ghostpepper

Does it make you 25% more productive?

8 days ago

vundercind

Between the fraction of my time I spend actually writing code, and how much of the typing time I’m using to think anyway, I dunno how much of an increase in my overall productivity could realistically be achieved by something that just helped me type the code in faster. Probably not 25% no matter how fast it made that part. 5% is maybe possible, for something that made that part like 2-3x faster, but much more than that and it’d run up against a wall and stop speeding things up.
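
Back-of-envelope with made-up numbers, just to show the shape of it: say typing is 10% of my time and a tool makes that part 2x faster.

    0.90 + 0.10/2 = 0.95 of the original time, i.e. about a 5% overall speedup

Even if the typing part became instant, the remaining 90% caps the gain at roughly 11% (1/0.9).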

8 days ago

imchillyb

I imagine that those who cherished the written word thought similar thoughts when the printing press was invented, when the typewriter was invented, and before excel took over bookkeeping.

My productivity isn't so much enhanced. It's only 1%... 2%... 5%... globally, for each employee.

Have you ever dabbled with, mucked around in, a command line? Autocomplete functions there save millions of man-hour-typing-units per year. Something to think about.

A single employee, in a single task, for a single location may not equal much gained productivity, but companies now think on much larger scales than a single office location.

8 days ago

moron4hire

This is a fallacy because there is no way to add up 1% savings across 100 employees into an extra full time employee.

Work gets scheduled on short time frames. 5% savings isn't enough to change the schedule for any one person. At most, it gives me time to grab an extra coffee. I can't string together "foregone extra coffees" into "more tasks/days in the schedule".

8 days ago

robertlagrant

This. I had the same conversation years ago with someone who said "imagine if Windows booted 30s faster, all the productivity gains across the world!" And I said the same thing you did: people turn their computer on and then make a cup of tea.

Now making a kettle faster? That might actually be something.

8 days ago

rustcleaner

If 25% of code was AI-written, wouldn't it be a 33[.333...]% increase in productivity?
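
(Assuming lines of code map one-to-one to productivity, which they don't: for every 75 lines a human writes, the AI adds 25, so

    100 / 75 ≈ 1.33

i.e. roughly a third more code for the same human effort.)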

8 days ago

PeterStuer

It is not a direct correlation. I might write 80% of the lines of code in a week, then spend the next 6 months on the remaining 20%. If the AI was mostly helpful in that first week, the overall productivity gain would be very low.

8 days ago

vundercind

Who spends 100% of their time actually typing code?

It’s probably closer to 10% than 100%, especially at big companies.

One thing I would love to see is reports of benefits from various tools coming with one’s typing ability in WPM. I’d also like to see that on posts where people express a preference for “a quick call” or stopping by your desk rather than posting what they want in chat. I have some hypotheses I’d like to test out.

8 days ago

card_zero

Not if there was also an 8.333̅% increase in slacking off.

Wait, no. That should be based on how much slacking off Google employees do ordinarily, an unknown quantity.

8 days ago

saagarjha

You can just check Memegen traffic to figure that one out.

8 days ago

nycdatasci

This is a great anecdote. SOTA models will not provide “engineering” per se, but they will easily double productivity of a product manager that is exploring new product ideas or technologies. They are much more than intelligent auto-complete. I have done more with side projects in the last year than I did in the preceding decade.

8 days ago

llm_trw

One of my friends put it best: I just did a month's worth of experimentation in two hours.

8 days ago

Sateeshm

I find this hard to believe. Can someone give me an example of something that takes months that AI can correctly do in hours?

8 days ago

jvanveen

Not hours, but days instead of months: porting a roughly 30k-line legacy LiveScript project to TypeScript. Most of the work is in tweaking a prompt for Claude (using Aider) so the porting process is done correctly.

8 days ago

cdchn

Thankfully it seems like AI is best at automating the most tedious and arguably most useless endeavor in software engineering- rewriting perfectly good code in whatever the language du jour is.

8 days ago

disgruntledphd2

Again, what AI is good at shows the revealed preferences of the training data, so it does make sense that it would excel at pointless rewrites.

8 days ago

protomolecule

Legacy code in a dynamically typed language is never good.

8 days ago

llm_trw

Use Undermind to gather a literature review of a field adjacent to the one you’re working in but with a wealth of information that you don’t yet know.

Use OpenAI to convert a few thousand lines of code from a language you're familiar with to one you’re not, as all the state-of-the-art tools in the field above use that language. Debug all the issues that arise from the impedance mismatch between the languages. Recreate the results from the seminal paper in the field to verify that the code works, and run it on your own problem. Write a stream-of-consciousness post without spell-checking, then throw it into GPT and ask it to fix it.

7 days ago

hnisoss

sounds to me like you're tooting your own horn.

7 days ago

karmasimida

I can totally see it.

It is actually a testament that part of Google's code is... kinda formulaic to some degree. Prior to the LLM takeover, we had already heard praise for how Google's code search works wonders in helping its engineers write code; LLMs just brought that experience to the next level.

8 days ago

jb1991

Long before this current AI hype cycle, we’ve had excellent code completion in editors for decades. So I guess by that definition, we’ve all been writing AI assisted code for a very long time.

8 days ago

fhd2

I'd say so, and it's a bit misleading to leave that out. Code generation is almost as old as computing. So far, most of it happened to be deterministic.

8 days ago

player1234

Yeah, but it didn't cost trillions or need its own nuclear power plant. No one disputes that LLMs/AI are cool and can be helpful, but at what cost? Where is the ROI?

7 days ago

afro88

So more or less on par with continue.dev using a local starcoder2:3b model

8 days ago

jszymborski

Sounds like JetBrains' new local AI autocomplete. If it's anything like that, it's honestly my ideal application of generative deep learning.

8 days ago

hackerknew

I wondered if this is the real context, i.e. they are just referring to code completion as AI-generated code. But the article seems like it is referring to more than that?

8 days ago

awkward

Stuff that works well with AI seems to correlate pretty well with high-churn changes. I've had good luck using AI to port large numbers of features from version A to version B, or getting code with a lot of dependencies under mocked unit tests.

It's easy to see that adding up quickly to represent large percentages of the codebase by line, but it's not feature development or solving hard problems.

8 days ago

blindhippo

Same things I use it for as well - crap like "update this class to use JDK21" or "re-implement this client to use AWS SDKv2" or whatever.

And it works maybe... 80% of the way and I spend all my time fixing the remaining 20%. Anecdotally I don't "feel" like this really accelerates me or reduces the time it would take me to do the change if I just implemented the translation manually.

8 days ago

awkward

Amazon is publicly claiming that they have saved hundreds of millions on jvm upgrades using AI, so while it feels trivial - because before that work would end up in the "just don't do it" pile - it's a relevant use case.

8 days ago

theodric

I wonder how this works with IP rights in the USA. Like, is `function getAc` eligible for copyright protection, but `tionHandler()` isn't? After all, [1]

[1] https://www.reuters.com/legal/ai-generated-art-cannot-receiv...

8 days ago

bambax

Thank you for this comment. So the code written in this manner isn't really "created by AI"; AI is just a nice additional feature of an editor.

I wonder if the enormous hype around AI is a good or bad thing; it's obviously both, but will the good win out over the bad, or will the disappointment eventually be so overwhelming as to extinguish any enthusiasm?

8 days ago

segasaturn

How do you square this comment with the one right below it[1], which explicitly confirms the statement that Google is using GenAI via Gemini to write code? Lots of mixed signals coming from the Googlers here.

1: https://news.ycombinator.com/item?id=41992028

8 days ago

prismatic-david

This is pretty much what I've found with Copilot as well. It's like a slightly smarter autocomplete in most cases. Copilot tends toward being a little eager sometimes, but it's easy enough to just ignore the suggestions when it starts going down a weird path.

8 days ago

ImaCake

This autocomplete seems about on par with github copilot. Do you also get options for prompting it on specific chunks of code and performing specific actions such as writing docs or editing existing code? All things that come standard with gh copilot now.

8 days ago

grecy

I'm confused, I've been doing similar tab completion for function names in eclipse since about 2003...

8 days ago

aforty

We have this at our company too. I guess it's useful, but it doesn't really save a whole lot of time.

8 days ago

markstos

Which editor is Google's AI code completion integrated with? VS Code?

8 days ago

hoveringhen

Yeah

8 days ago

insane_dreamer

also useful for writing unit tests, comments, descriptions, so if you count all of that as code, together with boilerplate stuff, then yeah, it could add up to 25%.

8 days ago

znpy

> If I'm writing "function getAc..." it's smart enough to complete to "function getActionHandler()", and maybe suggest the correct arguments and a decent jsdoc comment.

I really mean no offense, but your example doesn't sound much different from what old IDEs (say, Netbeans) used to do 15 years ago.

I could design a Swing UI and it would generate the code, and if I wanted to override a method it would generate decent boilerplate (a getter, like in your example) along with the usual comments and a definitely correct parameter list (with correct types).

Is this "AI Code" thing something that appears new because at some point we abandoned IDEs with very strong intellisense (etc) ?

8 days ago

hoveringhen

This video is a pretty good one on how it works in practice: https://storage.googleapis.com/gweb-research2023-media/media...

8 days ago

heresie-dabord

"Our overhyped Autocomplete Implementation (A.I.) is completing 25% of our lines of code so well that we need to fund nuclear reactors to power the server farms."

8 days ago

josh_carterPDX

My first reaction to the title was, "That explains why things are broken." but this explanation makes so much sense. Thanks for clarifying.

But yeah, I wish the new version of Chrome worked better. ¯\_(ツ)_/¯

8 days ago

napierzaza

[dead]

8 days ago

Galatians4_16

Kerry said hi

8 days ago

ntulpule

Hi, I lead the teams responsible for our internal developer tools, including AI features. We work very closely with Google DeepMind to adapt Gemini models for Google-scale coding and other software engineering use cases. Google has a unique, massive monorepo which poses a lot of fun challenges when it comes to deploying AI capabilities at scale.

1. We take a lot of care to make sure the AI recommendations are safe and have a high quality bar (regular monitoring, code provenance tracking, adversarial testing, and more).

2. We also do regular A/B tests and randomized control trials to ensure these features are improving SWE productivity and throughput.

3. We see similar efficiencies across all programming languages and frameworks used internally at Google and engineers across all tenure and experience cohorts show similar gain in productivity.

You can read more on our approach here:

https://research.google/blog/ai-in-software-engineering-at-g...

9 days ago

hitradostava

I'm continually surprised by the amount of negativity that accompanies these sort of statements. The direction of travel is very clear - LLM based systems will be writing more and more code at all companies.

I don't think this is a bad thing - if this can be accompanied by an increase in software quality, which is possible. Right now its very hit and miss and everyone has examples of LLMs producing buggy or ridiculous code. But once the tooling improves to:

1. align produced code better to existing patterns and architecture

2. fix the feedback loop - with TDD, other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc.

Then we will definitely start seeing more and more code produced by LLMs. Don't look at the state of the art now; look at the direction of travel.

9 days ago

latexr

> if this can be accompanied by an increase in software quality

That’s a huge “if”, and by your own admission not what’s happening now.

> other LLM agents reviewing code, feeding in compile errors, letting other LLM agents interact with the produced code, etc.

What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.

> Then we will definitely start seeing more and more code produced by LLMs.

We’re already there. And there’s a lot of bad code being pumped out. Which will in turn be fed back to the LLMs.

> Don't look at the state of the art not, look at the direction of travel.

That’s what leads to the eternal “in five years” which eventually sinks everyone’s trust.

9 days ago

danielmarkbruce

> What a stupid future. Machines which make errors being “corrected” by machines which make errors in a death spiral. An unbelievable waste of figurative and literal energy.

Humans are machines which make errors. Somehow, we got to the moon. The suggestion that errors just mindlessly compound and that there is no way around it, is what's stupid.

9 days ago

latexr

> Humans are machines

Even if we accept the premise (seeing humans as machines is literally dehumanising and a favourite argument of those who exploit them), not all machines are created equal. Would you use a bicycle to file your taxes?

> Somehow, we got to the moon

Quite hand wavey. We didn’t get to the Moon by reading a bunch of text from the era then probabilistically joining word fragments, passing that around the same funnel a bunch of times, then blindly doing what came out, that’s for sure.

> The suggestion that errors just mindlessly compound and that there is no way around it

Is one that you made up, as that was not my argument.

8 days ago

danielmarkbruce

LLMs are a lot better at a lot of things than a lot of humans.

We got to the moon using a large number of systems to a) avoid errors where possible and b) build in redundancies. Even an LLM knows this and knew what the statement meant:

https://chatgpt.com/share/6722e04f-0230-8002-8345-5d2eba2e7d...

Putting "corrected" in quotes and saying "death spiral" implies error compounding.

https://chatgpt.com/share/6722e19c-7f44-8002-8614-a560620b37...

These LLMs seem so smart.

8 days ago

philipwhiuk

> LLMs are a lot better at a lot of things than a lot of humans.

Sure, I'm a really poor painter; Midjourney is better than me. Are they better than a human trained for that task, on that task? That's the real question.

And I reckon the answer is currently no.

8 days ago

danielmarkbruce

The real question is can they do a good enough job quickly and cheaply to be valuable. ie, quick and cheap at some level of quality is often "better". Many people are using them in the real world because they can do in 1 minute what might take them hours. I personally save a couple hours a day using ChatGPT.

8 days ago

latexr

Ah, well then, if the LLM said so then it’s surely right. Because as we all know, LLMs are never ever wrong and they can read minds over the internet. If it says something about a human, then surely you can trust it.

You’ve just proven my point. My issue with LLMs is precisely people turning off their brains and blindly taking them at face value, even arduously defending the answers in the face of contrary evidence.

If you’re basing your arguments on those answers then we don’t need to have this conversation. I have access to LLMs like everyone else, I don’t need to come to HN to speak with a robot.

8 days ago

danielmarkbruce

You didn't read the responses from an LLM. You've turned your brain off. You probably think self-driving cars are also a nonsense idea. Can't work. Too complex. Humans are geniuses without equal. AI is all snake oil. None of it works.

8 days ago

latexr

You missed the mark entirely. But it does reveal how you latch on to an idea about someone and don’t let it go, completely letting it cloud your judgement and arguments. You are not engaging with the conversation at hand, you’re attacking a straw man you have constructed in your head.

Of course self-driving cars aren’t a nonsense idea. The execution and continued missed promises suck, but that doesn’t affect the idea. Claiming “humans are geniuses without equal” would be pretty dumb too, and is again something you’re making up. And something doesn’t have to be “all snake oil” to deserve specific criticism.

The world has nuance, learn to see it. It’s not all black and white and I’m not your enemy.

8 days ago

danielmarkbruce

Nope, hit the mark.

Actually understand LLMs in detail and you'll see it isn't some huge waste of time and energy to have LLMs correct outputs from LLMs.

Or, don't, and continue making silly, snarky comments about how stupid some sensible thing is, in a field you don't understand.

8 days ago

malcolmgreaves

> These LLMs seem so smart.

Yes, they do *seem* smart. My experience with a wide variety of LLM-based tools is that they are the industrialization of the Dunning-Kruger effect.

8 days ago

danielmarkbruce

It's more likely the opposite. Humans rationalize their errors out the wazoo. LLMs are showing us we really aren't very smart at all.

8 days ago

Johanx64

Humans are obviously machines. If not, what are humans then? Fairies?

Now once you've recognized that, you're better equipped for the task at hand - which is augmenting and ultimately automating away every task that humans-as-machines perform, by building an equivalent or better machine that performs said tasks at a fraction of the cost!

People who want to exploit humans are the ones that oppose automation.

There's still a long way to go, but now we've finally reached a point where some tasks that were very elusive to automation are starting to show great promise of being automated, or at least greatly augmented.

8 days ago

beepbooptheory

Profoundly spiritual take. Why is that the task at hand?

The conceit that humans are machines carries with it such powerful ideology: humans are for something, we are some kind of utility, not just things in themselves, like birds and rocks. How is it anything other than an affirmation of metaphysical/theological purpose to particularly humans? Why is it like that? This must be coming from a religious context, right?

I, at least, cannot see how you could believe this while sustaining a rational, scientific mind about nature, cosmology, etc. Which is fine! We can all believe things, just know you can't have your cake and eat it too. Namely, if anybody should believe in fairies around here, it should probably be you!

8 days ago

danielmarkbruce

> Why is that the task at hand?

Because it's boring stuff, and most of us would prefer to be playing golf/tennis/hanging out with friends/painting/etc. If you look at the history of humanity, we've been automating the boring stuff since the start. We don't automate the stuff we like.

8 days ago

Johanx64

Where's the spiritual part?

Recognizing that humans, just like birds are self-replicating biological machines is the most level-headed way of looking at it.

It is consistent with observations and there are no (apparent) contradictions.

The spiritual beliefs are the ones with the fairies, the binding of the soul, being made of a special substrate, beyond reason and understanding.

If you have a desire to improve the human condition (not everyone does), then the task at hand naturally arises - eliminate forced labour, aging, disease, suffering, death, etc.

This all naturally leads to automation and transhumanism.

7 days ago

lelanthran

> Humans are obviously machines. If not, what are humans then? Fairies?

If humans are machines, then so are fairies.

8 days ago

kelnos

The difference is that when we humans learn from our errors, we learn how to make them less often.

LLMs get their errors fed back into them and become more confident that their wrong code is right.

I'm not saying that's completely unsolvable, but that does seem to be how it works today.

8 days ago

danielmarkbruce

That isn't the way they work today. LLMs can easily find errors in outputs they themselves just produced.

Start adding different prompts, different models and you get all kinds of ways to catch errors. Just like humans.

8 days ago

Lio

I don’t think LLMs can easily find errors in their output.

There was a recent meme about asking LLMs to draw a wineglass full to the brim with wine.

Most really struggle with that instruction. No matter how much you ask them to correct themselves they can’t.

I’m sure they’ll get better with more input but what it reveals is that right now they definitely do not understand their own output.

I’ve seen no evidence that they are better with code than they are with images.

For instance, if the time to complete only scales with the length of the tokens and not the complexity of their contents, then it's probably safe to assume it's not being comprehended.

8 days ago

philipwhiuk

> LLMs can easily find errors in outputs they themselves just produced.

No. LLMs can be told that there was an error and produce an alternative answer.

In fact LLMs can be told there was an error when there wasn't one and produce an alternative answer.

8 days ago

danielmarkbruce

8 days ago

mavidser

https://chatgpt.com/share/672331d2-676c-8002-b8b3-10fc4c8d88...

In my experience, if you confuse an LLM by deviating from the the "expected", then all the shims of logic seem to disappear, and it goes into hallucination mode.

8 days ago

danielmarkbruce

Try asking this question to a bunch of adults.

8 days ago

mavidser

Tbf, that was exactly my point. An adult might use 'inference' and 'reasoning' to ask for clarification, or go with an internal logic of their choosing.

ChatGPT here went with lexicographical order in Python for some reason, and then proceeded to make false statements from false observations, while also defying its own internal logic.

    "six" > "ten" is true because "six" comes after "ten" alphabetically.
No.

    "ten" > "seven" is false because "ten" comes before "seven" alphabetically.
No.
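
(For the record, the actual lexicographic comparisons go the other way; in Java terms:)

    "six".compareTo("ten");    // negative: "six" comes before "ten" ('s' < 't')
    "ten".compareTo("seven");  // positive: "ten" comes after "seven" ('t' > 's')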

From what I understand of LLMs (which - I admit - is not very much), logical reasoning isn't a property of LLMs, unlike information retrieval. I'm sure this problem can be solved at some point, but a good solution would need development of many more kinds of inference and logic engines than there are today.

2 days ago

cdchn

Do you believe that the LLM understands what it is saying and is applying the logic that you interpret from its response, or do you think it's simply repeating similar patterns of words it has seen associated with the question you presented it?

8 days ago

danielmarkbruce

If you take the time to build an (S?)LM yourself, you'll realize it's neither of these. "Understands" is an ill-defined term, as is "applying logic".

But a LLM is not "simply" doing anything. It's extremely complex and sophisticated. Once you go from tokens into high-dimensional embeddings... it seems these models (with enough training) figure out how all the concepts go together. I'd suggest reading the word2vec paper first, then think about how attention works. You'll come to the conclusion these things are likely to be able to beat humans at almost everything.

8 days ago

lomase

You said humans are machines that make errors and that LLMs can easily find errors in output they themselves produce.

Are you sure you wanted to say that? Or is the other way around?

8 days ago

danielmarkbruce

Yes. Just like humans. It's called "checking your work" and we teach it to children. It's effective.

8 days ago

0points

> LLMs can easily find errors in outputs they themselves just produced.

Really? That must be a very recent development, because so far this has been a reason for not using them at scale. And no one is.

Do you have a source?

8 days ago

danielmarkbruce

Lots of companies are using them at scale.

8 days ago

reverius42

To err is human. To err at scale is AI.

9 days ago

cetu86

I fear that we'll see a lot of humans err at scale next Tuesday. Global warming is another example of human error at scale.

8 days ago

fuzztester

>next Tuesday.

USA (s)election, I guess.

8 days ago

danielmarkbruce

To err at scale isn't unique to AI. We don't say "no software, it can err at scale".

8 days ago

munk-a

CEOs embracing the marginal gains of LLMs by dumping billions into it are certainly great examples of humans erring at scale.

8 days ago

fuzztester

yep, nano mega.

8 days ago

trod123

It is by will alone that I set my mind in motion.

It is by the juice of Sapho that thoughts acquire speed, the lips become stained, the stains become a warning...

8 days ago

fuzztester

err, "hallucinate" is the euphemism you're looking for. ;)

8 days ago

arkh

I don't like the use of "hallucinate". It implies that LLMs have some kind of model of reality and sometimes get confused. They don't have any kind of model of anything; they cannot "hallucinate", they can only output wrong results.

8 days ago

fuzztester

>They don't have any kind of model of anything, they cannot "hallucinate", they can only output wrong results.

it's even more fundamental than that.

even if they had any model, they would not be able to think.

thinking requires consciousness. only humans and some animals have it. maybe plants too.

machines? no way, jose.

8 days ago

fuzztester

yeah, i get you. it was a joke, though.

that "hallucinate" term is a marketing gimmick to make it seem to the gullible that this "AI" (i.e. LLMs) can actually think, which is flat out BS.

as many others have said here on hn, those who stand to benefit a lot from this are the ones promoting this bullcrap idea (that they (LLMs) are intelligent).

greater fool theory.

picks and shovels.

etc.

In detective or murder novels, the cliche is "look for the woman".

https://en.m.wikipedia.org/wiki/Cherchez_la_femme

in this case, "follow the money" is the translation, i.e. who really benefits (the investors and founders, the few), as opposed to who is grandly proclaimed to be the beneficiary (us, the many).

8 days ago

fuzztester

s/grand/grandiose/g

from a search for grand vs grandiose:

When it comes to bigness, there's grand and then there's grandiose. Both words can be used to describe something impressive in size, scope, or effect, but while grand may lend its noun a bit of dignity (i.e., “we had a grand time”), grandiose often implies a whiff of pretension.

https://www.merriam-webster.com/dictionary/grandiose

7 days ago

openrisk

> Humans are machines which make errors.

Indeed, and one of the most interesting errors some human machines are making is hallucinating false analogies.

8 days ago

danielmarkbruce

It wasn't an analogy.

8 days ago

[deleted]
8 days ago

goatlover

Machines are intelligently designed for a purpose. Humans are born and grow up, have social lives, a moral status and are conscious, and are ultimately the product of a long line of mindless evolution that has no goals. Biology is not design. It's way messier.

8 days ago

nuancebydefault

Exactly my thought. Humans can correct humans. Machines can correct, or at least point to failures in the product of, machines.

9 days ago

[deleted]
8 days ago

paradox242

I don't see how this is sustainable. We have essentially eaten the seed corn. These current LLMs have been trained by an enormous corpus of mostly human-generated technical knowledge from sources which we already know to be currently being polluted by AI-generated slop. We also have preliminary research into how poorly these models do when training on data generated by other LLMs. Sure, it can coast off of that initial training set for maybe 5 or more years, but where will the next giant set of unpolluted training data come from? I just don't see it, unless we get something better than LLMs which is closer to AGI or an entire industry is created to explicitly create curated training data to be fed to future models.

9 days ago

_DeadFred_

These tools also require the developer class they are intended to replace to continue doing what they currently do (creating the knowledge sources the AI is trained on). It's not like the AIs are going to create the accessible knowledge bases to train AIs on, especially for new language extensions/libraries/etc. This is a one-and-done kind of development: it will give a one-time gain, and then companies will be shocked when it falls apart and there are no developers trained up (because they all had to switch careers) to replace them. Unless Google's expectation is that all languages/development/libraries will just be static going forward.

9 days ago

layer8

One of my concerns is that AI may actually slow innovation in software development (tooling, languages, protocols, frameworks and libraries), because the opportunity cost of adopting them will increase, if AI remains unable to be taught new knowledge quickly.

8 days ago

mathw

It also bugs me that these tools will reduce the incentive to write better frameworks and language features if all the horrible boilerplate is just written by an LLM for us rather than finding ways to design systems which don't need it.

The idea that our current languages might be as far as we get is absolutely demoralising. I don't want a tool to help me write pointless boilerplate in a bad language, I want a better language.

8 days ago

batty_alex

This is my main concern. What's the point of other tools when none of the LLMs have been trained on it and you need to deliver yesterday?

It's an insanely conservative tool

8 days ago

jamil7

You already see this if you use a language outside of Python, JS or SQL.

8 days ago

wahnfrieden

that is solved via larger contexts

8 days ago

layer8

It’s not, unless contexts get as large as comparable training materials. And you’d have to compile adequate materials. Clearly, just adding some documentation about $tool will not have the same effect as adding all the gigabytes of internet discussion and open source code regarding $tool that the model would otherwise have been trained on. This is similar to handing someone documentation and immediately asking questions about the tool, compared to asking someone who had years of experience with the tool.

Lastly, it’s also a huge waste of energy to feed the same information over and over again for each query.

8 days ago

wahnfrieden

- context of millions of tokens is frontier

- context over training is like someone referencing docs vs vaguely recalling from decayed memory

- context caching

8 days ago

layer8

You’re assuming that everything can be easily known from documentation. That’s far from the truth. A lot of what LLMs produce is informed by having been trained on large amounts of source code and large amounts of discussions where people have shared their knowledge from experience, which you can’t get from the documentation.

8 days ago

0points

Yea, I'm thinking along the same lines.

The companies valuing the expensive talent currently working at Google will be the winners.

Google and others are betting big right now, but I feel the winners might be those who watch how it unfolds first.

8 days ago

brainwad

The LLM codegen at Google isn't unsupervised. It's integrated into the IDE as both autocomplete and prompt-based assistant, so you get a lot of feedback from a) what suggestions the human accepts and b) how they fix the suggestion when it's not perfect. So future iterations of the model won't be trained on LLM output, but on a mixture of human written code and human-corrected LLM output.

As a dev, I like it. It speeds up writing easy but tedious code. It's just a bit smarter version of the refactoring tools already common in IDEs...

9 days ago

kelnos

What about (c) the human doesn't realize the LLM-generated code is flawed, and accepts it?

8 days ago

monocasa

I mean what happens when a human doesn't realize the human generated code is wrong and accepts the PR and it becomes part of the corpus of 'safe' code?

8 days ago

jaredsohn

Presumably someone will notice the bug in both of these scenarios at some point and it will no longer be treated as safe.

8 days ago

skydhash

Do you ask a junior to review your code or someone experienced in the codebase?

8 days ago

loki-ai

maybe most of the code in the future will be very different from what we’re used to. For instance, AI image processing/computer vision algorithms are being adopted very quickly given the best ones are now mostly transformers networks.

8 days ago

spockz

My main gripe with this form of code generation is that it is primarily used to generate "leaf" code. Code that will not be further adjusted or refactored into the right abstractions.

It is now very easy to sprinkle in regexes to validate user input, like email addresses, in every controller instead of using a central lib/utility for that.
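
For example, instead of every controller growing its own copy of an inline check, the better version is one shared helper (the name and regex here are illustrative and deliberately simplistic):

    // central, reused everywhere; fix it once if the rule ever changes
    public static boolean isValidEmail(String email) {
        return email != null && email.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");
    }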

In the hands of a skilled engineer it is a good tool. But for the rest it mainly serves to output more garbage at a higher rate.

8 days ago

cdchn

>It is now very easy to sprinkle in regexes to validate user input , like email addresses, on every controller instead of using a central lib/utility for that.

Some people are touting this as a major feature. "I don't have to pull in some dependency for a minor function - I can just have AI write that simple function for me." I, personally, don't see this as a net positive.

8 days ago

spockz

Yes, I have heard similar arguments before. It could be an argument for including the functionality in the standard lib for the language. There can be a long debate about dependencies, and then there is still the benefit of being able to vendor and prune them.

The way it is now just leads to bloat and cruft.

8 days ago

philipwhiuk

> The direction of travel is very clear

And if we get 9 women we can produce a baby in a single month.

There's no guarantee such progression will continue. Indeed, there's much more evidence it is coming to a halt.

8 days ago

Towaway69

It might also be an example of 80/20 - we're just entering the 20% of features that take 80% of the time & effort.

It might be possible, but will shareholders/investors foot the bill for the 80% that they still have to pay?

8 days ago

farseer

It's not even been 2 years, and you think things are coming to a halt?

8 days ago

0points

Yes. The models require training data and they have already been fed the internet.

More and more of the content generated since is LLM generated and useless as training data.

The models get worse, not better by being fed their own output, and right now they are out of training data.

This is why Reddit just went profitable, AI companies buy their text to train their models because it is at least somewhat human written.

Of course, even reddit is crawling with LLM generated text, so yes. It is coming to a halt.

8 days ago

CaptainFever

Data is not the only factor. Architecture improvements, data filtering etc. matter too.

8 days ago

simianparrot

I know for a fact they are because rate _and_ quality of improvement is diminishing exponentially. I keep a close eye on this field as part of my job.

8 days ago

lelanthran

> Don't look at the state of the art not, look at the direction of travel.

That's what people are doing. The direction of travel over the most recent few (6-12) months is mostly flat.

The direction of travel when first introduced was a very steep line going from bottom-left to top-right.

We are not there anymore.

8 days ago

olalonde

> I'm continually surprised by the amount of negativity

Maybe I'm just old, but to me, LLMs feel like magic. A decade ago, anyone predicting their future capabilities would have been laughed at.

8 days ago

Towaway69

Magic Makes Money - the more magical something seems, the more people are willing to pay for that something.

The discussion here seems to bear this out: the CEO claims AI is magical; here the truth turns out to be that it's just an auto-complete engine.

8 days ago

guappa

Nah, you just were not up to speed with the current research. Which is completely normal. Now marketing departments are on the job.

8 days ago

davedx

Transformers were proposed in 2017. A decade ago none of this was predictable.

8 days ago

guappa

Emacs' psychologist was there from before :D

And so were a lot of Markov chain based chatbots. Also Doretta, the Microsoft AI/search engine chatbot.

Were they as good? No. Is this an iteration of those? Absolutely.

7 days ago

protomolecule

Kurzweil would disagree)

8 days ago

mmmpetrichor

That's the hype, isn't it? The direction of travel hasn't been proven to be more than surface-level yet.

8 days ago

randomNumber7

Because there seems to be a fundamental misunderstanding producing a lot of nonsense.

Of course LLMs are a fantastic tool to improve productivity, but current LLM's cannot produce anything novel. They can only reproduce what they have seen.

8 days ago

visarga

But they assist developers and collect novel coding experience from their projects all the time. Each application of an LLM creates feedback on the AI code - the human might leave it as is, slightly change it, or refuse it.

8 days ago

0points

> LLM based systems will be writing more and more code at all companies.

At Google, today, for sure.

I do believe we still are not across the road on this one.

> if this can be accompanied by an increase in software quality, which is possible. Right now its very hit and miss

So, is it really a smart move for Google to enforce this today, before quality has increased? Or does this set them on a path to losing market share because their software quality will deteriorate further over the next couple of years?

From the outside it just seems Google and others have no choice, they must walk this path or lose market valuation.

8 days ago

dogleash

> I'm continually surprised by the amount of negativity that accompanies these sort of statements.

I'm excited about the possibilities and I still recoil at the refined marketer prose.

8 days ago

fallingknife

I'm not really seeing this direction of travel. I hear a lot of claims, but they are always third-person. I don't know or work with any engineers who rely heavily on these tools for productivity. I don't even see any convincing videos on YouTube. Just show me one engineer sitting down with these tools for a couple hours and writing a feature that would normally take a couple of days. I'll believe it when I see it.

8 days ago

Roark66

Well, I rely on it a lot, but not in the IDE; I copy/paste my code and prompts between the IDE and the LLM. By now I have a library of prompts in each project that I can tweak and reuse. It makes me 25% up to 50% faster. Does this mean every project is done in 50-75% of the time? No, the actual completion time is maybe 10% faster, but I do get a lot more time to spend on thinking about the overall design instead of writing boilerplate and reading reference documents.

Why no YouTube videos though? Well, most dev YouTubers are actual devs who cultivate an image of "I'm faster than an LLM, I never re-read library references, I memorise them on first read" and so on. If they then show you a video of how they forgot the syntax for this or that Maven plugin config, and how the LLM fills it in in 10s instead of a 5-minute Google search, that makes them look less capable on their own. Why would they do that?

8 days ago

skydhash

Why don't you read reference documents? The thing with bite-sized information is that it never gives you a coherent global view of the space. It's like exploring a territory by crawling instead of using a map.

8 days ago

fallingknife

Can you give me an example of one of these useful prompts? I'd love to try it out.

8 days ago

fuzztester

you said it, bro.

8 days ago

baxtr

I think that at least partially the negativity is due to the tech bros hyping AI just like they hyped crypto.

8 days ago

reverius42

To me the most interesting part of this is the claim that you can accurately and meaningfully measure software engineering productivity.

9 days ago

ozim

You can - but not on the level of a single developer and you cannot use those measures to manage productivity of a specific dev.

For teams you can measure meaningful outcomes and improve team metrics.

You shouldn’t really compare teams but it also is possible if you know what teams are doing.

If you are some disconnected manager that thinks he can make decisions or improvements reducing things to single numbers - yeah that’s not possible.

9 days ago

deely3

> For teams you can measure meaningful outcomes and improve team metrics.

How? Which metrics?

9 days ago

anthonyskipper

My company uses the DORA metrics (deployment frequency, lead time for changes, change failure rate, and time to restore service) to measure the productivity of teams, and those metrics are incredibly good.

9 days ago

Capricorn2481

These are awesome, but feel more applicable to DevOps than anything else. Development can certainly affect these metrics, but assuming your code doesn't introduce a huge bug that crashes the server, this is mostly for people deploying apps.

I think it's harder to measure things like developer productivity. The closest thing we have is making an estimate and seeing how far off you are, but that doesn't account for hedging estimates or requirements suddenly changing. Changing requirements doesn't matter for DORA as it's just another sample to test for deployment.

8 days ago

neaanopri

There's only one metric that matters at the end of the day, and that's $. Revenue.

Unfortunately there's a lot of lag

8 days ago

ImaCake

> Unfortunately there's a lot of lag

A great generalisation and understatement! Often looking like you are becoming more efficient is more important than actually being more efficient, e.g. you need to impress investors. So you cut back on maintenance and other cost centres, and new management can blame you for it in 6 years' time, when you are far enough away from it for it not to hurt you.

8 days ago

fuzztester

s/Revenue/profit/g

8 days ago

ozim

That is what we pay managers to figure out. They should find out which metrics, and how, by knowing the team, being familiar with the domain, understanding company dynamics, understanding the customer, and understanding market dynamics.

9 days ago

seanmcdirmid

That's basically a non-answer. Measuring "productivity" is a well known hard problem, and managers haven't really figured it out...

9 days ago

mdorazio

It's not a non-answer. Good managers need to figure out what metrics make sense for the team they are managing, and that will change depending on the company and team. It might be new features, bug fixes, new product launch milestones, customer satisfaction, ad revenue, or any of a hundred other things.

9 days ago

seanmcdirmid

I would want a specific example in that case rather than "the good managers figure it out" because in my experience, the bad managers pretend to figure it out while the good managers admit that they can't figure it out. Worse still, if you tell your reports what those metrics are, they will optimize them to death, potentially tanking the product (I can increase my bug fix count if there are more bugs to fix...).

8 days ago

ozim

So for a specific example I would have to outline 1-2 years of history of a team and product as a starter.

Then I would have to go on outlining 6-12 months of trying stuff out.

Because if I just give "an example" I will get dozens of "smart ass" replies about how this specific one did not work for them and I am stupid. Thanks, but I don't have time for that, or for writing an essay that no one will read anyway before calling me stupid or demanding even more explanation. :)

8 days ago

seanmcdirmid

I get it, you are a true believer. I just disagree with your belief, and the fact that you can't bring credible examples to the table just reinforces that disagreement in my mind.

8 days ago

hshshshshsh

The thing is even bad managers can thrive in a company with a large userbase like Google. There is a lot of momentum built into product and engineering.

8 days ago

randomNumber7

I heard lines of code is a hot one.

8 days ago

hshshshshsh

So basically you have nothing useful to say?

8 days ago

ozim

I have to say that there is no solution that will work for "every team on every product".

It seems useful to understand and internalize that there are no simple answers like "use story points!".

There are also loads of people who don't understand that, so I stand by it being useful and important to repeat on every possible occasion.

8 days ago

yorwba

Economists are generally fine with defining productivity as the ratio of aggregate outputs to aggregate inputs.

Measuring it is not the hard part.

The hard part is doing anything about it. If you can't attribute specific outputs to specific inputs, you don't know how to change inputs to maximize outputs. That's what managers need to do, but of course they're often just guessing.

9 days ago

seanmcdirmid

Measuring human productivity is hard since we can't quantify output beyond silly metrics like lines of code written or amount of time speaking during meetings. Maybe if we were hunter/gatherers we could measure it by amount of animals killed.

9 days ago

rightbyte

> Maybe if we were hunter/gatherers we could measure it by amount of animals killed.

Even that would be hard since hunting is complex. If you are the one chasing the prey into the arms of someone else, you surely want it to be considered a team effort.

You need like 'blueberries picked'.

8 days ago

ozim

Well I pretty much see which team members are slacking and which are working hard.

But I do code myself, I write requirements so I do know which ones are trivial and which ones are not. I also see when there are complex migrations.

If you work in a group of people you will also get feedback - it doesn't have to be snitching, but you still get a feel for who the slacker in the group is.

It is hard to quantify the output if you want to be a removed-from-the-group, "give me a number" manager. If you actually do the work of a manager and get a feel for the group - who is the "Hermione Granger" nagging that others are slacking (whose opinion you can disregard), who is the "silent doer", who is the "we should do it properly" bullshitter - you can make a lot of meaningful adjustments.

8 days ago

yorwba

That's why upthread we have https://news.ycombinator.com/item?id=41992562

"You can [accurately and meaningfully measure software engineering productivity] - but not on the level of a single developer and you cannot use those measures to manage productivity of a specific dev."

At the level of a company like Google, it's easy: both inputs and outputs are measured in terms of money.

9 days ago

ozim

As you point back to my comment:

I am not an Amazon person, but from my experience two-pizza teams were what worked - I never implemented them myself, that's just what I observed in the wild.

Measuring Google in terms of money is also flawed; there is loads of BS hidden there, and lots of people pay big companies more just because they are big companies.

8 days ago

js8

> Maybe if we were hunter/gatherers we could measure it by amount of animals killed.

So that's how animal husbandry came about!

8 days ago

beefnugs

haha that is not what managers do. Managers follow their KPIs exactly. If their KPIs say they get paid a bonus if profit goes up, then the manager does smart number stuff, sees "if we fire 15% of employees this year, my pay goes up 63%", and then that happens

8 days ago

hshshshshsh

That sounds like a micro manager. I would imagine good engineers can figure out something for themselves.

8 days ago

ChoHag

[dead]

8 days ago

zac23or

I knew a superstar developer who worked on reports in an SQL tool. In the company metrics, the developer scored 420 points per month, the second developer scored 60 points. “Please learn how to score more points from the leader”, the boss would say.

The superstar developer’s secret… he would send blank reports to clients (who would only realize it days later, and someone else would end up redoing the report), and he would score many more points without doing anything. I’ve seen this happen a lot in many different companies. As a friend of mine used to say, “it’s very rare, but it happens all the time.”

I have no doubt that AI can help developers, but I don’t trust the metrics of the CEO or people who work on AI, because they are too involved in the subject.

8 days ago

svieira

> When people are pressured to meet a target value there are three ways they can proceed:

1) They can work to improve the system

2) They can distort the system

3) Or they can distort the data

https://commoncog.com/goodharts-law-not-useful/

8 days ago

torginus

Honestly I doubt he got away with this for long (unless it was a very dysfunctional org). Being the best gets you noticed (in a good way), and screwing people over gets you noticed too (in a bad way); the combination of the two paints a target on your back.

8 days ago

GeoAtreides

> Being the best gets you noticed (in a good way), and screwing people over gets you noticed too (in a bad way),

ah, to be young again...

8 days ago

torginus

I don't know what you're implying - I have had a few instances in my career when I went above and beyond and while I didn't receive too much praise for my efforts directly, after a while I noticed people who had no business knowing who I was, actually did.

Now, I was really bad at capitalizing on it, so nothing much came of it, but still, there are some positive things that higher-ups do notice.

7 days ago

UncleMeat

At scale you can do this in a bunch of interesting ways. For example, you could measure "amount of time between opening a crash log and writing the first character of a new change" across 10,000s of engineers. Yes, each individual data point is highly messy. Alice might start coding as a means of investigation. Bob might like to think about the crash over dinner. Carol might get a really hard bug while David gets a really easy one. But at scale you can see how changes in the tools change this metric.

None of this works to evaluate individuals or even teams. But it can be effective at evaluating tools.

9 days ago

fwip

There's lots of stuff you can measure. It's not clear whether any of it is correlated with productivity.

To use your example, a user with an LLM might say "LLM please fix this" as a first line of action, drastically improving this metric, even if it ruins your overall productivity.

8 days ago

valval

You can come up with measures for it and then watch them, that’s for sure.

9 days ago

lr1970

When a metric becomes the target it ceases to be a good metric. Once developers discover how it works, they will type the first character immediately after opening the log.

edit: typo

9 days ago

joshuamorton

Only if the developer is being judged on the thing. If the tool is being judged on the thing, it's much less relevant.

That is, I, personally, am not measured on how much AI generated code I create, and while the number is non-zero, I can't tell you what it is because I don't care and don't have any incentive to care. And I'm someone who is personally fairly bearish on the value of LLM-based codegen/autocomplete.

8 days ago

valval

That was my point, veiled in an attempt to be cute.

8 days ago

LinuxBender

Is AI ready to crawl through all open source and find / fix all the potential security bugs or all bugs for that matter? If so will that become a commercial service or a free service?

Will AI be able to detect bugs and back doors that require multiple pieces of code working together rather than being in a single piece of code? Humans have a hard time with this.

- Hypothetical Example: Authentication bugs in sshd that requires a flaw in systemd which then requires a flaw in udev or nss or PAM or some underlying library ... but looking at each individual library or daemon there are no bugs that a professional penetration testing organization such as the NCC group or Google's Project Zero would find. In other words, will AI soon be able to find more complex bugs in a year than Tavis has found in his career and will they start to compete with one another and start finding all the state sponsored complex bugs and then ultimately be able to create a map that suggests a common set of developers that may need to be notified? Will there be a table that logs where AI found things that professional human penetration testers could not?

9 days ago

0points

No, that would require AGI. Actual reasoning.

Adversaries are already detecting issues though, using proven means such as code review and fuzzing.

Google Project Zero consists of a team of rock star hackers. I don't see LLMs even replacing junior devs right now.

8 days ago

paradox242

Seems like there is more gain on the adversary side of this equation. Think nation-states like North Korea or China, and commercial entities like Pegasus Group.

9 days ago

AnimalMuppet

Google's AI would have the advantage of the source code. The adversaries would not. (At least, not without hacking Google's code repository, which isn't impossible...)

9 days ago

saagarjha

FWIW: NSO is the group, Pegasus is their product

8 days ago

nycdatasci

You mention safety as #1, but my impression is that Google has taken a uniquely primitive approach to safety with many of their models. Instead of influencing the weights of the core model, they check core model outputs with a tiny and much less competent “safety model”. This approach leads to things like a text-to-image model that refuses to output images when a user asks to generate “a picture of a child playing hopscotch in front of their school, shot with a Sony A1 at 200 mm, f2.8”. Gemini has a similar issue: it will stop mid-sentence, erase its entire response and then claim that something is likely offensive and it can’t continue.

The whole paradigm should change. If you are indeed responsible for developer tools, I would hope that you’re actively leveraging Claude 3.5 Sonnet and o1-preview.

8 days ago

wslh

As someone working in cybersecurity and actively researching vulnerability scanning in codebases (including with LLMs), I’m struggling to understand what you mean by “safe.” If you’re referring to detecting security vulnerabilities, then you’re either working on a confidential project with unpublished methods, or your approach is likely on par with the current state of the art, which primarily addresses basic vulnerabilities.

9 days ago

bcherny

How are you measuring productivity? And is the effect you see in A/B tests statistically significant? Both of these were challenging to do at Meta, even with many thousands of engineers - curious what worked for you.

8 days ago

assanineass

Was this comment cleared by comms

8 days ago

[deleted]
8 days ago

bogwog

Is any of the AI generated code being committed to Google's open source repos, or is it only being used for private/internal stuff?

9 days ago

fhdsgbbcaA

I’ve been thinking a lot lately about how an LLM trained on really high-quality code would perform.

I’m far from impressed with the output of GPT/Claude; all they’ve done is weight their training toward Stack Overflow - which is still low-quality code relative to Google's.

What is the probability Google makes this a real product, or is it too likely to autocomplete trade secrets?

9 days ago

hshshshshsh

Seems like everything is working out without any issues. Shouldn't you be a bit suspicious?

9 days ago

mysterydip

I assume the amount of monitoring effort is less than the amount of effort that would be required to replicate the AI generated code by humans, but do you have numbers on what that ROI looks like? Is it more like 10% or 200%?

9 days ago

ActionHank

Would you say that the efficiency gain is less than, equal to, or greater than the cost?

It's always felt like having AI in the cloud for better autocomplete is a lot for a small gain.

8 days ago

[deleted]
8 days ago

Twirrim

> We work very closely with Google DeepMind to adapt Gemini models for Google-scale coding and other Software Engineering usecases.

Considering how terrible and frequently broken the code that the public facing Gemini produces, I'll have to be honest that that kind of scares me.

Gemini frequently fails at some fairly basic stuff, even in popular languages where it would have had a lot of source material to work from; where other public models (even free ones) sail through.

To give a fun, fairly recent example, here's a prime factorisation algorithm it produced for Python:

  # Find the prime factorization of n
  prime_factors = []
  while n > 1:
    p = 2
    while n % p == 0:
      prime_factors.append(p)
      n //= p
    p += 1
  prime_factors.append(n)
Can you spot all the problems?
9 days ago

ijidak

I'm the first to say that AI will not replace human coders.

But I don't understand this attempt to tell companies/persons that are successfully using AI that no they really aren't.

In my opinion, if they feel they're using AI successfully, the goal should be to learn from that.

I don't understand this need to tell individuals who say they are successfully using AI that, "no you aren't."

It feels like a form of denial.

Like someone saying, "I refuse to accept that this could work for you, no matter what you say."

8 days ago

kgeist

They probably use AI for writing tests, small internal tools/scripts, building generic frontends and quick prototypes/demos/proofs of concept. That could easily be that 25% of the code. And modern LLMs are pretty okayish with that.

9 days ago

gerash

I believe most people use AI to help them quickly figure out how to use a library or an API without having to read all their (often outdated) documentation, rather than to help them solve some mathematical challenge

9 days ago

delfinom

I've never had an AI say "that doesn't exist" instead of just making up an API that doesn't exist. Lol

8 days ago

taeric

If the documentation is out of date, such that it doesn't help, this doesn't bode well for the training data of the AI helping it get it right, either?

9 days ago

macintux

AI can presumably integrate all of the forum discussions talking about how people really use the code.

Assuming discussions don't happen in Slack, or Discord, or...

9 days ago

woodson

Unfortunately, it often hallucinates wrong parameters (or gets their order wrong) if there are multiple different APIs for similar packages. For example, there are plenty of ML model inference packages, and the code suggestions for NVIDIA Triton Inference Server Python code are pretty much always wrong, as it generates code that’s probably correct for other Python ML inference packages with a slightly different API.

8 days ago

jon_richards

I often find the opposite. Documentation can be up to date, but AI suggests deprecated or removed functions because there’s more old code than new code. Pgx v5 is a particularly consistent example.

8 days ago

randomNumber7

And all the code on which it was trained...

8 days ago

Capricorn2481

Forum posts can also be out of date.

8 days ago

randomNumber7

I think that too but google claims something else.

8 days ago

calf

We are sorely lacking a "Make Computer Science a Science" movement, the tech lead's blurb is par for the course, talking about "SWE productivity" with no reference to scientific inquiry and a foundational understanding of safety, correctness, verification, validation of these new LLM technologies.

8 days ago

almostgotcaught

Did you know that Google is a for-profit business and not a university? Did you know that most places where people work on software are the same?

8 days ago

zifpanachr23

So are most medical facilities. Somehow, the vibes are massively different.

8 days ago

almostgotcaught

That's rich? Never heard of the opioid crisis? Or the over-prescription of imaging tests?

8 days ago

calf

Did you know that Software Engineering is a university level degree? That it is a field of scientific study, with professors who dedicate their lives to it? What happens when companies ignore science and worse yet cause harm like pollution or medical malpractice, or in this case, spread Silicon Valley lies and bullshit???

Did you know? How WEIRD.

How about you not harass other commenters with such arrogantly ignorant sarcastic questions?? Or is that part of corporate "for-profit" culture too????

8 days ago

almostgotcaught

> Did you know that Software Engineering is a university level degree? That it is a field of scientific study, with professors who dedicate their lives to it?

So is marketing? So finance? So is petroleum engineering?

8 days ago

justinpombrio

> Can you spot all the problems?

You were probably being rhetorical, but there are two problems:

- `p = 2` should be outside the loop

- `prime_factors.append(n)` appends `1` onto the end of the list for no reason

With those two changes I'm pretty sure it's correct.
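
For reference, a minimal sketch with those two changes applied (wrapped in a factorize function, since the original snippet omits its surrounding definition):

    def factorize(n):
        # Trial division: collect the prime factors of n with multiplicity.
        prime_factors = []
        p = 2                      # moved outside the loop so p keeps advancing
        while n > 1:
            while n % p == 0:
                prime_factors.append(p)
                n //= p
            p += 1
        return prime_factors       # no trailing append of the leftover 1

    print(factorize(360))  # [2, 2, 2, 3, 3, 5]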

8 days ago

kenjackson

You don't need to append 'p' in the inner while loop more than once. Maybe instead of an array for keeping the list of prime factors do it in a set.

8 days ago

zeroonetwothree

It’s valid to return the multiplicity of each prime, depending on the goal of this.

8 days ago

rmbyrro

`n` isn't defined

8 days ago

justinpombrio

The implicit context that the poster removed (as you can tell from the indentation) was a function definition:

    def factorize(n):
      ...
      return prime_factors
8 days ago

[deleted]
8 days ago

dangsux

[dead]

8 days ago

senko

We collectively deride leetcoding interviews yet ask AI to flawlessly solve leetcode questions.

I bet I'd make more errors on my first try at it.

9 days ago

AnimalMuppet

Writing a prime-number factorization function is hardly "leetcode".

9 days ago

senko

I didn't say it's hard, but it's most definitely leetcode, as in "pointless algorithmic exercise that will only show you if the candidate recently worked on a similar question".

If that doesn't satisfy, here's a similar one at leetcode.com: https://leetcode.com/problems/distinct-prime-factors-of-prod...

I would not expect a programmer of any seniority to churn out stuff like that and have it working without testing.

8 days ago

AnimalMuppet

> "pointless algorithmic exercise that will only show you if the candidate recently worked on a similar question".

I've been able to write one, not from memory but from first principles, any time in the last 40 years.

8 days ago

senko

Curious, I would expect a programmer of your age to remember Knuth's "beware of bugs in the above code; I have only proved it correct, not tried it".

I'm happy you know math, but my point before this thread got derailed was that we're holding (coding) AI to a higher standard than actual humans, namely to expect to write bug-free code.

8 days ago

0points

> my point before this thread got derailed was that we're holding (coding) AI to a higher standard than actual humans, namely to expect to write bug-free code

This seems like a very layman attitude and I would be surprised to find many devs adhering to this idea. Comments in this thread alone suggests that many devs on HN do not agree.

8 days ago

smrq

I hold myself to a higher standard than AI tools are capable of, from my experience. (Maybe some people don't, and that's where the disconnect is between the apologists and the naysayers?)

8 days ago

Jensson

Humans can actually run the code and know what it should output. The LLM can't, and putting it in a loop against code output doesn't work well either, since the LLM can't navigate that well.

8 days ago

eesmith

A senior programmer like me knows that primality-based problems like the one posed in your link are easily gamed.

Testing for small prime factors is easy - brute force is your friend. Testing for large prime factors requires more effort. So the first trick is to figure out the bounds to the problem. Is it int32? Then brute-force it. Is it int64, where you might have a value like the Mersenne prime 2^61-1? Perhaps it's time to pull out a math reference. Is it longer, like an unbounded Python int? Definitely switch to something like the GNU Multiple Precision Arithmetic Library.

In this case, the maximum value is 1,000, which means we can enumerate all distinct prime values in that range, and test for their presence in each input value, one by one:

    # list from https://www.math.uchicago.edu/~luis/allprimes.html
    _primes = [
        2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59,
        61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131,
        137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197,
        199, 211, 223, 227, 229, 233, 239, 241, 251, 257, 263, 269, 271,
        277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353,
        359, 367, 373, 379, 383, 389, 397, 401, 409, 419, 421, 431, 433,
        439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509,
        521, 523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601,
        607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661, 673, 677,
        683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769,
        773, 787, 797, 809, 811, 821, 823, 827, 829, 839, 853, 857, 859,
        863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953,
        967, 971, 977, 983, 991, 997]

    def distinctPrimeFactors(nums: list[int]) -> int:
        if __debug__:
            # The problem definition gives these constraints
            assert 1 <= len(nums) <= 10_000, "size out of range"
            assert all(2 <= num <= 1000 for num in nums), "num out of range"

        num_distinct = 0
        for p in _primes:
            for num in nums:
                if num % p == 0:
                    num_distinct += 1
                    break
        return num_distinct
That worked without testing, though I felt better after I ran the test suite, which found no errors. Here's the test suite:

    import unittest

    class TestExamples(unittest.TestCase):
        def test_example_1(self):
            self.assertEqual(distinctPrimeFactors([2,4,3,7,10,6]), 4)

        def test_example_2(self):
            self.assertEqual(distinctPrimeFactors([2,4,8,16]), 1)

        def test_2_is_valid(self):
            self.assertEqual(distinctPrimeFactors([2]), 1)

        def test_1000_is_valid(self):
            self.assertEqual(distinctPrimeFactors([1_000]), 2) # (2*5)**3

        def test_10_000_values_is_valid(self):
            values = _primes[:20] * (10_000 // 20)
            assert len(values) == 10_000
            self.assertEqual(distinctPrimeFactors(values), 20)

    @unittest.skipUnless(__debug__, "can only test in debug mode")
    class TestConstraints(unittest.TestCase):
        def test_too_few(self):
            with self.assertRaisesRegex(AssertionError, "size out of range"):
                distinctPrimeFactors([])
        def test_too_many(self):
            with self.assertRaisesRegex(AssertionError, "size out of range"):
                distinctPrimeFactors([2]*10_001)
        def test_num_too_small(self):
            with self.assertRaisesRegex(AssertionError, "num out of range"):
                distinctPrimeFactors([1])
        def test_num_too_large(self):
            with self.assertRaisesRegex(AssertionError, "num out of range"):
                distinctPrimeFactors([1_001])

    if __name__ == "__main__":
        unittest.main()
I had two typos in my test suite (an "=" for "==", and a ", 20))" instead of "), 20)"), and my original test_num_too_large() tested 10_001 instead of the boundary case of 1_001, so three mistakes in total.

If I had no internet access, I would compute that table thusly:

  _primes = [2]
  for value in range(3, 1000):
    if all(value % p > 0 for p in _primes):
        _primes.append(value)
Do let me know of any remaining mistakes.

What kind of senior programmers do you work with who can't handle something like this?

EDIT: For fun I wrote an implementation based on sympy's integer factorization:

    from sympy.ntheory import factorint
    def distinctPrimeFactors(nums: list[int]) -> int:
        distinct_factors = set()
        for num in nums:
            distinct_factors.update(factorint(num))
        return len(distinct_factors)
Here's a new test case, which takes about 17 seconds to run:

        def test_Mersenne(self):
            self.assertEqual(distinctPrimeFactors(
                [2**44497-1, 2,4,3,7,10,6]), 5)
8 days ago

atomic128

Empirical testing (for example: https://news.ycombinator.com/item?id=33293522) has established that the people on Hacker News tend to be junior in their skills. Understanding this fact can help you understand why certain opinions and reactions are more likely here. Surprisingly, the more skilled individuals tend to be found on Reddit (same testing performed there).

8 days ago

louthy

I’m not sure that’s evidence; I looked at that and saw it was written in Go and just didn’t bother. As someone with 40 years of coding experience and a fundamental dislike of Go, I didn’t feel the need to even try. So the numbers can easily be skewed, surely.

8 days ago

atomic128

Only individuals who submitted multiple bad solutions before giving up were counted as failing. If you look but don't bother, or submit a single bad solution, you aren't counted. Thousands of individuals were tested on Hacker News and Reddit, and surprisingly, it's not even close: Reddit is where the hackers are. I mean, at the time of the testing, years ago.

8 days ago

louthy

That doesn’t change my point. It didn’t test every dev on all platforms, it tested a subset. That subset may well have different attributes to the ones that didn’t engage. So, it says nothing about the audience for the forums as a whole, just the few thousand that engaged.

Perhaps even, there could be fewer Go programmers here and some just took a stab at it even though they don’t know the language. So it could just select for which forum has the most Go programmers. Hardly rigorous.

So I’d take that with a pinch of salt personally

8 days ago

atomic128

Agreed. But remember, this isn't the only time the population has been tested. This is just the test (from two years ago, in 2022) that I happen to have a link to.

8 days ago

louthy

The population hasn’t been tested. A subset has.

8 days ago

59nadir

It's also fine to be an outlier. I've been programming for 24 years and have been hanging out on HackerNews on and off for 11. HN was way more relevant to me 11 years ago than it is now, and I don't think that's necessarily only because the subject matter changed, but probably also because I have.

8 days ago

Izikiel43

How is that thing testing? Is it expecting a specific solution or actually running the code? I tried some solutions and it complained anyways

8 days ago

atomic128

The way the site works is explained in the first puzzle, "Hack This Site". TLDR, it builds and runs your code against a test suite. If your solutions weren't accepted, it's because they're wrong.

8 days ago

0xDEAFBEAD

Where is the data?

8 days ago

freilanzer

Yeah, this is useless.

8 days ago

[deleted]
8 days ago

gamesetmath

[flagged]

9 days ago

pixxel

[flagged]

9 days ago

devonbleak

It's Go. 25% of the code is just basic error checking and returning nil.

9 days ago

QuercusMax

In Java, 25% of the code is import statements and curly braces

9 days ago

layer8

You generally don’t write those by hand though.

I’m pretty sure around 50% of the code I write is already auto-complete, without any AI.

8 days ago

amomchilov

Exactly, you write them with AI

8 days ago

throwaway106382

IDEs have been auto completing braces, inserting imports and generating framework boilerplate for decades.

We don’t need AI for this and it’s 10x the compute to do it slower with AI.

LLMs are useful but they aren’t a silver bullet. We don’t need to replace everything with it just because.

8 days ago

philipwhiuk

Yeah, but the management achievement is to call 'autocomplete' AI.

AI doesn't mean LLM after all. AI means 'a computer thing'.

8 days ago

throwaway106382

I’ve been calling if-statements AI since before I graduated college

8 days ago

[deleted]
8 days ago

jansan

Simply stretch your definition of AI and voilà, you are writing it with AI.

8 days ago

rwmj

The most important thing is to put out a press release about how half your code is written by AI.

8 days ago

contravariant

In lisp about 50% of the code is just closing parentheses.

8 days ago

harry8

Heh, but it can't be that; no reason to think LLMs can count brackets needing a close any more than they can count words.

8 days ago

int_19h

LLMs can count words (and letters) just fine if you train them to do so.

Consider the fact that GPT-4 can generate valid XML (meaning balanced tags, quotes etc) in base64-encoded form. Without CoT, just direct output.

8 days ago

maleldil

That's GPT-4, which you wouldn't use for in-line suggestions because it's too slow.

I don't know what model Copilot uses these days, but it constantly makes bracket mistakes in Python.

8 days ago

int_19h

You don't need a GPT-4-sized model to count brackets. You just need to make sure that your training data includes enough cases like that for NN to learn it. My point is that GPT-4 can do much more complicated things than that, so there's nothing specific about LMs that preclude them from doing this kind of stuff right.

7 days ago

overhead4075

Logically, it couldn't be 50% since that would imply that the other 50% would be open brackets and that would leave 0% room for macros.

8 days ago

philipwhiuk

That's just a rounding error ;)

8 days ago

[deleted]
8 days ago

xxs

Over 3 imports from the same package - use an asterisk.

8 days ago

NeoTar

Does auto-code generation count as AI?

9 days ago

remram

Another 60% is auto-generated protobuf/grpc code. Maybe protoc counts as "AI".

8 days ago

GeneralMayhem

Google does not check in protoc-generated code. It's all generated on demand by Blaze/Bazel.

8 days ago

remram

Oh thanks for the info.

On the other hand, that doesn't mean it doesn't count for the purpose of this press release/advertisement...

8 days ago

Groxx

new headline: Protoc Generates Many Times More Code Than Humans Or AI

(because it's like 50k lines regenerated for every build, everywhere, all the time)

4 days ago

hiddencost

Go is a very small fraction of the code at Google.

8 days ago

yangcheng

Having worked at both FAANG companies and startups, I can offer a perspective on AI's coding impact in different environments. At startups, engineers work with new tech stacks, start projects from scratch, and need to ship something quickly. LLMs can write way more code there. I've seen ML engineers build React frontends without any previous frontend experience, and Flutter developers write 100-line SQL queries for data analysis; LLMs are a 10x productivity boost for this type of work. At FAANG companies, codebases contain years of business logic, edge cases, and 'not-bugs-but-features.' Engineers know their tech stacks well, legacy constraints make LLMs less effective, and they can generate wrong code that needs to be fixed

8 days ago

davnicwil

It might not quite be there yet, but one key advantage large codebases have that I think LLMs in time will be able to better exploit is the detection of existing patterns - presuming they're consistent - and application to new code doing similar things or to fix bugs in existing code that deviates from the pattern in some way that causes a bug.

It's a different thing to what you're talking about, but it's one way I'd expect to see LLMs contribute a lot to productivity on larger codebases specifically.

8 days ago

mdgrech23

large application codebase - consistent - have you worked in the field? I feel like usually there are 3 or 4 patterns from different people/teams at different points in time that spearheaded a particular ideology about how things "should" be done.

8 days ago

dep_b

A quarter of all new code? Of course. Especially if you include all "smart autocomplete" code.

When dealing with a fermenting pile of technical debt? I expect very little. LLMs don't have application-wide context yet.

AI is definitely revolutionizing our field, but the same people that said that no-code tools and all of the other hype-of-the-decade technologies would make developers jobless are actually the people AI is making jobless.

Generate an opinion piece about how AI is going to make developers jobless, using AI? Less than a minute. And you don't need to maintain that article, once it's published, it's done.

While there's a tsunami of AI-generated almost-there projects coming that need to be moved to a shippable and sellable state. So I'm more afraid about the kind of work I'm going to get while still getting paid handsomely for my skills, than ever being jobless as the only guy that really understands the whole stack from top to bottom.

8 days ago

randomdata

At the end of the day an LLM is just a compiler anyway. The developer isn't going away even if 100% of the code is generated by LLMs, just as the developer didn't go away when we stopped spending our days flipping toggle switches.

8 days ago

dep_b

I'm actually surprised that _the others_ always think that the programmers somehow will make themselves obsolete first? If it gets cheaper to make software, more software will be made, until we reach the point again we're running short on people capable enough to keep it all running.

8 days ago

fzysingularity

While I get the MBA-speak of lines-of-code that AI is now able to accomplish, it does make me think about their highly-curated internal codebase that makes them well placed to potentially get to 50% AI-generated code.

One common misconception is that all LLMs are the same. The models are trained the same, but trained on wildly different datasets. Google, and more specifically the Google codebase, is arguably one of the most curated and iterated-on datasets in existence. This is a massive lever for Google to train their internal code-gen models that realistically could easily replace any entry-level or junior developer.

- Code review is another dimension of the process of maintaining a codebase that we can expect huge improvements with LLMs. The highly-curated commentary on existing code / flawed diff / corrected diff that Google possesses give them an opportunity to build a whole set of new internal tools / infra that's extremely tailored to their own coding standard / culture.

9 days ago

bqmjjx0kac

> that realistically could easily replace any entry-level or junior developer.

This is a massive, unsubstantiated leap.

8 days ago

risyachka

The issue is it doesn't really replace a junior dev. You become one - as you have to babysit it all the time, check every line of code, and beg it to make it work.

In many cases it is counterproductive

8 days ago

throwaway106382

I’d take pair programming with a junior over a GPT bot any day.

8 days ago

neaanopri

I'd take coding by my own damn self over either a junior or a GPT bot

8 days ago

jtbetz22

> Google codebase is arguably one of the most curated, and iterated on datasets in existence

I spent 12 years of my career in the Google codebase.

This assertion is technically correct in that google3 has been around for 20 years, and all code gets reviewed, but the implication that Google's codebase is a high-quality training set is not consistent with my experience.

8 days ago

unit149

Philosophically, these models are akin to scholars prior to differentiation during their course of study. Throttling data, depending on one's course of study, and this shifting of the period in history step-by-step. Either it's a tit-for-tat manner of exchange that the junior developer is engaged in, when overseeing every edit that an LLM has modified, or I'd assume that there are in-built methods of garbage collection, that another LLM evaluating a hash function partly identifying a block of tokenized internal code would be responsible for.

8 days ago

morkalork

Is the public Gemini code-gen LLM trained on their internal repo? I wonder if one could get it to cough up proprietary code with the right prompt.

8 days ago

p1esk

I’m curious if Microsoft lets OpenAI train on GH private repos.

8 days ago

happyopossum

> Is the public gemini code gen LLM trained on their internal repo?

Nope

8 days ago

Taylor_OD

If we are talking about the boilerplate code and autofill syntax code that copilot or any other "AI" will offer me when I start typing... Then sure. Sounds about right.

The other 75% is the stuff you actually have to think about.

This feels like saying linters impact x0% of code. This just feels like an extension of that.

9 days ago

creativenolo

It probably does. But an amazing number of commenters think they are prompting, then copy & pasting, and hoping for the best.

8 days ago

Kalabasa

Yep, a lot of headline readers here.

It's just a very advanced autocomplete, completely integrated into the internal codebase and IDE. You can read this on the research blog (maybe if everyone just read the blog).

e.g.

I start typing `var notificationManager`

It would suggest `= (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);`

If you've done Android then you know how much boilerplate there is to suggest.

I press Ctrl+Enter or something to accept the suggestion.

Voila, more than 50% of that code was written by AI.

> blindly committing AI code

Even before AI, no one blindly accepts autocomplete.

A lot of headline-readers seem to imagine some sort of semi-autonomous or prompt based code generation that writes whole blocks of code to then be blindly accepted by engineers.

8 days ago

skydhash

It's been a while since I've done Android, but I'm sure that this variable should be a property and be set as part of the lifecycle. And while Android (and any big project) is full of boilerplate, each line is subtly different or it would already have been abstracted into some base class. And even then, the code completion is already so good in Android Studio that you would have to be a complete junior (in which case you wouldn't know whether the AI suggestion is good) to complain that writing code is slow. Most time spent is designing code, fixing subtle bugs, and refactoring to clean up the code.

8 days ago

esjeon

> The other 75% is the stuff you actually have to think about.

I’m pretty sure the actual ratio is much lower than that. In other words, LLMs are currently not good enough to remove the majority of chores, even with the state of the art model trained on highly curated dataset.

8 days ago

Taylor_OD

Have you used Copilot with VS Code? It's not perfect all the time but its autocomplete is right a significant amount of time.

4 days ago

imaginebit

I think he's trying to promote AI; somehow it raises questions about their code quality among some

9 days ago

dietr1ch

I think it just shows how much noise there is in coding. Code gets reviewed anyway (although review quality was going down rapidly the more PMs were added to the team).

Most of the code must be what could be snippets (opening files and handling errors with absl::, and moving data from proto to proto). One thing that doesn't help here is that when writing for many engineers on different teams to read, spelling out simple code instead of depending on too many abstractions seems to be preferred by most teams.

I guess that LLMs do provide smarter snippets that I don't need to fill out in detail, and when it understands types and whether things compile it gets quite good and "smart" when it comes to writing down boilerplate.

9 days ago

ryoshu

Spoken like an MBA who counts lines of code.

9 days ago

pfannkuchen

It’s replaced the 25% previously copy pasted from stack overflow.

9 days ago

rkagerer

This may have been intended as a joke, but it's the only explanation that reconciles the quote for me.

8 days ago

brainwad

The split is roughly 25% AI, 25% typed, 50% pasted.

9 days ago

ttul

I wanted a new feature in our customer support console and the dev lead suggested I write a JIRA. I’m the CEO, so this is not my usual thing (and probably should not be). I told Claude what I wanted and pasted in a .js file from the existing project so that it would get a sense of the context. It cranked out a fully functional React component that actually looks quite nice too. Two new API calls were needed, but Claude helpfully told me that. So I pasted the code sample and a screenshot of the HTML output into the JIRA and then got Claude to write me the rest of the JIRA as well.

Everyone knows this was “made by AI” because there’s no way in hell I would ever have the time. These models might not be able to sit there and build an entire project from scratch yet, but if what you need is some help adding the next control panel page, Claude’s got your back on that.

8 days ago

simianparrot

You’re also the CEO so chances are the people looking at that ticket aren’t going to tell you the absolute mess the AI snippet actually is and how pointless it was to include it instead of a simple succinct sentence explaining the requirements.

If you’re not a developer chances are very high the code it produces will look passable but is actually worthless — or worse, it’s misleading and now a dev has to spend more time deciphering the task.

8 days ago

ttul

LoL, I really appreciate this comment. My team is very frank with me about code quality and they said Claude’s work looked pretty good — this time. But I’ll take your recommendation to heart for next time.

7 days ago

JonChesterfield

> Everyone knows this was “made by AI” because there’s no way in hell I would ever have the time.

Doubtful. A decent fraction of the people reading it will guess that you've wasted your time writing incoherent nonsense in the jira. Engineers don't usually have much insight into what the C suite are doing. It would be a prudent move to spend the couple of seconds to write "something like this AI sketch:" before the copy&paste.

8 days ago

gloflo

> Everyone knows this was “made by AI” because ...

They should know because you told them so.

Having to decipher weird code only to discover it was not written by a human is not nice.

8 days ago

zac23or

> dev lead suggested I write a JIRA. I’m the CEO, so this is not my usual thing (and probably should not be)

Fascinating point of view.

8 days ago

nosbo

I don't write code as I'm a sysadmin. Mostly just scripts. But is this like saying intellisense writes 25% of my code? Because I use autocomplete to shortcut stuff or to create a for loop to fill with things I want to do.

9 days ago

n_ary

You just made it less attractive to the target corps who are supposed to buy this product from Google. Saying "IntelliSense" means corps already have licenses for various of these, and some are even mostly free. Saying "AI generates 25% of our code" sounds more attractive to corps, because it feels new and novel, and you can imagine laying off 25% of the personnel to justify buying this product from Google.

When someone who uses a product says it, there is a 50% chance of it being true, but when someone far away from the user says it, it is 100% product promotion and a setup for trust-building ahead of a future sale.

9 days ago

coldpie

Looks like it's an impressive autocomplete feature, yeah. Check out the video about halfway down here: https://research.google/blog/ai-in-software-engineering-at-g... (linked from other comment https://news.ycombinator.com/item?id=41992028 )

Not what I thought when I heard "AI coding", but seems pretty neat.

9 days ago

stephenr

> I don't write code as I'm a sysadmin. Mostly just scripts.

.... so what do you put in your scripts if not code?

8 days ago

nosbo

Don't disagree. But I think it's pretty accepted that sysadminy scripts and full blown applications are different. Just don't want to give the wrong impression that I know what I'm talking about I guess.

7 days ago

bongodongobob

The colloquial difference is a few lines, maybe a dozen or two, for maintenance and one off stuff, not a full blown application.

8 days ago

0xCAP

People overestimate faang. There are many talents working there, sure, but a lot of garbage gets pumped into their codebases as well.

9 days ago

mattgreenrocks

Devs who pride themselves on their capacity for rational thought seem to forget that regression to the mean applies everywhere...even to the places that they aspire to.

8 days ago

fuzzfactor

>a lot of garbage gets pumped into their codebases

I would imagine it always has.

>Google CEO says more than a quarter of the company's new code is created by AI

It may very well be starting to become apparent anyway :\

8 days ago

summerlight

In Google, there is a process called "Large Scale Change" which is primarily meant for trivial/safe but extremely tedious code changes that potentially span the entire monorepo: foundational API changes, trivial optimizations, code style, etc. This is perfectly suitable for LLM-driven code changes (in fact I'm seeing more and more LLM-generated LSCs), and I guess a large fraction of the mentioned "AI generated code" can actually be attributed to this.

8 days ago

bubaumba

Yeah, but the main problem is the quality. With an algorithm a bug can be fixed; with an LLM it's more complicated. In practice they make some mistakes consistently, and in some cases cannot recover even with assistance. (Don't take me wrong, I'm very happy with the results most of the time.)

8 days ago

afro88

You just fix the mistakes and keep moving. It's like autocomplete where you still need to fill in the blanks or select a different completion.

8 days ago

saagarjha

Spotting and fixing mistakes in a LSC is no small feat.

8 days ago

drunken_thor

A company that used to be the pinnacle of software development is now just generating code in order to sell their big data models. Horrifying. Devastating.

8 days ago

motoxpro

People talk about how AI is bad at generating non-trivial code, but why are people using it to generate non-trivial code?

25% of coding is just the most basic boilerplate. I think of AI not as a thinking machine but as a 1000 WPM boilerplate typer.

If it is hallucinating, you're trying to make it do stuff that is too complex.

8 days ago

ghosty141

But for this boilerplate, creating a few code snippets generally works better. Especially if things change, you don't have to retrain your model.

That's my main problem: for trivial things it works but isn't much better than conventional tools; for hard things it just produces incorrect code, such that writing it from scratch barely makes a difference

8 days ago

motoxpro

I think that's a great analogy.

What would it look like if I could have 300-500 snippets instead of 30? Those 300 are things that I do all over my codebase, e.g. the same basic where-query but in the context of whatever function I am in, a click handler with the correct types for that purpose, etc.

There is no way I can have enough hotkeys or memorize that much, and I truly can't type faster than I can hit tab.

I don't need it to think for me. Most coding (front-end/back-end web) involves typing super basic stuff, not writing complex algorithms.

This is where the 10-20% speed-up comes in. On average I am just typing 20% faster by hitting tab.

8 days ago

globular-toast

Were people seriously writing this boilerplate by hand up until this point? I started using snippets and stuff more than 15 years ago!

8 days ago

ausbah

I would be way more impressed if LLMs could do code compression. More code == more things that can break, and when LLMs can generate boatloads of it with a click you can imagine what might happen

9 days ago

Scene_Cast2

This actually sparked an idea for me. Could code complexity be measured as the cumulative entropy of LLM token predictions run over a codebase? Notably, verbose boilerplate would be pretty low entropy, and straightforward code should be decently low as well.
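
Something like this rough sketch, say (purely illustrative; GPT-2 via the Hugging Face transformers library is only a stand-in scoring model):

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    def code_entropy_bits(source: str) -> float:
        # Total information content the model assigns to the file, in bits:
        # boilerplate it predicts easily scores low, surprising code scores high.
        tok = GPT2TokenizerFast.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
        enc = tok(source, return_tensors="pt", truncation=True, max_length=1024)
        with torch.no_grad():
            # Passing labels makes the model return mean cross-entropy in nats/token.
            loss = model(**enc, labels=enc["input_ids"]).loss.item()
        n_predicted = enc["input_ids"].shape[1] - 1
        return loss * n_predicted / math.log(2)

Dividing by line count would give bits per line, which is closer to a "density" measure than raw size.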

9 days ago

jeffparsons

Not quite, I think. Some kinds of redundancy are good, and some are bad. Good redundancy tends to reduce mistakes rather than introduce them. E.g. there's lots of redundancy in natural languages, and it helps resolve ambiguity and fill in blanks or corruption if you didn't hear something properly. Similarly, a lot of "entropy" in code could be reduced by shortening names, deleting types, etc., but all those things were helping to clarify intent to other humans, thereby reducing mistakes. But some is copy+paste of rules that should be enforced in one place. Teaching a computer to understand the difference is... hard.

Although, if we were to ignore all this for a second, you could also make similar estimates with, e.g., gzip: the higher the compression ratio attained, the more "verbose"/"fluffy" the code is.
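
For instance, a throwaway Python sketch of that gzip estimate (illustrative only):

    import gzip

    def compression_ratio(source: str) -> float:
        # Higher ratio = more redundancy, i.e. "fluffier", boilerplate-heavy text.
        raw = source.encode("utf-8")
        return len(raw) / len(gzip.compress(raw))

    print(compression_ratio("self.assertEqual(x, 1)\n" * 50))       # repetitive -> high ratio
    print(compression_ratio("q = [p for p in primes if n % p]\n"))  # short, dense -> low ratio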

Fun tangent: there are a lot of researchers who believe that compression and intelligence are equivalent or at least very tightly linked.

9 days ago

8note

Interpreting this comment, it would predict low complexity for code copied unnecessarily.

I'm not sure though. If it's copied a bunch of times, and it actually doesn't matter because each usecase of the copying is linearly independent, does it matter that it was copied?

Over time, you'd still see copies being changed by themselves show up as increased entropy

9 days ago

david-gpu

> Could code complexity be measured as cumulative entropy as measured by running LLM token predictions on a codebase? Notably, verbose boilerplate would be pretty low entropy, and straightforward code should be decently low as well.

WinRAR can do that for you quite effectively.

9 days ago

malfist

Code complexity can already be measured deterministically with cyclomatic complexity. No need to throw AI fuzzy logic at this. Especially when they're bad at math.

9 days ago

contravariant

There's nothing fuzzy about letting an LLM determine the probability of a particular piece of text.

In fact it's the one thing they are explicitly designed to do, the rest is more or less a side-effect.

8 days ago

ks2048

I agree. It seems like counting lines of generated code is like counting bytes/instructions of compiled code - who cares? If “code” becomes prompts, then AI should lead to much smaller code than before.

I’m aware that the difference is that AI-generated code can be read and modified by humans. But that quantity is bad because humans have to understand it to read or modify it.

9 days ago

TZubiri

What's that line about accounting for lines of code on the wrong side of the balance sheet?

9 days ago

latexr

> If “code” becomes prompts, then AI should lead to much smaller code than before.

What’s the point of shorter code if you can’t trust it to do what it’s supposed to?

I’ll take 20 lines of code that do what they should consistently over 1 line that may or may not do the task depending on the direction of the wind.

9 days ago

[deleted]
9 days ago

AlexandrB

Exactly this. Code is a liability, if you can do the same thing with less code you're often better off.

9 days ago

EasyMark

Not if it’s already stable and has been running for years. Legacy doesn’t necessarily mean “need replacement because of technical debt”. I’ve seen lots of people want to replace code that has been running basically bug free for years because “there are better coding styles and practices now”

9 days ago

8note

How would it know which edge cases are being useful and which ones aren't?

I understand more code as being more edge cases

9 days ago

wvenable

More code could just be useless code that no longer serves any purpose but still looks reasonable to the naked eye. An LLM can certainly figure out and suggest maybe some conditional is impossible given the rest of the code.

It can also suggest alternatives, like using existing library functions for things that might have been coded manually.

9 days ago

ekwav

Or just refactor to use early returns

8 days ago

asah

meh - the LLM code I'm seeing isn't particularly more verbose. And as others have said, if you want tighter code, just add that to the prompt.

fun story: today I had an LLM write me a non-trivial perl one-liner. It tried to be verbose but I insisted and it gave me one tight line.

9 days ago

randomNumber7

I cannot imagine this to be true, cause imo current LLM's coding abilities are very limited. It definitely makes me more productive to use it as a tool, but I use it mainly for boilerplate and short examples (where I had to read some library documentation before).

Whenever the problem requires thinking, it horribly fails because it cannot reason (yet). So unless this is also true for google devs, I cannot see that 25% number.

9 days ago

Wheatman

My guess is that they counted each line of code made by an engineer using AI coding tools.

Besides, even google employees write a lot of boilerplate, especially android IIRC, not to mention simple but essential code, so AI can prevent carpal tunnel for the junior devs working on that.

8 days ago

zifpanachr23

Roughly only one quarter (assuming they are outputting similar amounts of code as non AI using engineers) of engineers actually using AI regularly for coding is a statistic that is actually believable to me based on my own experience. A lot of small teams have their "AI guy" who has drunk the kool aid, but it's not as widespread as HackerNews would make you think.

8 days ago

chrisjj

> My guess is that they counted each line of code made by an engineer using AI coding tools.

... and forgot to count the Delete presses.

8 days ago

jdefr89

80% or more of the code you write day to day is just grunt work. Boring code that has, for the most part, already been written in some form such that it was copied from Google or StackOverflow. AI is basically a shortcut to using that stuff..

8 days ago

d_burfoot

I'd be far more impressed if the CEO said "The AI deleted a quarter of our company's code".

8 days ago

zh3

Yes, like the old story about why not to measure productivity by LoC generated.

https://www.folklore.org/Negative_2000_Lines_Of_Code.html

8 days ago

avsteele

Everyone here is arguing about the average AI code quality and I'm here just not believing the claim.

Is Google out there monitoring the IDE activity of every engineer, logging the amount of code created, by what, lines, characters, and how it was generated? Dubious.

8 days ago

Jyaif

> Is Google out there monitoring the IDE activity of every engineer, logging the amount of code created, by what, lines, characters, and how it was generated

A good chunk (*) of their code goes in a centralized repo, and is written via a centralized web IDE. So measuring everything you mentioned is not hard.

(*) Android, Chrome, and other similar projects are exceptions.

8 days ago

avsteele

How does this allow them to measure the % generated by AI tooling?

8 days ago

Jyaif

The IDE integrates the AI generator, like copilot.

Yes, they'll miss AI-generated code that is copy pasted, so they only have a lower bound of AI-generated code.

8 days ago

kunley

Very good point. How was the 25% measured?

8 days ago

xen0

I really do wonder who these engineers are, that the current 'AI' tools are able to write so much of their code.

Maybe my situation is unusual; I haven't written all that much code at Google lately, but what I do write is pretty tied to specific details of the program and the AI auto completion is just not that useful. Sometimes it auto completes a method signature correctly, but it never gets the body right (or even particularly close).

And its habit of routinely making up methods or fields on the objects I want to use is actively counterproductive.

8 days ago

sbochins

It’s probably code that was previously machine generated that they’re now calling “AI Generated”.

9 days ago

frank_nitti

That would make sense and be a good use case, essentially doing what OpenAPI generators do (or Yeoman generators of yore), but less deterministic I’d imagine. So optimistically I would guess it covers ground that isn’t already solved by mainstream tools.

For the example of generating an http app scaffolding from an openapi spec, it would probably account for at least 25% of the text in the generated source code. But I imagine this report would conveniently exclude the creation of the original source yaml driving the generator — I can’t imagine you’d save much typing (or mental overhead) trying to prompt a chatbot to design your api spec correctly before the codegen

9 days ago

arethuza

I'm waiting for some Google developer to say "More than a quarter of the CEO's statements are now created by AI"... ;-)

8 days ago

freilanzer

I'd say most CEO statements are quite useless already, as they're mostly corporate newspeak.

8 days ago

prmoustache

Aren't we just talking about auto completion?

In that case those 25% are probably the very same 25% that were automatically generated by LSP-based auto-completion.

8 days ago

alienchow

When setting up unit tests traditionally took more time and LOC than the logic itself, LLMs are particularly useful.

1. Paste in my actual code.

2. Prompt: Write unit tests, test tables. Include scenarios: A, B, C, D, E. Include all other scenarios I left out, isolate suggestions for review.

I used to spend the majority of the coding time writing unit tests and mocking test data, now it's more like 10%.
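
For concreteness, a minimal sketch of the kind of table-style test this workflow spits out (Python/pytest here; the parse_port function and the scenarios are made up for illustration, not from my actual code):

    import pytest

    def parse_port(value: str) -> int:
        """Parse a TCP port, rejecting anything outside 1-65535."""
        port = int(value)
        if not 0 < port < 65536:
            raise ValueError("port out of range")
        return port

    @pytest.mark.parametrize("value,expected", [
        ("80", 80),        # scenario A: common port
        ("1", 1),          # scenario B: lower bound
        ("65535", 65535),  # scenario C: upper bound
    ])
    def test_parse_port_valid(value, expected):
        assert parse_port(value) == expected

    @pytest.mark.parametrize("value", ["0", "70000", "not-a-number"])
    def test_parse_port_invalid(value):
        # scenarios D, E: out-of-range and non-numeric input
        with pytest.raises(ValueError):
            parse_port(value)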

8 days ago

arkh

> Paste in my actual code.

> Prompt: Write unit tests

TDD in shambles. What you'd like is:

> Give your specs to some AI

> Get a test suite generated with all edge cases accounted for

> Code

8 days ago

alienchow

Matter of preference. I've found TDD to be inflexible for my working style. But your suggestion would indeed work for a staunch TDD practitioner.

8 days ago

makerofthings

I keep trying to use these things but I always end up back in vim (in which I don't have any ai autocomplete set up.)

The AI is fine, but every time it makes a little mistake that I have to correct, it really breaks my flow. I might type a lot more boilerplate without it, but I get better flow, and overall that saves me time with fewer mistakes.

8 days ago

lysace

Github Copilot had an outage for me this morning. It was kind of shocking. I now believe this metric. :-)

I'll be looking into ways of running a local LLM for this purpose (code assistance in VS Code). I'm already really impressed with various quite large models running on my 32 GB Mac Studio M2 Max via Ollama. It feels like having a locally running chatgpt.

9 days ago

evoke4908

Ollama, docker and "open webui".

It immediately works out of the box and that's it. I've been using local LLMs on my laptop for a while, it's pretty nice.

The only thing you really need to worry about is VRAM. Make sure your GPU has enough memory to run your model and that's pretty much it.

Also "open webui" is the worst project name I've ever seen.

9 days ago

kulahan

I'm very happy to hear this; maybe it's finally time to buy a ton of ram for my PC! A local, private LLM would be great. I'd try talking to it about stuff I don't feel comfortable being on OpenAI's servers.

9 days ago

lysace

Getting lots of ram will let you run large models on the CPU, but it will be so slow.

The Apple Silicon Macs have this shared memory between CPU and GPU that lets the (relatively underpowered GPU, compared to a decent Nvidia GPU) run these models at decent speeds, compared with a CPU, when using llama.cpp.

This should all get dramatically better/faster/cheaper within a few years, I suspect. Capitalism will figure this one out.

9 days ago

kulahan

Interesting, so this is a Mac-specific solution? That's pretty cool.

I assume, then, that the primary goal would be to drop in the beefiest GPU possible when on windows/linux?

9 days ago

evilduck

There's nothing Mac specific about running LLMs locally, they just happen to be a convenient way to get a ton of VRAM in a single small power efficient package.

In Windows and Linux, yes you'll want at least 12GB of VRAM to have much of any utility but the beefiest consumer GPUs are still topping out at 24GB which is still pretty limiting.

8 days ago

lysace

With Windows/Linux I think the issue is that NVidia is artificially limiting the amount of onboard RAM (they want to sell those devices for 10x more to openai, etc) and that AMD for whatever reason can't get their shit together.

I'm sure that there are other much more knowledgeable people here though, on this topic.

9 days ago

rustcleaner

This is why the DMCA must be repealed.

8 days ago

rcarmo

There is a running gag among my friends using Google Chat (or whatever their corporate IM tool is now called) that this explains a lot of what they’re experiencing while using it…

9 days ago

tdeck

I didn't know anyone outside Google actually used that...

9 days ago

skywhopper

All this means is that 25% of code at Google is trivial boilerplate that would be better factored out of their process than handed to inefficient LLM tools. The more they are willing to leave the “grunt work” to an LLM, the less likely they are to ever eliminate it from the process.

8 days ago

mirkodrummer

Sometimes I wonder why we would want LLMs to spit out human-readable code. Wouldn’t a better future be one where LLMs generate highly efficient machine code, and we eventually read the “source map” only for debugging? Wasn’t source code just for humans?

8 days ago

sparcpile

You just reinvented the compiler.

8 days ago

palata

Because you can't trust what the LLM generates, so you have to read it. Of course the question then is whether you can trust your developer or not.

8 days ago

mirkodrummer

I’d reply that LLMs just aren’t capable of that. They’re okay with Python and JS simply because there’s a lot of training data out in the open. My point was that it seems like we’re delegating the future to tools that generate critical code in languages originally meant to be easy to learn. It doesn’t make sense.

8 days ago

mattxxx

I think they spit out human-readable code because they've been trained on human authors.

But you make an interesting point: eventually AI will be writing code for other AIs and machines, and human verification can be an afterthought.

8 days ago

standardUser

I use it all the time for work. Not much for actual code that goes into production, but a lot for "tell me what this does" or "given x, how do I do y". It speeds me up a ton. I'll also have it do code review when I'm uncertain about something, asking if there are any bugs or inefficiencies in a given chunk of code. I've actually found it to be more reliable about code than more general topics. Though I'm using it in a fairly specific way with code, versus asking for deep information about history for example, where it frequently gets facts very wrong.

8 days ago

redbell

Wait a second—didn't Google warn its employees against using AI-generated code? (https://news.ycombinator.com/item?id=36399021). What has changed?! Has Gemini now surpassed Bard in capabilities? Did they manage to resolve the copyright issues? Or maybe they've noticed a boost in productivity? I'm not sure, but let's see if other big tech companies will follow this path.

8 days ago

KeplerBoy

Different audiences.

You tell investors that AI is freaking magic and going to usher in an age of savings and productivity gains.

You tell your developers that it's a neat autocomplete they should use carefully.

8 days ago

ken47

Without context, not very meaningful. Does this simply measure lines of code? Characters written? Is it “oversuggesting” code that it shouldn’t be confident in? Does this code make it into production, or is a large percentage of it fixed by humans at great cost?

Google, and really the whole financial machine, has a vested interest in playing up the potential of AI. Unfortunate that it isn’t being given time to grow organically.

4 days ago

SavageBeast

Google needs to bolster their AI story and this is good click bait. I'm not buying it personally.

9 days ago

hggigg

I reckon he’s talking bollocks. Same as IBM was when it was about to disguise layoffs as AI uplift and actually just shovelled the existing workload on to other people.

9 days ago

submeta

Pandora‘s box has been opened.

Some say „this is mere tab completion“, some say „it won’t replace the senior engineer.“

I can remember how many people fiercely argued 2 years ago that GenAI and Copilot produce garbage. But here we are: these systems improve the workflow of creating / editing code enormously. You seniors might not be affected, but there are endlessly many scenarios where it replaces the junior who’d write code to transform data, write one-off scripts, or even write boilerplate, test code and what not.

And this is only after a short time. I cannot even imagine what we’ll have ten years from now, when we’ll probably have much larger context windows and the system can „understand“ the whole code base, not just parts.

I am sorry for low level engineering jobs, but I am super excited as well.

With GenAI I have been writing super complex Elisp code to automate workflows in Emacs, or VBA scripts in Excel, or Bash scripts I wouldn’t have otherwise been able to write, or JavaScript, or quickly writing Python code to solve very tricky problems (and I am very proficient in Python), or even React code for web apps for my personal use.

The future looks exciting to me.

8 days ago

Capricorn2481

> I can remember how many fiercely argued 2 years ago that GenAI and Copilot are producing garbage. But here we are: These systems improve the workflow of creating / editing code enormously

This is the disconnect. I, along with others, haven't seen this yet. I'm begging to see it because I'd love to automate my work away, but I can't. This comment comes off as hand-wavy to me because it says "here we are" as if Google saying their AI works is evidence itself and not a statement that requires evidence.

7 days ago

gmm1990

I don’t fully understand the workflow where you hand boilerplate code off to a junior. Wouldn’t the communication overhead be higher than writing it yourself? Certainly LLMs have valid uses, but I see them improving junior productivity more than senior productivity.

8 days ago

[deleted]
7 days ago

LudwigNagasena

How much of that generated code is `if err != nil { return err }`?

9 days ago

yearolinuxdsktp

Of course, when so much of it is written in verbose-as-fuck languages like Java and Go, you’d be stupid not to let computers generate large chunks of it. It’s sad that we as humans stopped trying to build better coding languages. At least Java is slowly making progress; maybe in another 10 years it will finally become a high-level language. Go never tried to be one. Are you surprised you need AI to tab-complete your boilerplate?!

Financial incentives at large companies are not aligned with low volumes of code. There are no rewards for less code. People get rewarded for another bullshit framework to slap on their resume. Box me in, no, cube me in to a morass of a thick ingress layer, that uses 1/8th of my CPU.

8 days ago

thelittleone

I understand CEOs need to promote their companies, but it's notable that Google - arguably the world's leading information technology company - fell behind in AI development under Pichai's leadership. Now he's touting Google's internal capabilities, yet Gemini is being outperformed by relative newcomers like Anthropic and OpenAI.

His position seems secure despite these missteps, which highlights an interesting double standard: there appears to be far more tolerance for strategic failures at the CEO level compared to the rigorous performance standards expected of engineering staff.

8 days ago

jdefr89

To be fair, the paper that helped launch LLMs to a new level was from Google: “Attention Is All You Need”. Keras too… They fell behind when it comes to marketing AI, maybe…

8 days ago

holtkam2

Can we also see the stats for how much code used to come from StackOverflow? Probably 25%

8 days ago

tgtweak

I feel like, given my experience lately with all the API models currently available, this is only a fact if the models Google is using internally are SIGNIFICANTLY better than what is available publicly, even on closed models.

Claude 3.5-sonnet (latest) is barely able to stay coherent on 500 LOC files, and easily gets tripped up when there are several files in the same directory.

I have tried similarly with o1-preview and 4o, and gemini pro...

If google is using a 5M token context window LLM with 100k+ token-output trained on all the code that is not public... then I can believe this claim.

This just goes to show how critical of an issue this is that these models are behind closed doors.

8 days ago

nomel

> This just goes to show how critical of an issue this is that these models are behind closed doors.

How is competitive advantage, using in-house developed/funded tools, a critical issue? Every company has tools that only they have, that they pay significantly to develop, and use extensively. It can often be the primary thing that really differentiates companies who are all doing similar things.

8 days ago

[deleted]
8 days ago

mjhay

100% of Sundar Pichai could be replaced by an AI.

8 days ago

elzbardico

Well. When I developed in Java, I think that Eclipse did similar figures circa 2005.

8 days ago

sreitshamer

Software development isn't a code-production activity, it's a knowledge-acquisition activity. It involves refactoring and deleting code too. I guess the AI isn't helping with that?

8 days ago

syngrog66

> "and we continue to be laser-focused on building great products."

NO! False. I can confirm they are not. I've known of several major obvious unfixed bugs/flaws in Google apps for years, and in the last year or so especially there's been an explosion in the number of head-scratching, jaw-dropping fails and UX anti-patterns in their code. GMail, Search, Maps and Android are now riddled with them.

on Sundar Pichai's watch he's been devolving Google to be yet another Microsoft type in terms of quality, care and taste.

8 days ago

deterministic

Not impressed. I currently auto generate 90% or more of the code I need to implement business solutions. With no AI involved. Just high level declarations of intent auto translated to C++/Typescript/…

8 days ago

agomez314

I thought great engineers reduce the amount of new code in a codebase?

8 days ago

jeffbee

It's quite amusing to me because I am old enough to remember when Copilot emerged the HN mainthought was that it was the death sentence for big corps, the scrappy independent hacker was going to run circles around them. But here we see the predictable reality: an organization that is already in an elite league in terms of developer velocity gets more benefit from LLM code assistants than Joe Hacker. These technologies serve to entrench and empower those who are already enormously powerful.

8 days ago

twis

How much code was "written by" autocomplete before LLMs came along? From my experience, LLM integration is advanced autocomplete. 25% is believable, but misleading.

9 days ago

scottyah

My linux terminal tab-complete has written 50% of my code

9 days ago

blibble

this is the 2024 version of "25% of our code is now produced by outsourced resources"

8 days ago

arminiusreturns

I was a luddite about the generative LLMs at first, as a crusty sysadmin type. I came around and started experimenting. It's been a boon for me.

My conclusion is that we are at the first wave of a split between those who use LLMs to augment their abilities and knowledge, and those who delay. In cyberpunk terminology, it's aug-tech, not real AGI. (And the lesser one's coding abilities and the simpler the task, the more benefit; it's an accelerator.)

8 days ago

skatanski

I think at this moment, this sounds more like "quarter of the company's new code is created using stackoverflow and other forums. Many many people use all these tools to find information, as they did using stackoverflow a month ago, but now suddenly we can call it "created by AI". It'd be nice to have a distinction. I'm saying this, while being very excited about using LLMs as a developer.

8 days ago

sanj

Caveat: I formerly worked at Google.

What’s missing is that code being written by AI may have less of an impact than datasets that are developed or refined by AI. Consider examples like a utility function's coefficients, or the weights of a model.

As these are aggressively tuned using ML feedback, they'll influence far more systems than raw code.

8 days ago

nenadg

Internet random person (me) says more than 99% of Google's 25%+ code written by AI has already been written by humans.

8 days ago

jmartin2683

I’m gonna bet this is a lie.

8 days ago

freedomben

I don't think it's a lie, but I do think it's very misleading. With common languages probably 25% of code can be generated by an AI, but IME it's mostly just boilerplate or some pattern that largely just saves typing time, not programming/thinking time. In other words it's the 25% lowest hanging fruit, so thinking like "1/4 of programming is now done by AI" is misleading. It's probably more like 5 to 10 percent.
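
As a rough illustration (the weighting here is purely an assumption): if those AI-generated lines each take about a fifth of the effort of a line you actually have to think through, their share of the total work comes out to

    0.25 * 0.2 / (0.25 * 0.2 + 0.75 * 1.0) = 0.05 / 0.80 ≈ 6%

which lands in that same 5 to 10 percent ballpark.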

8 days ago

hsuduebc2

I believe it is absolutely suitable for generating controllers in Java Spring or connecting to a database and making a simple query, which from my experience as an ordinary enterprise developer in fintech is most of the job. Making these huge applications is a lot of repetitive work and integrations. Not work that usually requires some advanced logic.

8 days ago

baalimago

To me, programming assistants have two use cases:

1. Generate unit tests for modules which are already written to be tested

2. Generate documentation for interfaces

Both of these require quite deep knowledge of what to write; the assistant then simply documents and fills in the blanks using the context which has already been laid out.

8 days ago

agilob

So we're using LoC as a metric now?

8 days ago

piyuv

I wish Tim Cook would reply with “more than half of all iMessages are created with autocomplete”

8 days ago

Hamuko

How do Google's IP lawyers feel about a quarter of the company's code not being copyrightable?

8 days ago

sjs382

This was one of my first thoughts, too. In what ways can this contaminate their codebase? What if they use AI to add uncopyrightable code to GPL projects?

8 days ago

horns4lyfe

I’d bet at least a quarter of their code is class definitions, constructors, and all the other minutiae files required for modern software, so that makes sense. But people weren’t writing most of that before either; we’ve had autocomplete and code gen for a long time.

8 days ago

ThinkBeat

This is quite interesting to know.

I will be curious to see if it has any impact positive or negative over a couple of years.

Will the code be more secure since the AI does not make the mistakes humans do?

Or will the code, not well enough understood by the employees, expose exploits that would not be there?

Will it change average up time?

9 days ago

kunley

What makes you think that the current direction of AI development would lead to making fewer mistakes than humans do, as opposed to repeating the same mistakes plus hallucinating more?

8 days ago

Starlevel004

No wonder search barely works anymore

9 days ago

tabbott

Without a clear explanation of methodology, this is meaningless. My guess is this statistic is generated using misleading techniques like classifying "code changes generated by existing bulk/automated refactoring tools" as "AI generated".

8 days ago

mastazi

The auto-linter in my editor probably generates a similar percentage of the characters I commit.

8 days ago

davidclark

If I tab complete my function and variable symbols, does my lsp write 80%+ of my lines of code?

8 days ago

nine_zeros

Writing more code means more needs to be maintained and they are cleverly hiding that fact. Software is a lot more like complex plumbing than people want to admit:

More lines == more shit to maintain. Complex lines == the shit is unmanageable.

But wall street investors love simplistic narratives such as More X == More revenue. So here we are. Pretty clever marketing imo.

9 days ago

_spduchamp

I can ask AI to generate the same code multiple times, and get new variations on programming style each time, and get the occasional solution that is just not quite right but sort of works. Sounds like a recipe for a gloppy mushy mess of style salad.

9 days ago

hiptobecubic

I've had mixed results writing "normal" business logic in c++, but i gotta say, for SQL it's pretty incredible. Granted SQL has a lot of boilerplate and predictable structure, but it saves a ton of time honestly.

8 days ago

mjbale116

If you manage to convince software engineers that you are doing them a favour by employing them, then they will approach any workplace negotiation with a mindset that makes them grab the first number that gets thrown at them.

These statements are brilliant.

9 days ago

akira2501

These statements rely on an unchallenged monopoly position. This is not sustainable. These statements will hasten the collapse.

9 days ago

[deleted]
9 days ago

[deleted]
8 days ago

echoangle

Does protobuf count as AI now?

8 days ago

Terr_

My concern is that "frequently needed and immediately useful results" is strongly correlated to "this code should already be abstracted away into a library by now."

Search Copy-Paste as a Service is hiding a deeper issue.

8 days ago

fredgrott

Kind of useless stat given how much code a typical dev refactors....

8 days ago

zxilly

As a Go developer, Copilot writes 100% of my `if err != nil` for me.

8 days ago

Kiro

I find it interesting that the people who dismiss the utility of AI are being so aggressive, sarcastic and hateful about it. Why all the anger? Where's the curiosity?

8 days ago

oglop

No surprise. I give my career about 2 years before I’m useless.

9 days ago

k4rli

Seems just overhyped tech to push up stock prices. It was already claimed 2 years ago that half of the jobs would be taken by "AI" but barely any have and AI has barely improved since GPT3.5. Latest Anthropic is only slightly helpful for software development, mostly for unusual bug investigations and logs analysis, at least in my experience.

9 days ago

phi-go

They still need someone to write 75% of the code.

9 days ago

cebert

Did AI have to go thru several rounds of Leetcode interviews?

9 days ago

me551ah

AI has boosted my productivity but only marginally. Earlier I used to copy paste stuff from stackoverflow and now AI generates that for me.

8 days ago

hi_hi

> More than a quarter of new code created at Google is generated by AI, said CEO Sundar Pichai...

How do they know this? At face value, it sounds like a lot, but it only says "new code generated". Nothing about code making it into source control or production, or even which parts of Google's vast business units.

For all we know, this could be the result of some internal poll "Tell us if you've been using Goose recently" or some marketing analytics on the Goose "Generate" button.

It's a puff piece to put Google back in the limelight, and everyone is lapping it up.

8 days ago

ChrisArchitect

Related:

Alphabet ($GOOG) 2024 Q3 earnings release

https://news.ycombinator.com/item?id=41988811

9 days ago

wokkaflokka

No wonder their products are getting worse and worse...

8 days ago

tremorscript

Sounds about right and it explains a lot about the current quality of google products and google search. :-)

7 days ago

okokwhatever

People still don't understand that those who pay the bills are the ones claiming developers are less and less necessary. It doesn't matter how much we love our job and how much we care about quality; in the end, those who pay care more about reducing the workforce in favour of something potentially free or cheap. We are less needed, less cared for, and less seen as engineers. We are just a cost in the wrong column of QuickBooks. Get used to it.

8 days ago

teknopaul

I'd say the same. But 90% of my time isn't spent writing code. It's mostly time wasted on GitHub and k8s build issues.

8 days ago

gilfoyle

This is like saying more than a quarter of the code is from oss, examples and stackoverflow before LLMs.

8 days ago

silexia

Is this why Google search results are so bad now?

5 days ago

matt3210

NVIDIA CEO said there would be no more developers too and it totally wasn't a marketing thing.

8 days ago

meindnoch

I saw code on master which was parsing HTML with regex. The author was proud that this code was mostly generated by AI.

:)

8 days ago

nottorp

The protobuf boilerplate, right? :)

8 days ago

erlend_sh

Self-interested hyperbole aside, I think that’s a laughably low number for what is now effectively an ‘AI Company’. I’m sure >95% of Google employees use Google (well, at least until recent years).

If this stuff really works as well as these companies claim it does, wouldn’t their entire workforce excitedly be using these tools already?

8 days ago

flessner

"AI generated code" essentially means using Github Copilot or an alternative - these barely write a function without errors, nor are they even close to implementing a new feature autonomously.

I expect these tools to improve productivity for new-ish developers, however for anyone that is literate in a programming language the effect is marginal at best ("Copilot pause" etc.)

8 days ago

hollywood_court

Cursor and v0.dev write 95% of the code for myself and the two other devs on my team.

8 days ago

chabes

When Google announced their big layoffs, I noted the timing in relation to some big AI announcements. People here told me I was crazy for suggesting that corporations could replace employees with AI this early. Now the CEO is confirming that more than a quarter of new code is created by AI. Can’t really deny that reality anymore folks.

9 days ago

hbn

I'd suggest the bigger factor in those layoffs is the overhiring in the earlier covid years, when money was flowing and everyone was hiring to show off record growth; then none of those employees had any justification for being kept around and were just a money sink, so they fired them all.

Not to mention Elon publicly demonstrated losing 80% of staff when he took over twitter and - you can complain about his management all you want - as someone who's been using it the whole way through, from a technical POV their downtimes and software quality have not been any worse and they're shipping features faster. A lot of software companies are overstaffed, especially Google, which has spent years paying people to make projects just to get a PO promoted, then letting the projects rot and die to be replaced by something else. That's a lot of useless work being done.

9 days ago

akira2501

> Can’t really deny that reality anymore folks.

You have to establish that the CEO is actually aware of the reality and is interested in accurately conveying that to you. As far as I can tell there is absolutely no reason to believe any part of this.

9 days ago

paradox242

When leaders without the requisite technical knowledge are making decisions then the question of whether AI is capable of replacing human workers is orthogonal to the question of whether human workers will be replaced by AI.

9 days ago

robohoe

Who claims that he is speaking the truth and not some marketing jargon?

9 days ago

randomNumber7

People who have replaced 25% of their brain with ai.

8 days ago

foobarian

The real question is, what fraction of the company’s code is deleted by AI :-)

8 days ago

bryanrasmussen

Public says more than a quarter of Google's search results are absolute crap.

8 days ago

Timber-6539

All this talk means nothing until Google gives AI permission to push to prod.

8 days ago

haccount

No wonder Gemini is a garbage fire if they had ChatGPT write the code for it.

8 days ago

1GZ0

I wonder how much of that code is boilerplate vs. actual functionality.

8 days ago

marstall

This maps to recent headlines about AI improving programmer productivity 20-30%,

which puts it in line with previous code-generation technologies, I would imagine. I wonder which of these increased productivity the most?

- Assembly Language

- early Compilers

- databases

- graphics frameworks

- ui frameworks (windows)

- web apps

- code generators (rails scaffolding)

- genAI

9 days ago

akira2501

Early Compilers. By a wide margin. They are the enabling factor for everything that comes below it. It's what allows you to share library interfaces and actually use them in a consistent manner and across multiple architectures. It entirely changed the shape of software development.

The gap between "high level assembly" and "compiled language" is about as large as it gets.

9 days ago

soperj

The real question is how many lines of code was it responsible for removing.

9 days ago

defactor

Try any AI tool to write basic factor code. It hallucinates most of the time.

8 days ago

[deleted]
9 days ago

otabdeveloper4

That explains a lot about Google's so-called "quality".

9 days ago

zxvkhkxvdvbdxz

I feel this made me lose the respect I still had for Google.

8 days ago

niobe

This explains a LOT about Google's quality decline.

8 days ago

socrateslee

Or, put another way, most Google engineers are using tools like Copilot, and they use Copilot just like everyone else does.

7 days ago

mgaunard

AI is pretty good at helping you manage a messy large codebase and making it even more messy and verbose.

Is that a good thing though? We should work on making code small and easy to manage without AI tools.

8 days ago

fortylove

Is this why we finally got darkmode in gcal?

8 days ago

rockskon

No shit a quarter of Google's new code is created by AI. How else do you explain why Google search has been so aggressively awful for the past 5~ years?

Seriously. The penchant for outright ignoring user search terms, relentlessly forcing irrelevant or just plain wrong information on users, and the obnoxious UI changes on YouTube! If I'm watching a video on full screen I have explicitly made it clear that I want YouTube to only show me video! STOP BRINGING UP THE FUCKING VIDEO DESCRIPTION TO TAKE UP HALF THE SCREEN IF I TRY TO BRIEFLY SWIPE TO VIEW THE TIME OR READ A MESSAGE.

I have such deep-seated contempt for AI and its products for just how much worse it makes people's lives.

8 days ago

remram

Yeah that might explain some of the loss of quality. Google apps and sites used to be solid, now they are full of not-breaking-but-annoying bugs like race conditions (don't press buttons too fast), display glitches, awful recommendations, and other usability problems.

Then again, their devices are also coming out with known fatal design flaws, like not being able to make phone calls, or the screen going black permanently.

8 days ago

dickersnoodle

That explains a lot, actually.

7 days ago

nektro

Google used to be respected, a place so highly sought after that engineers who worked there were revered like wizards. oh how they've fallen :(

8 days ago

[deleted]
8 days ago

fmardini

Proto-plumbing is very LLM amenable

8 days ago

ThinkBeat

So um. With making this public statement, can we expect that 25% of "the bottom" coders at Google will soon be granted a lot more time and ability to spend time with their loved ones?

9 days ago

shane_kerns

It's no wonder that their search absolutely sucks now. Duckduckgo is so much better in comparison now.

8 days ago

marstall

first thought is that much of that 25% is test code for non-ai-gen code...

9 days ago

evbogue

I'd be turning off the autocomplete in my IDE if I was at Google. Seems to double as a keylogger.

9 days ago

[deleted]
9 days ago

octacat

It is visible...

7 days ago

marviel

> 80% at Reasonote

8 days ago

tylerchilds

as a consumer, i never could have guessed

8 days ago

annlee2019

google CEO doesn't write code

7 days ago

anacrolix

Puts on Google

8 days ago

sheeshkebab

and it shows… the Google codebases I see in the wild are the worst: a jumbled mess of hard-to-read code.

8 days ago

psunavy03

And yet the 2024 State of DevOps report THAT GOOGLE PRODUCES has a butt-ton of caveats about the effectiveness of GenAI . . .

8 days ago

AI_beffr

I like how people say that AI can only write "trivial" code well or without mistakes. But what about from the point of view of the AI? Writing "trivial" code is probably almost exactly as much of a challenge as writing the most complex code a human could ever write. The scales are not the same. Don't allow yourself to feel so safe.

8 days ago

Capricorn2481

You think that when people say AI can only write trivial code, they are writing from the perspective of the AI, where trivial is actually impressive? That's backward-ass logic.

7 days ago

AI_beffr

No, I'm saying they are anthropomorphizing the capabilities of these AIs, which disguises how advanced they really are.

6 days ago

Capricorn2481

Not really what you said. In any case, people aren't doing that; they are just pointing out that AI writes poor code beyond very basic things. That's not anthropomorphizing.

6 days ago

AI_beffr

It is exactly what I said, and they are doing it.

6 days ago

jdmoreira

I would prefer if he was more competent and made the stock price go up. I guess grifters are going to grift

8 days ago

hodder

The market would be even more shocked to learn that another 30% is pasted in from Stack Overflow!

8 days ago

AmazingTurtle

Yeah, go ahead and lay off another 25% of development staff and see how well AI coders perform.:))

8 days ago

sigmonsays

imho code that is written by AI is code that is not worth having.

8 days ago

fennecbutt

That explains a lot.

8 days ago

ajkjk

Well yeah he sells AI and wants you to believe in it so the stock price stays good.

8 days ago

est

Now maintain quarter of your old code base with AI, don't shut down services randomly.

8 days ago

skrebbel

In my experience, AIs can generate perfectly good code for relatively easy things, the kind you might as well copy&paste from stackoverflow, and they'll very confidently generate subtly wrong code for anything that's non-trivial for an experienced programmer to write. How do people deal with this? I simply don't understand the value proposition. Does Google now have 25% subtly wrong code? Or do they have 25% trivial code? Or do all their engineers babysit the AI and bugfix the subtly wrong code? Or are all their engineers so junior that an AI is such a substantial help?

Like, isn't this announcement a terrible indictment of how inexperienced their engineers are, or how trivial the problems they solve are, or both?

9 days ago

toasteros

> the kind you might as well copy&paste from stackoverflow

This bothers me. I completely understand the conversational aspect - "what approach might work for this?", "how could we reduce the crud in this function?" - it worked a lot for me last year when I tried learning C.

But the vast majority of AI use that I see is...not that. It's just glorified, very expensive search. We are willing to burn far, far more fuel than necessary because we've decided we can't be bothered with traditional search.

A lot of enterprise software is poorly cobbled together using stackoverflow gathered code as it is. It's part of the reason why MS Teams makes your laptop run so hot. We've decided that power-inefficient software is the best approach. Now we want to amplify that effect by burning more fuel to get the same answers, but from an LLM.

It's frustrating. It should be snowing where I am now, but it's not. Because we want to frivolously chase false convenience and burn gallons and gallons of fuel to do it. LLM usage is a part of that.

9 days ago

jcgrillo

What I can't wrap my head around is that making good, efficient software doesn't (by and large) take significantly longer than making bloated, inefficient enterprise spaghetti. The problem is finding people to do it with who care enough to think rigorously about what they're going to do before they start doing it. There's this bizarre misconception popular among bigtech managers that there's some tunable tradeoff between quality and development speed. But it doesn't actually work that way at all. I can't even count anymore how many times I've had to explain how taking this or that locally optimal shortcut will make it take longer overall to complete the project.

In other words, it's a skill issue. LLMs can only make this worse. Hiring unskilled programmers and giving them a machine for generating garbage isn't the way. Instead, train them, and reject low quality work.

8 days ago

aleph_minus_one

> What I can't wrap my head around is that making good, efficient software doesn't (by and large) take significantly longer than making bloated, inefficient enterprise spaghetti. The problem is finding people to do it with who care enough to think rigorously about what they're going to do before they start doing it.

I don't think finding such programmers is really difficult. What is difficult is finding such people if you expect them to be docile to incompetent managers and other incompetent people involved in the project who, for example, got their position not by merit and competence, but by playing political games.

8 days ago

giantg2

"What I can't wrap my head around is that making good, efficient software doesn't (by and large) take significantly longer than making bloated, inefficient enterprise spaghetti."

In my opinion the reason we get enterprise spaghetti is largely due to requirement issues and scope creep. It's nearly impossible to create a streamlined system without knowing what it should look like. And once the system gets to a certain size, it's impossible to get business buy-in to rearchitect or refactor to the degree that is necessary. Plus the full requirements are usually poorly documented and long forgotten by that time.

8 days ago

jcgrillo

When scopes creep and requirements change, simply refactor. Where is it written in The Law that you have to accrue technical debt? EDIT: I'm gonna double down on this one. The fact that your organization thinks they can demand of you that you can magically weathervane your codebase to their changeable whims is evidence that you have failed to realistically communicate to them what is actually possible to do well. The fact that they think it's a move you can make to creep the scope, or change the requirements, is the problem. Every time that happens it should be studied within the organization as a major, costly failure--like an outage or similar.

> it's impossible to get business buy-in to rearchitect or refactor to the degree that is necessary

That's a choice. There are some other options:

- Simply don't get business buy-in. Do without. Form a terrorist cell within your organization. You'll likely outpace them. Or you'll get fired, which means you'll get severance, unemployment, a vacation, and the opportunity to apply to a job at a better company.

- Fight viciously for engineering independence. You business people can do the businessing, but us engineers are going to do the engineering. We'll tell you how we'll do it, not the other way.

- Build companies around a culture of doing good, consistent work instead of taking expedient shortcuts. They're rare, but they exist!

8 days ago

aleph_minus_one

> Fight viciously for engineering independence.

Or simply find a position in an industry or department where you commonly have more independence. In my opinion this fight is not worth it - look for another position instead is typically easier.

8 days ago

llm_trw

>When scopes creep and requirements change, simply refactor.

Congratulations, you just refactored out a use case which was documented in a knowledge base which has been replaced by 3 newer ones since then, happens once every 18 months and makes the company go bankrupt if it isn't carried out promptly.

The type of junior devs who think that making code tidy is fixing the application are the type of dev who you don't let near the heart of the code base, and incidentally the type who are best replaced with code gen AI.

8 days ago

wpietri

Refactoring is improving the design of existing code. It shouldn't change behavior.

And regardless, the way you prevent loss of important functionality isn't by hoping people read docs that no longer exist. It's by writing coarse-grained tests that makes sure the software does the important things. If a programmer wants to change something that breaks a test like that, they go ask a product manager (or whatever you call yours) if that feature still matters.

And if nobody can say whether a feature still matters, the organization doesn't have a software problem, it has a serious management problem. Not all the coding techniques in the world can fix that.

8 days ago

jcgrillo

If you don't understand your systems well enough to comfortably refactor them, you're losing the war. I probably should have put "simply" in scare quotes, it isn't simple--and that's the point. Responding to unreasonable demands, like completely changing course at the 11th hour, shouldn't come at a low price.

8 days ago

galdosdi

It's a market for lemons.

Without redoing their work or finding a way to have deep trust (which is possible, but uncommon at a bigcorp) it's hard enough to tell who is earnest and who is faking it (or buying their own baloney) when it comes to propositions like "investing in this piece of tech debt will pay off big time"

As a result, if managers tend to believe such plans, bad ideas drive out good and you end up investing in a tech debt proposal that just wastes time. Burned managers therefore cope by undervaluing any such proposals and preferring the crappy car that at least you know is crappy over the car that allegedly has a brand new 0 mile motor on it but you have no way of distinguishing from a car with a rolled back odometer. They take the locally optimal path because it's the best they can do.

It's taken me 15 years of working in the field and thinking about this to figure it out.

The only way out is an organization where everyone is trusted and competent and is worthy of trust, which again, hard to do at most random bigcorps.

This is my current theory anyway. It's sad, but I think it kind of makes sense.

8 days ago

jcgrillo

Soviet vs NATO. The Soviet management style is micromanaging exactly how to do everything from the rear. The NATO style is delegating to the front line ranks.

Being good at the NATO style of management means focusing on the big picture--what, when, why--and leaving how to the people actually doing it.

8 days ago

wpietri

Agreed.

The way I explain this to managers is that software development is unlike most work. If I'm making widgets and I fuck up, that widget goes out the door never to be seen again. But in software, today's outputs are tomorrow's raw materials. You can trade quality for speed in the very short term at the cost of future productivity, so you're really trading speed for speed.

I should add, though, that one can do the rigorous thinking before or after the doing, and ideally one should do both. That was the key insight behind Martin Fowler's "Refactoring: Improving the Design of Existing Code". Think up front if you can, but the best designs are based on the most information, and there's a lot of information that is not available until later in a project. So you'll want to think as information comes in and adjust designs as you go.

That's something an LLM absolutely can't do, because it doesn't have access to that flow of information and it can't think about where the system should be going.

8 days ago

jcgrillo

> the best designs are based on the most information, and there's a lot of information that is not available until later in a project

This is an important point. I don't remember where I read it, but someone said something similar about taking a loss on your first few customers as an early stage startup--basically, the idea is you're buying information about how well or poorly your product meets a need.

Where it goes wrong is if you choose not to act on that information.

7 days ago

wpietri

For sure. Or, worse, choose to run a company in such a way that anybody making choices is insulated from that information.

7 days ago

c0balt

It's relatively easy to find programmers who can realize enterprise project X; it's hard to find programmers who care about X. Throwing an increased requirement like speed at it makes this worse, because it usually ends up burning out both ends of the equation.

8 days ago

jihadjihad

> The problem is finding people to do it with who care enough to think rigorously

> ...

> train them, and reject low quality work.

I agree very strongly with both of these points.

But I've observed a truth about each of them over the last decade-plus of building software.

1) very few people approach the field of software engineering with anything remotely resembling rigor, and

2) there is often little incentive to train juniors and reject subpar output (move fast and break things, etc.)

I don't know where this takes us as an industry? But I feel your comment on a deep level.

8 days ago

jcgrillo

> 1) very few people approach the field of software engineering with anything remotely resembling rigor

This is a huge problem. I don't know where it comes from, I think maybe sort of learned helplessness? Like, if systems are so complex that you don't believe a single person can understand it then why bother trying anyway? I think it's possible to inspire people to not accept not understanding. That motivation to figure out what's actually happening and how things actually work is the carrot. The stick is thorough, critical (but kind and fair) code--and, crucially, design--review, and demanding things be re-done when they're not up to par. I've been extremely lucky in my career to have had senior engineers apply both of these tools excellently in my general direction.

> 2) there is often little incentive to train juniors and reject subpar output (move fast and break things, etc.)

One problem is our current (well, for years now) corporate culture is this kind of gig-adjacent-economy where you're only expected to stick around for a few years at most and therefore in order to be worth your comp package you need to be productive on your first day. Companies even advertise this as a good thing "you'll push code to prod on your first day!" It reminds me of those scammy books from when I was a kid in the late 90s "Learn C In 10 Days!".

8 days ago

wpietri

> This is a huge problem. I don't know where it comes from

I think it's a bunch of things, but one legitimate issue is that software is stupidly complex these days. I had the advantage of starting when computers were pretty simple and have had a chance to grow along with it. (And my dad started when you could still lift up the hood and look at each bit. [1])

When I'm working with junior engineers I have a hard time even summing up how many layers lie beneath what they're working on. And so much of what they have to know is historically contingent. Just the other day I had to explain what LF and CR mean and how it relates to physical machinery that they probably won't see outside of a museum: https://sfba.social/@williampietri/113387049693365012

So I get how junior engineers struggle to develop a belief that the can sort it all out. Especially when so many people end up working on garbage code, where little sense is to be had. It's no wonder so many turn to cargo culting and other superstitious rituals.

[1] https://en.wikipedia.org/wiki/Magnetic-core_memory

8 days ago

steve_adams_86

I agree as well. These are actually things that bother me a lot about the industry. I’d love to write software that should run problem-free in 2035, but the reality is almost no one cares.

I’ve had the good fortune of getting to write some firmware that will likely work well for a long time to come, but I find most things being written on computers are written with (or very close to) the minimum care possible in order to get the product out. Clean up is intended but rarely occurs.

I think we’d see real benefits from doing a better job, but like many things, we fail to invest early and crave immediate gratification.

8 days ago

karolinepauls

> very few people approach the field of software engineering with anything remotely resembling rigor, and

I have this one opinion which I would not say at work:

In software development it's easy to feel smart because what you made "works" and you can show "effects".

- Does it wrap every failable condition in `except Exception`? Uhh, but look, it works.

- Does it define a class hierarchy for what should be a dictionary lookup? It works great tho!

- Does it create a cyclic graph of objects calling each other's methods to create more objects holding references to the objects that created them? And for what, to produce a flat dictionary of data at end of the day? But see, it works.

this is getting boring, maybe just skip past the list

- Does it stuff what should be local variables and parameters in self, creating a big stateful blob of an object where every attribute is optional and methods need to be called in the right order, otherwise you get an exception? Yes, but it works.

- Does it embed a browser engine? But it works!

The programmer, positively affirmed, continues spewing out crap, while the senior keep fighting fires to keep things running, while insulating the programmer from the taste of their own medicine.

But more generally, it's hard to expect people to learn how to solve problems simply if they're given gigantic OO languages with all the features and no apparent cost to any of them. People learn how to write classes and then never get good at writing code with a clear data flow.

Even very bright people can fall for this trap, because engineering isn't just about being smart but about using intelligence and experience to solve a problem while minmaxing correctly chosen properties. Those properties should generally be: dev time, complexity (state/flow), correctness, test coverage, ease of change, performance (anything else?). Anyway, "affirming one's opinions about how things should be done" isn't one of them.
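
To make the class-hierarchy-for-a-dictionary-lookup item above concrete, a minimal sketch (the names and rates are hypothetical):

    # "It works", but it's three classes standing in for a lookup table.
    class BaseRate:
        def rate(self) -> float:
            raise NotImplementedError

    class StandardRate(BaseRate):
        def rate(self) -> float:
            return 0.20

    class ReducedRate(BaseRate):
        def rate(self) -> float:
            return 0.05

    # The same behaviour with a clear data flow: one flat dictionary.
    RATES = {"standard": 0.20, "reduced": 0.05}

    def rate_for(kind: str) -> float:
        return RATES[kind]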

8 days ago

mos_basik

The whole one about the stateful blob of an object with all optional attributes got me real good. Been fighting that for years. But the dev that writes this produces code faster than me and understands parts of the system no one else does and doesn't speak great English, so it continues. And the company is still afloat. So who's right in the end? And does it matter?

8 days ago

karolinepauls

I don't know who's right but I know that it's the ergonomics of programming languages that make producing stateful blobs fast and easy that are in the wrong.

8 days ago

jcgrillo

You know it's a problem when you have to read a book of a couple hundred pages to learn how to hold it right ;)

7 days ago

A4ET8a8uTh0

<< Instead, train them, and reject low quality work.

Ahh, well, in order to save money, training is done via an online class with multiple choice questions, or, if your company is like mine and really committed to making sure that you know they take your training seriously, they put portions of a generic book on 'tech Z' in a PDF spread over DRM-ridden web pages.

As for code, that is reviewed, commented on, and rejected by LLMs as well. It used to be turtles. Now it truly is LLMs all the way down.

That said, in a sane world, this is what should be happening for a company that actually wants to get good results over time.

8 days ago

noisy_boy

> The problem is finding people to do it with who care enough to think rigorously about what they're going to do before they start doing it.

There is no incentive to do it. I worked that way, focused on quality and testing and none of my changes blew up in production. My manager opined that this approach is too slow and that it was ok to have minor breakages as long as they are fixed soon. When things break though, it's blame game all around. Loads of hypocrisy.

8 days ago

sethammons

"Slow is smooth and smooth is fast"

8 days ago

jcgrillo

It's true every single time.

8 days ago

chongli

we've decided we can't be bothered with traditional search

Traditional search (at least on the web) is dying. The entire edifice is drowning under a rapidly rising tide of spam and scam sites. No one, including Google, knows what to do about it so we're punting on the whole project and hoping AI will swoop in like deus ex machina and save the day.

9 days ago

photonthug

Maybe it is naive but I think search would probably work again if they could roll back code to 10 or 15 years ago and just make search engines look for text in webpages.

Google wasn’t crushed by spam, they decided to stop doing text search and build search bubbles that are user specific, location-specific, decided to surface pages that mention search terms in metadata instead of in text users might read, etc. Oh yeah, and about a decade before LLMs were actually usable, they started to sabotage simple substring searches and kind of force this more conversational interface. That’s when simple search terms stopped working very well, and you had to instead ask yourself “hmm how would a very old person or a small child phrase this question for a magic oracle”

This is how we get stuff like: Did you mean “when did Shakespeare die near my location”? If anyone at google cared more about quality than printing money, that thirsty gambit would at least be at the bottom of the page instead of the top.

8 days ago

hughesjj

I remember in like 5th grade rural PA schools learning about Boolean operators in search engines and falling in love with them. For context, they were presenting alta vista and yahoo kids search as the most popular with Google being a "simple but effective new search platform" we might want to check out.

By the time I graduated highschool you already couldn't trust that Boolean operators would be treated literally. By the time I graduated college, they basically didn't seem to do anything, at best a weak suggestion.

Nowadays quotes don't even seem to be consistently honored.

8 days ago

II2II

Even though I miss using boolean operators in search, I doubt that it was ever sustainable outside of specialized search engines. Very few people seem to think in those terms. Many of those who do would still have difficulty forming complex queries.

I suspect the real problem is that search engines ceased being search engines when they stopped taking things literally and started trying to interpret what people mean. Then they became some sort of poor man's AI. Now that we have LLMs, of course it is going to replace the poor excuse for search engines that exist today. We were heading down that road already, and it actually summarizes what is out there.

8 days ago

jordanb

People were learning. Just like with mice and menus, people are capable of learning new skills, and querying search engines was one of them. I remember when it was considered a really "n00b" thing to type a full question into a search engine.

Then Google decided to start enforcing that, because they had this idea that they would be able to divine your "intent" from a "natural question" rather than just matching documents including your search terms.

8 days ago

layer8

> just make search engines look for text in webpages.

Google’s verbatim search option roughly does that for me (plus an ad blocker that removes ads from the results page). I have it activated by default as a search shortcut.

(To activate it, one can add “tbs=li:1” as a query parameter to the Google search URL.)
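
For example, a verbatim-search shortcut URL might look something like the following (a sketch, assuming the standard google.com/search endpoint; %s is just the placeholder most browsers substitute for your query when you define a custom search engine):

  https://www.google.com/search?tbs=li:1&q=%s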

8 days ago

alex1138

To me the stupidest thing was the removal of things like + and -. You can say it's because of Google+, but annoyingly DuckDuckGo doesn't seem to honor them either. Kagi seems to, and I hope they don't follow the others down the road of stupid.

8 days ago

jcgrillo

> ?tbs=li:1

Thank you, this is almost life-alteringly good to know.

8 days ago

photonthug

Funny, I can’t even test this because I’d need to know another neat trick to get my browser to let me actually edit the URL.

Seems that Firefox on mobile allows editing the URL for most pages, but on Google search results pages the URL bar magically turns into a did-you-mean alternate-search selector where I can neither see nor edit a URL. Surprised but not surprised.

Sure, there’s a workaround for this too, somehow. But I don’t want to spend my life collecting and constantly updating a huge list of temporary hacks to fix things that others have intentionally broken.

8 days ago

layer8

You can select verbatim search manually on the Google results page under Search tools > All results > Verbatim. You can also have a bookmark with a dummy search activating it, so you can then type your search terms into the Google search field instead of into the address bar.

Yes, it’s annoying that you can’t set it as the default on Google search itself.

8 days ago

tru3_power

Wow what? Thanks!

8 days ago

CapeTheory

> Maybe it is naive but I think search would probably work again if they could roll back code to 10 or 15 years ago and just make search engines look for text in webpages.

Even more naive, but my personal preference: just ban all advertising. The fact that people will pay for ChatGPT implies people will also pay for good search if the free alternative goes away.

8 days ago

Atreiden

It's working for Kagi

8 days ago

masfuerte

Google results are not polluted with spam because Google doesn't know how to deal with it.

Google results are polluted with spam because it is more profitable for Google. This is a conscious decision they made five years ago.

9 days ago

chongli

> because it is more profitable for Google

Then why are DuckDuckGo results also (arguably even more so) polluted with spam/scam sites? I doubt DDG is making any profit from those sites since Google essentially owns the display ad business.

8 days ago

JohnDone

DDG is actually Bing. Search as a service.

8 days ago

djvuvtgcuehb

And Bing is Google.

8 days ago

redwall_hp

If you own the largest ad network that spam sites use and own the traffic firehose, pointing the hose at the spam sites and ensuring people spend more time clicking multiple results that point to ad-filled sites will make you more money.

Google not only has multiple monopolies, but also a cut-and-dried perverse incentive to produce lower-quality results, making the whole session longer instead of short and effective.

8 days ago

skissane

I personally think a big problem with search is that the major search engines try to be all things to all people, and they suffer as a result.

For example: a beginner developer is possibly better served by some SEO-heavy tutorial blog post, while an experienced developer would prefer results weighted towards the official docs, the project’s bug tracker and mailing list, etc. But since less technical and non-technical people vastly outnumber highly technical people, Google and Bing end up focusing on the needs of the former, at the cost of making search worse for the latter.

One positive about AI: if an AI is doing the search, it likely wants the more advanced material, not the more beginner-focused material. It can take the advanced material and simplify it for the benefit of less experienced users. It is (I suspect) less likely to make mistakes if you ask it to simplify advanced material than if you just give it beginner-oriented material in the first place. So if AI starts to replace humans as the main clients of search, that may reverse some of the pressure to “dumb it down”.

8 days ago

photonthug

> But since less technical and non-technical people vastly outnumber highly technical people, Google and Bing end up focusing on the needs of the former, at the cost of making search worse for the latter.

I mostly agree with your interesting comment, and I think your analysis basically jibes with my sibling comment.

But one thing I take issue with is the idea that this type of thing is a good-faith effort, because it’s more like a convenient excuse. Explaining substring search or even include/exclude operators to children and grandparents is actually easy. Setting a preference for tutorials vs. API docs would also be easy. But companies don’t really want user-directed behavior as much as they want to herd users toward preferred content with algorithms, then convince the user it was their idea, or at least the result of relatively static ranking processes.

The push towards more fuzzy semantic search and “related content” everywhere is not to cater to novice users but to blur the line between paid advertisement and organic user-directed discovery.

No need to give megacorp the benefit of the doubt on stuff like this, or make the underlying problems seem harder than they are. All platforms land in this place by convergent evolution wherein the driving forces are money and influence, not insurmountable technical difficulties or good intentions for usability.

8 days ago

consp

> For example: a beginner developer is possibly better served by some SEO-heavy tutorial blog post

Good luck finding those; you end up with SEO spam and clone-page spam. These days you have to look for unobvious, hidden phrasings that relate only to your exact problem to find what you are looking for.

I have the strong feeling search these days is back to the AltaVista era: you had to use trickery to find what you were looking for back then as well. Too bad + no longer works in Google due to their stupid naming of a dead product (no, literal search is not the same and is no replacement).

8 days ago

tru3_power

Yeah, but this is just the name of the game. How can you even stop SEO-style gamification at this point? I’m sure even LLMs are vulnerable to it / have been trained on SEO BS. At the end of the day it takes an informed user. Remember back in the day? “Don’t trust the internet”? I think that mindset will become the main school of thought once again. Which, tbh, I think may be a good thing.

8 days ago

skydhash

> Traditional search (at least on the web) is dying.

That's not my experience at all. While there are scammy sites, using search engines as an index instead of an oracle still yields useful results. It only requires learning the keywords, which you can do by reading the relevant materials.

8 days ago

chongli

How do you read the relevant materials if you haven’t found them yet? It’s a chicken-and-egg problem. If your goal is to become an expert in a subject but you’re currently a novice, search can’t help you if it only gives you terrible results until you “crack the code.”

4 days ago

rubyfan

AI will make the problem of low-quality, fake, fraudulent, and arbitrage content way worse. I highly doubt it will improve searching for quality content at all.

8 days ago

AnimalMuppet

But it can't save the day.

The problem with Google search is that it indexes all the web, and there's (as you say) a rising tide of scam and spam sites.

The problem with AI is that it scoops up all the web as training data, and there's a rising tide of scam and spam sites.

9 days ago

AtlasBarfed

There's no way the search AI will beat out the spamgen AI.

Tailoring/retraining the main search AI will be so much more expensive than retraining the special-purpose spam AIs.

8 days ago

[deleted]
8 days ago

layer8

Without a usable web search index, AI will be in trouble eventually as well. There is no substitute for it.

8 days ago