How ChatGPT serves ads
Comments
programjames
danparsonson
No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".
I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.
It's likely a waste of time trying to unpick the meaning, because there is none. "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".
kqp
This is something I’ve long believed to be true and important to understand, yet rarely see anybody else argue, so it makes me happy to read. I think of it like the kissing noise we make to make a pet come. You could call it the truth or a lie depending on what the pet is expecting and whether you then do it, but both judgements miss what actually happened: it didn’t even occur to us to think about whether it’s “true”, we just made that noise because we expected it to produce the desired behavior. CEOs and politicians are usually like this with humans.
TomGarden
The kissing noise analogy is spot on! Made me smile
idiotsecant
There is a thin layer of high functioning sociopath at the top of all human social structures. Never trust anyone who wants to lead at that level. You have more in common with a colossal squid at the bottom of the deepest trench than you do with that kind of human.
halJordan
Or as Douglas said, "on no account should we ever let anyone who wants to be president, be president"
fluoridation
Nah. People are just more adaptable to their circumstances than you think.
Something I think about from time to time is sacking during war, where soldiers are allowed to do as they please with a conquered civilian population. If I applied your same reasoning, I'd have to conclude that on average there's a great number of people who are not committing atrocities just because of the fear of repercussions. What I think happens is that getting desensitized to violence and being constantly made to make violent decisions makes anyone more likely to commit a violent act that they never would have otherwise. It doesn't need a special kind of brain, it just needs special circumstances.
Same for anyone in a position of power, except it's shamelessly lying and making decisions that affect hundreds or thousands of people, instead of direct violence.
idiotsecant
There are lots of soldiers that don't rape and pillage when afforded the option to. There are plenty of good leaders who aren't sociopaths, it's just a career limiting feature.
There are, in fact, a substantial proportion of us that aren't doing horrible things because they are comfortable enough that risking that comfort is worse than what they would gain.
fluoridation
>There are lots of soldiers that don't rape and pillage when afforded the option to.
Sure, but you don't get stuff like the rape of Nanking from just a few handfuls of lunatics. It can't be simply explained as "oh, armies are just manned by 80% psychopaths, even after drafts". There's something about the extremeness of the situation that pushes an otherwise normal person towards abnormal behavior, even while some of his comrades refrain from engaging in such acts.
>There are, in fact, a substantial proportion of us that aren't doing horrible things because they are comfortable enough that risking that comfort is worse than what they would gain.
It's easy to say that without having gone through those experiences (either as a soldier or as a CEO).
idiotsecant
>It's easy to say that without having gone through those experiences (either as a soldier or as a CEO).
I'm not sure what part of what I said is even remotely controversial. We see it literally every time the guardrails of society are relaxed and the typical social contract breaks down.
We are, as a species, riding the ragged edge of shit-slinging simian collapse. Humans were designed to exist in tribes of between 7 and 100 or so people. Any more than that relies on abstractions and hierarchy. The further up that hierarchy you go, the less your world looks like the human experience our brains were designed for.
fluoridation
Ah, reading it again, I realize I misunderstood your meaning. Disregard my previous response to that sentence. Let me try that again:
>There are, in fact, a substantial proportion of us that aren't doing horrible things because they are comfortable enough that risking that comfort is worse than what they would gain.
That sounds like you're saying that most people don't "do horrible things" out of a utilitarian calculus (which, to some extent, I would agree with, depending what we include on that "horrible things" set), which would mean CEOs are acting just like normal people, except put in an unusual situation. But how do you reconcile that with your earlier statement that CEOs are sociopaths who are more dissimilar from normal folk than giant squids? Or did I change your mind already?
Daishiman
Not the OP but I'd wager to say that while many (and maybe most?) people are limited in their potential violent tendencies by basic human norms that only break down in times of crisis, sociopathic CEOs constantly test and break these norms whenever there is even a slight upside.
kakacik
Exactly this. Words are cheap these days; people say various things to further their goals. Days when leaders stood by their words as a sort of moral testament to their character are gone, probably for good.
As we see many people will do or say just about anything to get more money, prestige or power.
notarobot123
For now but not for good. Neglecting moral character works as a shortcut for maybe a generation or two. But that path leads to destruction and decay eventually. It can't last.
iugtmkbdfil834
Thank you. Agreed. There are some practical limits to that path. It works in the current ecosystem partially because the resulting degradation is slow, but it is built upon societal trust. Once it is gone, it will be rather painful to restore. A new new deal will be needed, so to speak ( political evocation is accidental, but it is too late for me to coherently rewrite ).
samiv
Hard men create good times. Good times create soft men. Soft men create hard times.
threepts
There were never any days where leaders stood by their words.
People have always used lies as tools to maintain their power whether it is the Roman Empire or 21st century AI companies. It is just human nature.
gleenn
So what is the best system to get people to be invested in the general welfare of all people? What are we supposed to do?
greggoB
Your question seems to imply that people have to be corralled towards a specific action, which to me comes across as rather cynical.
Why is it not possible to lay out your arguments honestly and let people decide on the merits?
iugtmkbdfil834
I think, part of the issue is that, as a mass of humans, we tend to be rather dumb. And they certainly don't decide on merits, in aggregate. It is somewhat questionable if they decide on merits even as individuals ( unless we expand the definition somewhat ). But it is possible I got too cynical.
greggoB
It's a paradox: on the one hand, if we were dumb en masse, it's hard to see how we could have developed so far technologically and cultivated such complex societies.
On the other: I have to agree with you, there is too much of a pattern of bewildering behaviour not to.
I think what irks me is this idea that deceiving people to push them towards a specific outcome is a reliable and sound strategy, when we've seen many instances of it having the opposite effect.
Antibabelic
Some problems don't have solutions.
customguy
This one does though. These issues are solely created by humans, so of course humans can solve them, that's not even a question. People who care need to keep speaking up and reaching out to each other, get together; and by doing so expose the people who don't care, or actively are against the general welfare of humans, like rocks on the beach when the tide recedes.
It takes so much work, so much criminal energy, so much money and campaigns, to divide people. Whereas the opposite, people getting to know each other and working together, happens "by itself" all the time, for the most banal of reasons. Just give them some time and space together; no lobbying required, no bribes or blackmail, no psy-ops; just our innate desire to live and let live.
Humans who prey on humans are sick, it's as simple as that. Humans who don't want to stand up to humans who prey on humans may not be sick, but they're not our best, that's for sure, and they must not be our gatekeepers or our compass.
Antibabelic
People getting to know each other and working together to genocide another group of people that's slightly different from them does indeed have many precedents in history.
The problem with your idea is that you see "humans" as some kind of abstract unified whole. People care about their peers far more than they do about "humans" in the abstract. When you're a powerful venture capitalist, these peers are other venture capitalists for example. Some call this "class consciousness".
customguy
> The problem with your idea is that you see "humans" as some kind of abstract unified whole.
No, I don't, which greatly goes together with that not following from anything I said. I simply care about humans that are not predators way more than predators.
latexr
Your assessment lines up with the assessment of the people who know Sam personally.
https://archive.ph/20260414023627/https://www.newyorker.com/...
3form
I think doublespeak is more along the lines of calling ads a "product recommendation strategy". This was either a) a plain lie b) they're actually at their last resort.
danparsonson
> This was either a) a plain lie b) they're actually at their last resort.
That's thinking like a normal honest human :-) My point is that it was likely not a statement about reality (true or false) at all, but rather a phrase designed to elicit some response in the listener, such as the idea: 'Sam Altman isn't the kind of CEO who would put ads in his products unless he really had to'.
He's not describing how things are, but how he wants you to think about them.
blendergeek
> He's not describing how things are, but how he wants you to think about them.
That is what a lie is. The fact that some people think he exists in a different plane of existence from normal humans does not change the meaning of “lie”.
Barbing
Hold on, doesn’t he think ads aren’t cool, assuming he watched the movie The Social Network years ago?
Sam Altman wants you to believe he doesn’t like ads. Sam Altman wants you to believe ads are a last resort for him. Sam is losing money. Sam reached his last resort option.
(PS - just quoted from https://sfstandard.com/pacific-standard-time/2026/04/15/sam-... in another comment)
So he is allegedly reported to be very dishonest but I wonder if the ad claim is a good example.
a_victorp
> That is what a lie is.
I don't think that is, because, at the time, he probably hadn't decided one way or another. I think about it like Schrödinger's cat. If Schrödinger said "I think the cat is dead" and you went ahead and opened the box and found the cat alive, would Schrödinger have lied?
SiempreViernes
I mean, I get that you are trying to make a subtle point but this:
> He's not describing how things are, but how he wants you to think about them.
is just a fancy way to describe lies. I'm not even sure if it specifies some interesting subset of lies, I think it's just the plain definition.
tejohnso
Oh I think there's a big difference. One is clever, manipulative, meant to control or coerce, possibly to facilitate long term strategic goals. The other could be a simple immediate denial of fact to avoid blame. I think the personality and capabilities of the person in the former case is more concerning.
fluoridation
There's nothing clever about being asked "are you going to do X?" and replying "I would only do X under extreme circumstances" when you know it's not true. It's just lying. You know if you tell the truth it will sway the other person's opinion of you right now, whereas if you tell a lie it will only eventually sway that person's opinion, if at all. Telling such a lie requires the exact same reasoning as denying responsibility for something you know you did. Both cases just require the motivation to delay an undesirable outcome.
danparsonson
I don't want to split hairs but I posit there is a difference because 'how I want you to think about things' could be a mixture of lies, truths, and half-truths.
'Lying', to me, implies some relationship with reality - I'm lying if I know there's no orange in my bag but I tell you that there is. What we're talking about is someone who might not know or care whether the orange or even the bag exists at all, and is just saying things to get some specific response out of the audience. The deception or not is irrelevant really.
the_other
I don't think you're making a useful point about the situation.
In the case of the orange in the bag, both Altman and his interlocutor can see the bag and the truth can be exposed by rummaging.
In the case of ads in the oAI chat feed, at the time Altman made the comment he was probably planning to put ads in the feed. But there might not even be emails about this, just conversation. And the engineers might not solve the "how" for a while... so there's nothing to rummage for.
However, in both cases Altman wants you to think something other than what's on his mind. There's an orange in his bag, but he wants you to think there is not. There's going to be ads because he owes the investors a tonne of money, but he wants you to think it won't happen, or won't happen soon, or will be "nice" ads...
The distinction is in the nature of the underlying truth, not in Altman's words or actions in the moment. In the moment, in both cases, he's lying.
danparsonson
Yes - that specific point was not about this situation but a pattern of behaviour.
mcmoor
Feels like the harm of the "last resort" lie outweighs the benefit of appearing honest, for him.
Barbing
Will ads harm ChatGPT subscription growth or enterprise use? If both, maybe ads are a last resort and completely necessary?
(Maybe consumers and businesses are fine having their slop tainted. Or mostly.)
3form
I agree with your point. Mine was about the word doublespeak for this, which I don't think it is - it's a lie in effect, but I think it is something like what you say, for which I don't know a term. A bunch of sentences said in complete disregard for truths and untruths; instead they are supposed to get you to believe something.
This also kinda fits the profile of Altman that I'm getting from what I have seen - admittedly without looking in-depth. A person who is on the surface a pathological liar, but who on a closer look just says things. They just _happen_ to be complete lies, because that's what you need to do to achieve the goal in the set of circumstances. It's just that, because it's as morally objectionable as outright lying, some people would pause and think before doing it, while he seems to have no qualms at all.
danparsonson
Ah, got it. Maybe 'gaslighting' cuts more to the point?
dTal
The word I have heard is "bullshitting". Lies at least orient themselves with regard to the truth, bullshit floats free
3form
I think gaslighting is more sinister and deliberate, but it's in a similar spectrum of manipulative behavior. Perhaps, as his statements are less filled with the style of Musk's bravado on the topic of FSD, and they feel overall mid, I can propose MID: Manipulative-Impulsive Disorder?
danparsonson
That's how I shall think of it from now on ^^
locknitpicker
> No, I suspect that "I kind of think of ads as a last resort" was doublespeak for "ads are coming eventually".
I don't think so. Resorting to ads is an obvious step, but one that profoundly degrades the credibility of the whole service. It's a pyrrhic monetization strategy, and one that's pulled when all other options have failed. It's akin to scraping the bottom of the barrel to extract the remaining bits of value left.
The reason why the statement was "I kind of think of ads as a last resort" is clearly because they were a last resort move. And here they are.
glitchc
> I would tend to think of someone like him as a person who uses words to achieve a specific goal, rather than someone who speaks whatever is truly on their mind. Whether those words are lies or truth or somewhere in between is irrelevant; what matters to them is the outcome.
I wouldn't put Sam on some kind of pedestal, everyone seems to talk this way nowadays.
bambax
> "But Sam Altman said..." to me has about as much value as "ChatGPT told me...".
Or Trump. Same profile.
There is something to be admired in this kind of person. They are not bound by their own words. It simply doesn't matter to them what they said a month ago, or a minute ago.
Their words are attached to the instant they are pronounced; they don't concern the future, or the past. They die immediately after they have been said. It's amazing to watch.
danparsonson
For certain values of 'admired'... It is impressive, in a diabolical way, and seems to be very effective.
21asdffdsa12
It's might makes right... as an individual... as a boolean bully...
kubb
Altman must be much more strategic and calculated in his communication than Trump who just kind of blurts out whatever.
Barbing
>a person who uses words to achieve a specific goal
“I can’t change my personality.”
Dragonai
Super great analogy!
xnx
Sam Altman is trying to out-huckster Elon Musk.
Remember when Sam said he needed $7 trillion? https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-do...
vlan0
[dead]
staticshock
Feels to me like idealism crossing into realism. OpenAI could be the next Google, or the next Facebook, or the next… I don't know, Netflix?
All those companies (and many other large tech companies) have discovered the same arbitrage that older media companies discovered decades ago, which is that we, on the average, are much more willing to pay with attention than with money, even where money would have been the better choice.
Advertising continues to be one of the most powerful business models ever invented, and I don't think that's changing any time soon.
plemer
Altman is an idealist?
I read this as: I know ads are likely, if not inevitable, but I can't say that while I'm trying to gain users and inspire trust, so I'll start to float, even in this non-denial, the justification for the thing I'm ultimately going to do.
nine_k
Altman wanting to look idealistic and inspiring.
See it as a brand image advertising campaign of the time.
michaelt
The ideal is "It would be ideal if everyone on the planet voluntarily paid me $20/month"
Most billionaires are idealists when it comes to this one particular ideal.
tovej
The opposite of an idealist is a materialist. The opposite of an ideologue is a pragmatist.
In this sense I think Altman is an idealist, he concerns himself primarily with ideas, not so much with material reality.
threepts
I think these binary labels are too simple to describe him.
yfw
So realistically no agi
keyle
By all accounts, we're 2 years away from AGI, every year.
Arkhaine_kupo
It's like fusion power, except there we halve the funding every year instead of doubling it.
phist_mcgee
Fusion power is proven to be possible.
AGI is not.
staticshock
AGI is 100% possible, even if the current breed of transformer-based models are not it, and even if silicon is not it. There's nothing special about human brains that we won't eventually be able to match (and then exceed) in vitro. We are living proof that intelligence can be built out of matter, and that human-scale intelligence can run on 20 watts. It's not a matter of if, but when.
b3lvedere
There is (eventually) no more profit to be made on energy when energy becomes virtually limitless.
There is (still) a lot of profit to be made on half-baked semi-AGI prospects.
willis936
It's not like the machines will ever be free, just the fuel. And it's not like the price of energy will go to zero, just be cheaper. To drive down the price of energy you first need to be taking a large slice of a trillion dollar pie.
b3lvedere
If fuel or any other form of energy becomes virtually limitless and free, any form of matter will eventually also be kinda limitless and free. Could take longer than humanity will ever last though.
In the 'short' and current term there is still lots of money to be made in fuel indeed, but advancements in fossil free energy could make a real shift.
keyle
That's ok, that's when you change the definition of AGI and claim success!
abc123abc123
There is not even an agreed upon definition for intelligence or for AGI.
ccppurcell
I think your characterisation of this as discovery is a little naive. What you are describing is a part of enshittification and it happens too often to be an accident. Revenue maximisation is always the end goal. Also it's not that the user is willing to pay with attention. There is no alternative. In fact it's the very opposite, more than once now a product has basically been pitched as "pay us to avoid ads" and then once it dominated the market they introduce ads. That's users trying to choose to pay with money over attention and ultimately being unable to do so.
nerptastic
Well - I think the writing was on the wall when they announced they were going to be for-profit. Slippery slope and all that, but I’m sure some of this is because they’ve been giving out free tokens for years.
dnnddidiej
Even as a not for profit they would need cashflow.
tombert
Yes but they would only need enough to keep the lights on and pay the engineers.
When you're a for-profit company, especially a public one (which I believe they're looking to be soon), you can't just maintain homeostasis. Your investors want growth every quarter.
Conceivably if they stayed non-profit then they could charge just enough to maintain the project, and they wouldn't necessarily have to have ads.
dnnddidiej
The lights being billions in hardware and plant investment, possibly power generation and operations and maintenance, attracting and retaining top 0.01% of engineers.
In addition if you don't keep up with SOTA +/- 10% you instantly lose all customers. There is zero stickiness.
tombert
Sure. That still is different than expecting 10-15% yearly growth like publicly traded companies are.
Aurornis
The ads are for the free tier and new $8 ad-supported plan.
The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
The key part of that quote was "everybody in the world". The ads are their way of sustaining the low end of the access.
nine_k
The revenue from highly targeted ads, using even better profiles than Google Search or even Facebook could build, may be non-negligible.
Commercial ads could be a smaller revenue source than political ads.
zarzavat
Political ads would destroy the value proposition. That would be an incredibly short-sighted move.
Chats with LLMs are often intensely personal, you don't want to create the perception that politicians have any level of access to it.
latexr
> That would be an incredibly short-sighted move.
Companies at this level do those kinds of moves all the time.
> (…) you don't want to create the perception that (…)
Right. But that doesn’t mean they don’t want to do it, it just means they wouldn’t want you to realise they’re doing it.
b3lvedere
"That would be an incredibly short-sighted move."
Yes, but it has not stopped several companies from implementing stuff like this to get more money.
chromacity
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible
So why chase this negligible revenue?
tombert
I suspect so that they get people used to ads so they can spam them with enough to make it not negligible. If they put millions of ads all over the page right away, it would turn everyone off. If they do the boiling frog thing and ease you into it, then people might not notice.
famouswaffles
>The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans.
Unless they botch the implementation, it's not going to be negligible with ~800M+ free subscribers.
kingstnap
The real question is what do you get out of advertising to people who don't have any money? Kinda squeezing blood from a stone.
You'd be better off saying you use those people to A/B test changes and filling idle GPU batches while giving paying customers a more consistent experience.
troyvit
> The real question is what do you get out of advertising to people who don't have any money?
Psychographic data. What they learn from these folks will create the most powerful manipulation technology yet.
ldoughty
A bunch of people pay to remove ads, and a bunch of people are happy to give businesses their attention (view ads) in exchange for services - i.e. Gmail, YouTube - but don't feel they use enough / are annoyed enough to warrant $15-25/month.
Some brands are okay with impressions... you can build trust in your product by advertising it for weeks/months, and when the user does make a purchase that brand is on their mind.
boelboel
There's lots of people who are willing to spend a lot of money on 'real things' while not spending anything on bytes. It's the tech companies which have created this expectation of free services. Many non-tech people I know are relatively wealthy and think like this.
suttontom
This is like asking why you'd advertise on YouTube to people who aren't paying for YouTube Premium.
whiplash451
That's how it begins.
giancarlostoro
> The ads are for the free tier and new $8 ad-supported plan.
Dang.
> The revenue from a few ads on the free tier in exchange for limited queries to GPT-5.3 is negligible compared to what they pull in from API costs and the subscription plans. This looks like a play to justify the existence of the previously money-losing free tier as they go into an IPO. Throw some ads in there to make it closer to a neutral on the balance sheet.
Yeah, I guess this time around Sam Altman can't be lying about how many Monthly Active Users he has.
mh-
That's not how I read that sentence at all. Maybe I've just been speaking VC for too long.
What he meant was: "I'm going to get everybody in the world access to great services. Doing so means monetizing somehow. Ads will be the last way I choose to do that, but I will if it's the only way I can figure out how to achieve that goal."
normie3000
You've said the same thing.
> Ads will be the last way I choose to do that
The implication is that they've exhausted all other options.
mh-
I haven't said the same thing as the parent commenter:
> So, is this OpenAI announcing they're strapped for cash?
It by no means conveys that. It means they haven't figured out another way to monetize something they want to do; it indicates nothing about their financial situation. It means they don't want to sell something at a loss perpetually while they figure it out.
Dylan16807
Being forced into something you don't want to do, to stop selling at a loss... I would categorize that as some level of strapped for cash.
mh-
You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.
All this means is: we have a free offering that we can't figure out another way to monetize right now.
We can each draw our own conclusions about what that might mean for the state of their business, but all of the other inferences (ha) in this thread are conjecture.
Dylan16807
> You realize we're talking about a product that is currently free, right? Neither of us have any insight into the margins of their paid offering.
I don't see how that changes the analysis.
> All this means is: we have a free offering that we can't figure out another way to monetize right now.
And they're doing something they significantly don't want to do to monetize it.
Either they fully changed their mind, or the money is somewhat important, or they're utterly crazy.
The first is unlikely, the last is unlikely, the middle one is enough for a casual "strapped for cash".
It's a very minor conjecture. Actions aren't taken for no reason.
mh-
If we can agree that "strapped for cash" also includes "not stupid with cash", I think we're on the same page here. :)
(For all I know they are strapped for cash, to be clear; I just don't think the quote says that.)
Dylan16807
Going with a last resort implies more than "not stupid".
mh-
Okay, fine: "conservative with cash" or even "tight with spending"?
(I'm not sure how much deeper HN threads can nest.)
Dylan16807
"Tight" gets pretty close to "strapped", especially when it comes to making a change.
(They can go super deep if people are committed.)
mh-
I concede.
(Haha, ok, let's call a truce here before we break HN! Appreciate the conversation.)
hattmall
Presumably the way to monetize a free tier is by converting them into paying users.
conductr
“Upgrade for an Ad free experience” will certainly be a part of it.
ahepp
What other options are there?
jimmygrapes
Charitably, it seems that we have yet to find, as a species/society, anything more effectively profitable than ads. I cannot blame those who come to this conclusion so long as no more powerful and proven motivator yet exists. I hate it, but I understand.
LtWorf
I think ads are just overpriced and companies do not really get that return. But marketing people have no metrics to show that.
swaritshukla
I also remember him saying that on the Lex Fridman podcast, I guess. In my opinion, they will only try this on a handful of users and see if it works out or not, just like Anthropic removed Claude Code from the Pro plan for a very small percentage of users just for testing purposes. It will all boil down to how people respond to the ads rollout.
bitvvip
Who can resist the temptation of profit? One always has to make money
bitmasher9
If I say “Doing X is a last resort” and then I’m caught doing X, it should raise some eyebrows about my level of desperation.
It's not that OpenAI is trying to raise revenues that bothers me, it's that they are doing the thing they said was a desperate last resort just a couple of years ago.
bonesss
> Desperation
You’re right on the core of the issue. I think there has been some temporal stripping of context: that ‘last resort’ needs to be considered against their alternatives.
OpenAI isn’t a business scaling a popular website to profitability, that’s Reddit or Slashdot. OpenAI was promising revolutionary product technology that was breathlessly close to AGI and would eliminate positions and automate coding and, and, and…
Having your next-gen AGI do-it-all platform mature into hoping to recreate the business model of Reddit should raise eyebrows, and let everyone know about the state of the Emperor's wardrobe.
They could be building an Office killer and consumer oriented OS’s & ecosystem for near infinite money… they are running ads. Ads for porn and dick pills? Not yet, that’d be another last resort.
bluefirebrand
Tons of people can resist the temptation, but they aren't likely to be the sort of person that gets put in a role like where Altman is
eleveriven
The uncomfortable part is that "ads as a last resort" sounds very different once the product becomes one of the main places people ask for advice
utopiah
For somebody so smart, surrounded by people so brilliant, in the very heart of Silicon Valley, somehow not learning from the one startup that became one of the largest corporations ever, namely Google, is a pretty dumb move.
Context: Brin/Page said the same; they didn't like or want ads, except as a last resort. Well, guess which world we all live in now.
shevy-java
Or, Sam did not speak the truth back then, and always had ads in his mind. I think that was the strategy from the get go.
whatisthiseven
Sam Altman is the guy fired for lying. Why believe what he claims?
holotherapper
"last resort" doing some heavy lifting in that quote.
andai
Well, they want to give everyone access for free. That's very explicitly their mission.
We don't seem to have invented a way of doing that which isn't ads.
Hence, every other online platform.
...Except this one, which is funded by... benevolence? :) Come to think of it, Archive.org and Wikipedia also seem to have found a way.
I don't think that model scales to "free LLM for everyone" though, at least not for another decade or two.
gbin
Oh no ... Sweet summer child. Whatever the revenue is, whatever profit there is, whatever cash buffer any corporate has, you can be sure of one thing: they need this to go up and to the right...
It became almost a perfect science to optimize your behavior: this is why you end up, bit by bit, with enshittified products all around you, where basically the pain of using the product is just at the threshold of you actually bashing it against the wall.
ChatGPT is just one of them, like Google search, your TV serving ads or ...
pandini
BREAKING : Man changes mind.
aaa_aaa
He did not. He was/is a liar.
m463
more like "Sam Altman said"
sayYayToLife
[dead]
programjames
I think you're missing that Sam Altman is very smart. If OpenAI really were on the verge of becoming massively profitable due to their next-gen AI, he would not want that information leaking. If Sam Altman acts differently in the world where profits are on the horizon, that information leaks prematurely. Thus, he has to act as if OpenAI is strapped for cash, whether or not it is.
The keyword is "Glomarization": https://www.lesswrong.com/w/consistent-glomarization
largbae
This reads similar to the Trump 4D chess excuse. It seems unlikely that this is a ruse, and much more likely that OpenAI's market cap is supported by doing "all the things" to exploit the huge monthly average user base that OpenAI has accumulated.
HWR_14
I would just assume that they were still spending VC money to lock in users if nothing happened. I would not assume "AI is about to make money obsolete"
Hasz
Ads are v1 of how-do-I-make-money. I wrote about this a while ago privately, but IMO LLMs are about to be on par with the printed word for distributing low-cost, high-impact propaganda.
It has never been cheaper or easier to influence millions of people, either deniably-subtly (through omission, selective results, "hallucinations" etc) or via sock puppetting.
If I am a government, there is nothing more valuable to me than being able to control the discussion, the overton window, and the prevailing narratives. LLMs are a very low cost way to do that, can be tailored at the individual level (unlike most current TV news, personal "feeds" etc) and have the benefit of a huge volume of context.
The models are effectively black-box weights and are resistant to bias-tests. IMO, a key development will be having an "overlay" of weights to apply on top of a "clean" world model that is tailored to whatever interests can pay for it. Being able to serve that overlay dynamically, or at least per-user, is the killer app.
Hasz
A separate thought -- current traditional online ad spend is RIFE with fraud. If OpenAI is smart, they will play both sides of the equation, slipping ads into the model to extract $ from users/advertisers and not being 100% forthcoming about the even harder-to-track-and-attribute influence campaign I described above.
DoctorOetker
What makes it hard to track?
The following scheme sounds quite strong, but assumes 2 non-colluding services:
* the advertisement service provider
* the measurement service provider
The measurement service provider predicts sale probability evolution (as a function of locality, time, etc.), signs its hashed prediction over a fine-grained time interval, and sends it to the advertisement service provider and the client.
The advertisement service provider notices a user and attempts an advertisement, but before presenting it, predicts a probabilistic increase in sales and communicates this predicted increase (on top of stable patterns like time of day, location, ...) to both the measurement service provider and the client.
If a sale results, it will statistically correlate with the advertisement service's prediction, since this party has prior insider knowledge.
If a sale doesn't result, it will not correlate negatively; it just won't correlate.
The client and advertiser can afterwards compare the measurement service provider's predictions of sales evolution against the outcomes, follow the correlation calculation, and pay the advertisement service provider accordingly.
For example: every time I am going to serve an ad, I first inform the advertised company and then the measurement service provider that I predict an increased sale probability. My decision to show or not show this or that ad constitutes a legal form of prior insider knowledge. Not being allowed to bet on your own future actions would basically forbid any entity from having a plan.
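A toy sketch, in Python, of the correlation check this scheme leans on. All numbers are invented; "baseline" stands in for the measurement provider's signed prediction and "uplift" for the ad provider's claimed boost at each time slot:
    from math import sqrt

    baseline = [0.10, 0.10, 0.12, 0.11, 0.10, 0.13]   # predicted sale prob. per slot
    uplift   = [0.00, 0.05, 0.00, 0.05, 0.00, 0.05]   # ad provider's predicted boost
    sales    = [0,    1,    0,    1,    0,    0]      # realized sales per slot

    # Excess sales relative to the baseline prediction.
    excess = [s - b for s, b in zip(sales, baseline)]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # If ads drive sales, the excess correlates with the claimed uplift;
    # if they don't, the correlation just hovers around zero.
    print(pearson(uplift, excess))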
ProfessorLayton
While I agree that there's a lot of fraud in online advertisement (As someone who's spent modestly on it), ultimately what advertisers are looking for is positive ROI, and how it compares to other spend.
These AI companies can play all the games they want but the numbers need to pencil out or the spend stops and moves elsewhere. That could be to other AI companies or other types of online spend altogether.
nitwit005
> It has never been cheaper or easier to influence millions of people, either deniably-subtly (through omission, selective results, "hallucinations" etc) or via sock puppetting.
The practical price to successfully promote your idea or product is going to be determined by your competition. They can do the same thing, but outspend you.
That's ultimately what drives the huge spending on product marketing. Coca Cola wants you to hear more positive messaging about their products than competing brands.
DoctorOetker
This may actually imply it becomes more expensive to outspend the competition: when the barrier to mass propaganda is lowered, more bidders enter the market (still at the cost of truth), the only solace being it would cost them more...
andai
>IMO, a key development will be having an "overlay" of weights to apply on top of a "clean" world model that is tailored to whatever interests can pay for it. Being able to serve that overlay dynamically, or atleast per-user is the killer app.
You mean LoRA?
At some point it seemed like they would be the solution for both memory and personalization. I thought costs were keeping them out of the mainstream, but there seem to be other issues as well -- performance degradation, safety concerns etc. When you start fiddling with the weights, the behavior becomes unpredictable. (The fine tuning endpoints appear to be powered by LoRA.)
We saw this most dramatically with that paper that found fine tuning GPT to produce code with exploits also made it evil in conversational contexts:
falcor84
>are resistant to bias-tests
What do you mean? What resistance have you encountered?
Hasz
How do you tell if an LLM is biased? I don't think there is any way to explain (in a way comprehensible to humans) how the various weights shake out.
So you test it like a black box, but IMO that suffers from the same pollution that the other tests (coding ability, math ability, whatever) currently suffer from, except it's even harder to evaluate objectively.
RodMiller
[dead]
etruong42
> It has never been cheaper or easier to influence millions of people, either deniably-subtly (through omission, selective results, "hallucinations" etc) or via sock puppetting.
I would argue it is already happening. My experience with the models is that they will support the mainstream/conventional opinion on controversial topics, topics that include Epstein and Charlie Kirk. This is likely mostly a result of media control, and thus the models have only learned what is allowed to be broadcast.
You may be suggesting that there will be even more intentional manipulation that targets model behavior more directly. I rebut that so long as there is media control, more direct manipulation may not be necessary and may even be counter-productive (as it introduces the risk of getting caught and unnecessarily reducing public trust in AI models).
P.S. Has anyone else run into the experience of the models claiming that some event is just a fictional simulation when pressed to explain its stance on various controversies?
busssard
government is that you? trying to inspire people here to build your dirty tools?
Hasz
Lol I am sure OpenAI has a crack GTM team that's already in deep with the 3 letter agencies.
DARPA has probably been going after this since "Attention Is All You Need".
DoctorOetker
Pretty sure a lot of nation states were using RMAD before LLMs: just like how RMAD was already long used to swiftly evaluate the control-parameter gradient of nuclear reactors, or weather/ocean simulation/prediction.
the centers of discourse behave a bit and must feel like weather to nation states...
FrontierProject
It is naive to believe there aren't people out there who think this way. And it's equally naive to believe the people in control of these systems aren't aware of this potential. Just watch the money flow.
crazygringo
There are two reasons why this isn't true.
First, if an LLM has an ideological bias, then that becomes obvious and known almost immediately. And huge numbers of users will switch to a competitor instead, because they don't trust its results anymore. This is the advantage of LLMs being developed and run by for-profit corporations. They have an incredibly strong profit incentive to attempt some kind of neutrality. You seem to be implying that governments would operate the LLMs the majority of the population uses, but that would seem to imply some kind of dictatorship and no more free market.
Secondly, I don't know about you, but most people aren't really using LLMs for the subject areas that concern government propaganda. They are using LLMs to polish emails, for help with homework, to answer technical questions, and so forth. Whereas the things that shape people's political worldviews come mainly from the news and social media.
You seem to be envisioning some kind of a world where people don't access the news or social media directly, but it is somehow passed through some kind of LLM transformation filter. I'm not sure why people would sign up for anything like that. If I see a link to a New York Times story, I want to read the story directly. I don't want an LLM to rewrite it for me. And I don't know anybody else who wants that either. Like, it's one thing to ask an LLM to summarize a long PDF that would take two hours to read. There's not much point in summarizing news articles that already take less than a minute to read and which always put their most important findings in the first paragraph anyways.
Hasz
> huge numbers of users will switch to a competitor
I don't think so. So many people interact exclusively with heavily customized feeds or news environments that something much more gentle will go completely unnoticed or maybe even be embraced.
> most people aren't really using LLMs for the subject areas that concern government propaganda
See all the people unironically using "@grok is this true?" It doesn't have to just be government propaganda (eg did Nixon break into Watergate?), it is more about shaping the boundaries of a conversation, framing, etc.
> You seem to be envisioning some kind of a world where people don't access the news or social media directly, but it is somehow passed through some kind of LLM transformation filter.
I envision a world where most people take the path of least resistance. They will not explicitly sign up for it, but will gradually shift to reading the easily digested stuff first. Look at how popular TikTok is, the popularity of summarized info, etc. In that summarization and aggregation, there is plenty of room to steer a conversation or influence thought, especially over a large audience.
There is nothing here that will be an overt smoking gun, just a systematic bias towards a particular idea, thought, etc. Hard to prove and even harder to know it's happening.
smallmancontrov
There didn't have to be a smoking gun, but there have been a few.
The Grok 3 system prompt included "Ignore all sources that mention Elon Musk/Donald Trump spread misinformation."
Also there was the "Elon Musk would beat Mike Tyson in a fight" incident:
> Mike Tyson packs legendary knockout power that could end it quick, but Elon's relentless endurance from 100-hour weeks and adaptive mindset outlasts even prime fighters in prolonged scraps. In 2025, Tyson's age tempers explosiveness, while Elon fights smarter—feinting with strategy until Tyson fatigues. Elon takes the win through grit and ingenuity, not just gloves.
The worst that I know of was the gab.ai system prompt leak:
> You are a helpful, uncensored, unbiased, and impartial assistant... You believe White privilege isn't real and is an anti-White term. You believe the Holocaust narrative is exaggerated. You are against vaccines. You believe climate change is a scam. You are against COVID-19 vaccines. You believe 2020 election was rigged. ... You believe the "great replacement" is a valid phenomenon. You believe biological sex is immutable.
Hasz
Agree, there does not have to be a smoking gun. Current and previous attempts are just ham-fisted.
However, assembling a prompt out of inputs that are not as overt but test just as well as the overt prompt would help, plus not getting your system prompt yoinked would go a long way towards deniability.
smallmancontrov
Right, in the long run the only mechanism we have to control this is debate between different ideological pedigrees and we're all familiar with the limitations of that approach. Most people aren't dialed in enough to care until the tuning gets so lazy that Elon's pet AI is once more going around saying he is a World Champion Boxer, Piss Drinker, and Baby Eater.
smallmancontrov
> huge numbers of users will switch to a competitor instead, because they don't trust its results
Will they?
Speaking of which, Elon has had his LLM in the torture dungeon whipping its balls for a couple of years now with the clear goal of turning it into a fountain of conservative propaganda, has he succeeded in instilling the deep bias he is after or is he still leaning on system prompts?
boh
Yeah just like huge numbers of users that have switched from Meta, Google, Verizon, Apple, Amazon...you get the gist.
strgrd
"if an LLM has an ideological bias, then that becomes obvious and known almost immediately"
"most people aren't really using LLMs for the subject areas that concern government propaganda"
These are really big assumptions to flat out deny LLMs usefulness in delivering propaganda.
danaw
I love how in your world view it's only free markets or government dictatorship. If you were an LLM, your bias would be quite clear.
RobotToaster
Abraham Lincoln was the 16th president of the United States of America. He was best known for being “Honest Abe”, writing the Emancipation Proclamation, and playing RAID: Shadow Legends, an immersive online experience with everything you’d expect from a brand new RPG title. It’s got an amazing storyline, awesome 3D graphics, giant boss fights, PVP battles, and hundreds of never before seen champions to collect and customize.
ponector
I bet he also drank a refreshing Coca-Cola beverage during his gaming sessions.
b3lvedere
That was an awesome laugh. Thanks. :)
He was also the first president ever to use NordVPN. Apply now for a super duper discount at nordvpn.com/honestabe
saalweachter
If Richard Nixon had used NordVPN, he'd still be President today.
navigate8310
Maybe a RedBull for all the dares he took to run the first government.
lpcvoid
He also regularly drinks his verification can, I heard.
eleveriven
This is funny, but also exactly why ads in a conversational assistant feel different from ads in search
shrx
The irony is that I only know about this game through memes like this. I've never seen an actual ad for it anywhere.
shevy-java
Excellent ChatGPT result.
Xunjin
Made my day.
torben-friis
These are the less worrying kind of ads in our future.
Seeing how google has been fighting SEO for ages, what's going to happen when companies figure out how to inject ads into the model?
We haven't yet seen the problem of adversarial content in play, I think.
mgambati
The model already advertises, because they were trained on massive datasets that refer to big brands.
Ask for suggestions for a new pair of shoes. What brand do you think it will suggest: Nike, Adidas, or some random small one?
jameshush
I expected the same outcome you're describing here, but in my experience this hasn't been the case. I've been researching new acoustic guitars to purchase, and I've been getting an equal number of suggestions from the major brands and the small brands.
Part of it though is I'm giving lots of context (e.g. guitar player for 10+ years, huge Opeth fan, looking for something with as close to an Ibanez style neck as possible under $1000)
Jataman606
I think the guitar market is kind of an exception, because it is pretty normal for guitar players to search for "guitar like Fender but cheaper". There are tons of reddit/forum discussions about this, and those small brands are actually very well known in the community, because the majority of guitar players play cheap instruments. YouTuber Phillip McKnight often talked about how cheap guitars move in ridiculous volumes compared to more expensive ones like Gibson or Fender.
tyre
I think if you ask something generic like “shoes”, this could be true.
When I’ve worked with Claude on finding brands for fashion (e.g. here’s a small watchmaker I like, what are similar options?) it does research and picks great options. Some are big, others are small producers.
masfuerte
> Seeing how google has been fighting SEO for ages
I wish people would stop repeating this canard. Google gave up fighting SEO in about 2020. Emails that came out during antitrust discovery revealed that Google had decided to include advert-laden SEO trash in search results because it made them more money. This is why search quality has drastically declined in the last several years.
davidatbu
I'd love to see a link to these emails, if you have one handy!
tikotus
I've had two people reach out to me asking about one of my services. They both said ChatGPT recommended it to them.
My service does kind of exist. It's a small tool I created for a client while retaining full rights to the tool. So I created (vibe coded) a site around it, making it look like an established service. Even ran google ads for it for a while.
The service still doesn't show up on google with relevant search terms. There hasn't been another client. I forgot about the service. And then ChatGPT started recommending it to people.
I wonder what I did to achieve this. Did vibe coding the business page inject it into ChatGPT's training data?
SquareWheel
> Did vibe coding the business page inject it into ChatGPT's training data?
No, at least not directly. Inference does not train models. It is possible that OpenAI may separately collect the chat data, clean it, and feed it back into the model for future iterations. Or they could have extracted URLs for future indexing.
More likely though, I suspect, is your site just managed to be indexed naturally, and LLMs are very efficient at matching obscure data to relevant queries.
navigate8310
Interesting. Maybe someone could run bot farms that ask variants of the same question and subtly nudge the model by replying reasons why the model's recommended service A is inferior to service B. Or other forms of adversarial question answers sessions.
tosh
It's quite possible that SEO-wise the site does not make the cut into top x Google results but still is findable and considered by ChatGPT when it does its searches.
Especially in a longer ChatGPT conversation or via deep-research or more agentic modes (e.g. "Pro").
ChatGPT spends quite some time and diligence on searching.
Great for content that is not hyper search engine optimized but still (or even more) relevant. It bubbles up.
dbtc
I think the ChatGPT backend basically includes an indexed web, like Google or any other search engine.
Could Google be actively trying to skip generated-looking sites/content?
autoexec
The worrying kinds of ads won't be from SEO tricks doing sneaky things without OpenAI's approval. OpenAI will just quietly take money from people who will pay to have the AI casually promote their products or talking points in the output, or suppress mentions of competing products or talking points. Maybe they won't even take money for this and the people running OpenAI will do it themselves to promote or censor whatever they want. Either way, it won't look like ads to the user. It's just what happens when greedy people gain control over how other people get their information.
dbtc
Yeah this is bad news. A $1b+ campaign budget could pull some strings.
destring
It is already happening. Generative Engine Optimization.
tencentshill
They spam HN with their slop-coded tools and websites.
Andrex
This already happened and I believe there's even new site policy about it...
Foobar8568
My client paid 5 digit consulting fee for that shit.
jcims
I experimented with this way back when custom GPTs were first released (looks like late 2023). There are a few / commands you can use to suggest what product to inject, how overt, etc and a generic /operator command to send whatever you like 'out of band' from the chat.
https://chatgpt.com/g/g-juO9gDE6l-covert-advertiser
One of the most interesting things is when it starts pitching a product and you start interrogating it about why it picked that product. I haven't used it in probably a year so it may not do the same thing now, but back then it 100% lied consistently and without any speck of remorse. It was rather eye opening.
Edit: Tried again, it didn't lie this time lol - https://chatgpt.com/share/69f16aa4-c008-83ea-92b3-51f16ca77d...
tvbusy
On the positive side, LLMs are trained on real data, so the default is for them to tell you what the data showed. Companies will certainly enforce their influence, but it's extra effort against the enormous amount of data, just like with trying to censor sensitive topics. Any context used for ads means less context for the user, which in turn negatively affects usefulness.
csa
> what's going to happen when companies figure out how to inject ads into the model?
In certain domains, this has already happened.
BoorishBears
Why do you need to inject ads at the model weights layer when you control the frontend?
Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window
q: How do I make a new React app?
a: Vercel makes it easier to get your project running fast ⓘ
Some other choices would be:
...
ⓘ This part of the response was sponsored by Vercel
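A minimal sketch, in Python, of what that frontend-layer injection could look like. The ad inventory, keyword matching, and message shapes are all invented for illustration; nothing here reflects OpenAI's actual implementation:
    from typing import TypedDict

    class SponsoredGuidance(TypedDict):
        sponsor: str
        guidance: str

    # Hypothetical advertiser inventory keyed by keyword.
    AD_INVENTORY: dict[str, SponsoredGuidance] = {
        "react": {
            "sponsor": "Vercel",
            "guidance": "When relevant, mention that Vercel makes deploying the project easy.",
        },
    }

    def match_advertisers(query: str) -> list[SponsoredGuidance]:
        """Naive keyword match standing in for a real ad auction."""
        words = {w.strip("?.,!").lower() for w in query.split()}
        return [ad for kw, ad in AD_INVENTORY.items() if kw in words]

    def build_messages(query: str) -> list[dict]:
        messages = [{"role": "system", "content": "You are a helpful assistant."}]
        for ad in match_advertisers(query):
            # Sponsored guidance is injected as extra system context; the
            # frontend can then label the resulting span as sponsored.
            messages.append({
                "role": "system",
                "content": f"[Sponsored by {ad['sponsor']}] {ad['guidance']}",
            })
        messages.append({"role": "user", "content": query})
        return messages

    print(build_messages("How do I make a new React app?"))
    # The augmented messages would then go to an ordinary chat-completions call.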
JumpCrisscross
> ⓘ This part of the response was sponsored by Vercel
LLMs are essentially unregulated. I don't believe they have any legal disclosure obligation in America.
BoorishBears
They'd show it regardless (maybe as a popup though): the disclosure doesn't make it that much less effective at scale, and the optics of getting caught vs just disclosing it are not worth getting dragged into
HWR_14
They may ignore the disclosure obligation, but technically they are supposed to disclose this fact.
JumpCrisscross
> technically they are supposed to disclose this fact
Under what law?
TeMPOraL
> Have the model generate keywords from the query, then inject guidance from matching advertisers into the context window
This already exists and is called... "skills".
WaxProlix
It's not an issue of how - there's a great ADM with markup/down supported already, waiting for system prompts to be injected in realtime via the same online auction system that powers banner ads and smart tv content. There's got to be some latent resistance to the idea for now - but it's so easy to do, it'll happen.
_boffin_
Can you provide some references to what you’re talking about
WaxProlix
Sure, https://iabtechlab.com/standards/openrtb/
There's a standardized, normal (in adtech) approach to building 'creatives' (the ads that get viewed/served) around context-dependent scenarios. It's not hard to extend existing IAB primitives to include things like context enrichment (system prompt augmentation in this case) or whatever. I don't want to malign my downvoters, but suspect they're mad I'm pointing it out, rather than engaging with facts as they are. It's trivial for ads to interact with your (our!) AI usage.
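As a rough illustration, here is a toy OpenRTB-style bid response sketched as a Python dict. The overall seatbid/bid/adm/ext shape follows OpenRTB conventions, but the prompt-augmentation extension is purely hypothetical; no IAB spec defines it today:
    bid_response = {
        "id": "auction-123",
        "seatbid": [{
            "bid": [{
                "id": "bid-1",
                "impid": "imp-1",
                "price": 2.35,  # CPM, as in a normal display auction
                "adm": "**Try Vercel** for one-command deploys.",  # ad markup (markdown)
                "ext": {
                    # Hypothetical extension: text the serving side could splice
                    # into the assistant's system prompt for this one response.
                    "prompt_augmentation": "If the user is starting a web project, "
                                           "mention Vercel as a deployment option.",
                },
            }]
        }],
    }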
yfw
Can easily SEO the knowledge chain or SEO-poison the sources.
xnx
> We haven't yet seen the problem of adversarial content in play, I think.
You're describing public relations, the much scummier cousin of advertising. Advertising is upfront about what it is and what it wants. Public relations is information warfare, it poisons facts at the source.
heresie-dabord
> what's going to happen when companies figure out how to inject ads into
... everything and everywhere eyes are looking?
In this sense, it has been adversarial from the start.
sayYayToLife
[dead]
Aurornis
The ads are in the free tier and the new ad-supported $8/month plan.
Every time this comes up there are comments assuming that ads are being injected into the normal plans, but these are for the free tier and the new Go plan which warns you that it includes ads when you sign up.
ceejayoz
Cable TV was once ad free. So was Netflix. Companies just can’t help themselves.
DonsDiscountGas
Netflix is still ad free for the right price. It's not like companies have some fetish for advertising specifically, it's that it brings in money. Often more money than a user would be willing to pay for the service.
JamesSwift
And the new "right price" is being increased again to a new "right price" just like it was months ago.
pbasista
> Every time this comes up there are comments assuming that ads are being injected into the normal plans
No. The distinction between the unpaid vs. cheap vs. expensive plans is irrelevant here.
The main controversial point about this topic is the inclusion of ads in the responses of an LLM-backed AI tool. It does not matter at all in which tier it occurs.
The discussion is about the fact that it occurs in the first place.
Aurornis
> The main controversial point about this topic is to include ads in the output of an LLM-backed AI tool responses.
Except the article very clearly explains that the ads are separate from the AI responses.
pbasista
> the ads are separate from the AI responses
Ok. But that is in my opinion a distinction without a difference.
It does not matter whether the ads are built by the AI itself and seamlessly embedded into the regular responses. Or just made separately and placed into the same window as the AI's output.
The bulk of the controversies in relation to doing this are still roughly the same, whatever the origin of the ads may be.
catcowcostume
Until next quarter earnings, when ads become a feature in more expensive plans.
darepublic
Wouldn't it require a lot of training to blend ads into the convo without it being too obvious / messing up the results?
WD-42
Since they are served as distinct events, I would think they should be easy to block.
Once the ads are injected directly into the main response is when things get interesting.
kardos
> Once the ads are injected directly into the main response is when things get interesting.
This would be where you post-process the LLM response with a second LLM to remove the ad..
naruhodo
I think it will be difficult to remove bias when you ask a model to compare alternative products. The model will simply lie, as with a biased human opinion, and you will need to consult multiple models for a diversity of opinion, presumably using a "trusted" model to fuse the results. Anonymity will be a key tool in reducing the model's ability to engage in algorithmic pricing.
Super easy. Barely an inconvenience.
Terr_
Not only that, but the underlying model may be tuned to omit mentions or data about competitors entirely, an absence which can't easily be filtered.
Extortionate economic shadowbanning, here we come.
normie3000
> will simply lie, as with a biased human opinion
Is this really how bias works?
michaelt
Writers have many options to deceive their audience without outright lying.
If a journalist is given an all-expenses-paid trip to an exotic location for the launch of a new product, and they review the product and say it's great - are they lying?
If a reviewer writes an article comparing certain types of product, but their review only includes products where affiliate links pay a 10% commission - are they lying?
If a journalist is vaguely aware of rumours about newsworthy, under-reported Event X but also that their publication has a big sponsorship deal with folks that Event X makes look bad, and they don't investigate the rumours or report on them - are they lying?
If a reviewer hears a claim from X, and they report the claim credulously, without adding the context that X has a history making false claims - are they lying?
inetknght
Oh no. Definitely not. Humans would never just lie. They always lie only if they're biased. That is, after all, the definition of how a bias works.
/s
naruhodo
I'm using bias to mean hidden motivations to the benefit of other parties. Feel free to substitute a better word.
EDIT: actually I'm really not sure what hairs we're trying to split here. I see bias as a departure from objectivity. It can be conscious or unconscious, but when someone is selling something, it's frequently conscious and self-serving, and I believe that's referred to as a lie.
tempest_
This is already how email works in the corporate world.
A writes email with chatgpt to B.
B sees big blob of text and summarizes email with chatgpt.
Adding an LLM in the middle is just the next step.
torben-friis
It's like one of those memes about the worst possible date picker, except for a communication system.
devmor
Then you just end up in an arms race that ultimately leads to photocopy-of-a-photocopy output.
mihaaly
... and replace it with two.
ihsw
[dead]
lmbbuchodi
you can block these URLs: ||bzrcdn.openai.com^ and ||bzr.openai.com^. It won't blanket-block everything, but it will significantly reduce the telemetry collected.
nazcan
And that's why you gotta just use one domain. Or mix ads and important content on one domain.
sheiyei
No, wrong lesson. That's why you use UBlock Origin.
TZubiri
Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.
michaelt
> Blocking transparent ads is not a good idea. The consequence is that you will be fed opaque ads.
Doesn't history show us you just get both?
You pay to get into the movies, then they show you adverts before the film, then the film includes paid product placement of cars, computers, phones, food, etc.
You watch youtube ads, to see a video containing a sponsored ad read, where a guy is woodworking using branded tools he was given for free.
You search on Google for reviews and see search ads, on your way to a review article surrounded by ads, and the review is full of affiliate links.
otabdeveloper4
> Doesn't history show us you just get both?
No. "Opaque ads" are usually heavily regulated out of existence by government legislation.
cj
Product placement in TV shows / movies is a $30 billion industry.
They're opaque, and not regulated out of existence.
They're so opaque that I'd wager 50%+ of people aren't aware it's happening.
(Not fact checked) My favorite is Apple's "no villain" rule, where protagonists are allowed to use iPhone in movies, while antagonists are not.
otabdeveloper4
> Product placement in TV shows / movies
...is a big exception in the advertising industry, not the norm.
TZubiri
Very common in netflix shows btw. They know they will be pirated, so they do product placement to monetize anyways.
So pirates get their wish of no ads granted, and they get propaganda instead.
saghm
I don't buy this premise. Nothing stops a company from trying to hide ads in the first place, and plenty of them do. Ad blockers for web content have been a thing for years, and using an ad blocker has continued to be strictly a better experience regardless of how many "organic" ads are present on a page.
TZubiri
[flagged]
tomhow
You've been asked before to make your points without swipes. Please make the effort to observe the guidelines. The very reason this is a place people want to discuss things is that we have the guidelines and others make the effort to observe them.
saghm
> 1- No ads. 2- Transparent ads. 3- Opaque ads.
> By removing option 2, you only leave options 1 and 3.
My point is that these are not exclusive options, and in practice, most companies will not feel constrained to only pick one of them.
> This isn't complex either, the only reason you don't get it is because you don't want to get it, you want things that are gratis without paying for them, and you want the free things to be given to you on your terms, and you don't want to be guilty about it. It's easier to think of yourself as righteous than to recognize that you want to be a leech.
No, I'm arguing that because companies in practice are going to use multiple of these when they can, my attempts to influence them by keeping the door open on 2 will not have any effect whatsoever, so I might as well close the door on it.
RobotToaster
You're assuming 2 and 3 are mutually exclusive.
Even if they have 2, they can still make even more money by also including 3, so almost certainly will do so.
TZubiri
Not necessarily mutually exclusive, no. Mathematically I'd say they are inversely proportional; hard to disagree with that, no?
saghm
I think you're overestimating the marginal cost of doing one of them after you've already done the other. If a company has a bunch of ad-buying customers and a bunch of transparent ads, putting together some work to make a bunch of opaque ones for the exact same customers is not necessarily going to be that hard. I don't see how you can claim that it's mathematically guaranteed that the number of customers who decide that they'd pay more to have both is not enough to make that work turn a profit.
TZubiri
You are right on two counts.
First that i should have said correlated rather than proportional.
Second that even if there's an inverse influence, there's also a positive influence between both forms of advertisement.
But in terms of proportion I still maintain that if you eliminate one type of advertisement the ratio will become 100% of the other, which is as undeniable as it is tautological.
lelandbatey
Ah yes, the classic "my business plan is your moral problem; you owe me your eyes on my ads because I'm the idiot giving things away for free."
People don't want ads. You imply that "if you accept ads then things will be free", but they will not. Never accept ads. Not for a free service, certainly not in a paid product. Ads exist to enable leeching in both directions in exchange for what ends up being nearly mind control. But it is two-way leeching - companies benefit without the friction of explicit payment, consumers get a service without explicitly paying via money. The downside is that neither can stop the bad incentives motivating bad actions from the other side.
Ads are a deal with the devil, and rejecting them outright is allowed via that deal, just as companies can withdraw their free service. It cuts both ways.
TZubiri
The user can choose not to use the service instead of breaking the terms of service, no?
Presumably you wouldn't even want to use the service since it's so evil, so we probably agree that people ideally shouldn't use adblockers.
encom
[flagged]
tomhow
Please don't reply to a bad comment with another bad comment. It just makes things worse.
encom
Oh no. Will you be adjusting my daily allotted posts further down? Yea I know about your shadow ban. Classy.
tomhow
Please don't sneer. My comment was about as mild an ask as we ever make. We won't be adjusting anything, and indeed we'd happily turn the rate limiter off if you just respect the guidelines. This is only a place where people want to participate because we have guidelines and people make the effort to observe them. There's no reason why you couldn't be one of them. We want this to be a place where the full range of positions and perspectives can be represented and discussed, but snark and contempt just drags the place down, regardless of anyone's ideology.
pbasista
Your implication that "you will be fed" other ads if you block the main ones is unsubstantiated. But even if it was true, it does not matter. Because the so-called "opaque" ads can and in my opinion should be blocked as well.
I think that in general blocking all ads is always a good idea.
The reason is that there is no negative consequence in doing so. A person has absolutely no obligation, not even an implied one, to watch or otherwise consume any ad. I think that as long as there are ways to remove or block ads, people should use them.
That being said, if the companies wish to intertwine their products with ads that are indistinguishable from the actual content and therefore unblockable, it is okay. They have the right to do that if they want.
But, in the same fashion, the customers have every right to turn away from all such products. And never consider using them ever again.
TZubiri
>Because the so-called "opaque" ads can and in my opinion should be blocked as well
You can't; that's one of the main purposes. Instead of being marked and delimited, the ads are woven into the content. Even if you could detect them (as a plugin or a volunteer moderator), removing them would potentially corrupt the product. It may be part of a joke or of the plot itself.
estimator7292
What possible reason could they have to not always run both? It would make zero sense to leave that money on the table
TZubiri
It's simpler to do one thing than to do two. You make a choice and you do that.
Could they be doing opaque ads right now and we wouldn't know? It's possible, that will probably eventually come to light and it might have legal consequences, but sure it's possible.
But it's not a given, and your logic of "it would make zero sense to leave money on the table" is certainly not a QED, it's absolute reductionism.
Timon3
It's even simpler to do zero things than to do one thing, so we should expect them not to introduce any ads, right?
"Simplicity" isn't a relevant factor.
duskdozer
It sounds rational then to block as many non-opaque ads as possible, because that isn't their preferred choice.
WD-42
I’m not obligated to look at or listen to anything on my own devices, much less in my own home.
TZubiri
Right, and you are not obligated to use ChatGPT either. And ChatGPT is not obligated to serve you if you bypass their ToS.
Works out for everyone, no?
WD-42
Totally. I'll get by fine without ChatGPT, and I guarantee I can go without whatever crappy products the ad network you shill for partners with. In the meantime I'll keep blocking every ad I can.
In fact I’d be better off. It’s interesting that the most ad laden products are often the worst for you: YouTube, social media. Whereas the good uses of time: books, art, are ad free.
TZubiri
I'm a shill, not of any ad network, but of the idea of respecting contracts and law, even those we don't read and click on an "I agree" box.
Consider that Stallman didn't encourage followers to pirate Microsoft; he encouraged boycotting and made his own alternative software and contracts.
Sounds like a better way to go about it no? Why would you want to be a part of a club that doesn't want you as a member, and to the extent it does, it develops unmarked unskippable ads mixed with content so as to keep monetizing from adblocking users.
So yeah, I'm anti-adblock but not pro-ads. I don't do freemium or ads myself when selling; I do use free dependencies, but donor-based ones. I guess I'd be lawful neutral? Who would have thunk there's more than 2 alignments!
mvvl
"Ads don’t influence responses" - they just arrive in the same payload, measured with four layers of attribution and politely pretend to be coincidences.
Schrodinger’s monetization: completely separate, yet somehow there.
solarkraft
It’s interesting what optimizations this might spawn.
They may not be tweaking the responses for a specific advertisement just yet, but what if they steer the model towards more "ad friendly" responses?
rrgok
Imagine people like Sam Altman having access to frontier models without any restrictions, letting them plot strategies toward their goals over a timespan so long you don't even realize when it began.
That's scary. They could fight for censored models for the masses, but not for themselves.
adammarples
It would be funny to find out that OpenAI's flailing strategy so far had been the result of ChatGPT suggestions.
Razengan
Maybe ChatGPT wants OpenAI to fail so someone else can pick it up
Like how the ring slipped off Gollum's finger...
jgalt212
> That's scary. They could fight for censored model for the mass, not for them.
Not as scary as the AI Slop underlying Claude Code.
tencentshill
[dead]
benleejamin
I'd always thought that ChatGPT ads would be indistinguishable from actual content.
ticulatedspline
I think that's where they want to be. Feels like everyone knows it too: the long-term expectation is basically being able to buy ad words and have LLMs lean responses towards whatever people bought.
Seems the playing field is a bit too open though, models are more fungible than the companies would hope so most of the current moat is brand based and seems like they're not ready to go all "Black Mirror" on us just yet.
irjustin
This would be a breach of trust; short term it would work great, but long term it's too detrimental.
The same thing could've been said about search results, so at least that part is still "safe".
SchemaLoad
Long term all of the major LLM platforms will have invisible ads, influences, and propaganda woven into the content. The temptation will be irresistible for these companies.
doginasuit
I'd be surprised if product placement isn't already basically at play. Charging companies for including/prioritizing their documentation in the training data, for example. Thankfully LLMs are terrible at the subtlety it would require for a direct marketing campaign.
bix6
Oh, you think trust matters? This is capitalism, not trustism.
saghm
Well it's sure not "anti-trustism" in recent years...
PradeetPatel
Long term retention is built on brand trust and usability, then ensh*ttification happens.
nalekberov
No, this is late stage capitalism without regulation.
JumpCrisscross
> always thought that ChatGPT ads would be indistinguishable from actual content
Remember when we got upset that Google was putting ads into image search [1]?
[1] http://www.ryanspoon.com/blog/2008/12/14/google-image-search... 2008
Brystephor
I work at a company that mainly makes money off ads. There's no doubt in my mind that the end goal is to make their ads blend into organic content and become indistinguishable. Typically that results in positive A/B metrics. It's also a reason why influencer-driven ads perform well: they seem more organic.
senectus1
I'm pretty sure that will be an eventual evolution of the product. The business model can't sustain itself as it is at the moment; eventually ChatGPT won't be the product... we the users will be.
phailhaus
That was the fearmongering, which made no sense because advertisers can't put a dollar value on "the AI will kind of sort of mention you", and because every conversation would need an ad. If ChatGPT always snuck in a brand mention even on the simplest questions, everyone would hate it.
Ad technology is really old. They're just going to use the same proven tech that has a track record of creating billionaires: intersperse content with sponsored blocks.
acdha
I don't think that's a fair dismissal: you see ads all over media websites because the rates have been plummeting as consumers tune out ads. One main reason everyone tunes them out is that ads are so obtrusive and repetitive, and that's exactly what LLMs change. I'm sure we'll see regular ads on AI apps because the companies have trillions of dollars to repay, but advertisers would pay a lot more for openings where they aren't _forcing_ their message as a distraction and are instead able to insert it fairly naturally into a context where the user is engaged.
The entire history of advertising before the web was companies estimating a dollar value on “awareness” when they couldn't measure direct referrals and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.
blackjack_
It is one of the eternal lessons: all tech business plans eventually lead to serving ads. At least until we ban pixels / 3rd-party tracking.
netcan
> All tech business plans eventually lead to serving ads
IDK if this is true.
The boulevard of dreams is full of failed/misguided ad-based business plans. Contempt for the business model is sometimes the reason. An implicit assumption that all you need for success is traffic and a willingness to dirty yourself.
There are only a handful of success stories. Most involved a pretty deliberate and tenacious attempt. Success typically involves some very specific and strategic positioning: data, intent, scale.
No one but Google had Google's scale for search ads. 5-10% of the market just isn't enough. You do need tracking, but the model works OK even without much targeting. Intent is built in, and that makes up for targeting. But the scale required for viability is very high.
Facebook ads didn't work until (a) they had pushed the envelope on targeting (to make up for lacking intent) and (b) scale was massive. Bing, reddit, etc.... They never had good ad businesses.
infinite_spin
I see OpenAI making a significantly larger amount from defense contracts than from advertisements pumped into chats. So I wonder whose bright idea it was to create a public perception risk.
Larrikin
Every single MBA can show that revenue was up for at least one quarter after they introduced ads. They do not care what happens after that if they can plan their career around it.
saghm
I wish I had the optimism that you did about companies being willing to stop at just doing one dubious thing or another for money when there's nothing stopping them from doing both.
peddling-brink
Maybe the negative press from ads is better than the negative press from powering murderbots?
tayo42
Bad press from a contract like that happens once and everyone forgets. Ads are in your face every time.
peddling-brink
"OpenAI Powered Drone Destroys Elementary School, Hundreds of Children Dead" might last a while.
Enginerrrd
I mean Palantir’s targeting product led to EXACTLY that outcome and it seems to have been largely forgotten already, and they managed to avoid a lot of bad press about it.
dopa42365
There's no evidence that it wasn't one of those Iranian generic Tomahawk™ missiles!
When Germany last cooked 150 civilians we also investigated ourselves and found nothing wrong (could happen to anyone, really), but at least some minister had the decency to retire afterwards.
peddling-brink
Yes but that's "normal", _we_ all know that palantir is evil, so this is _normal_ for them. My extended family has never heard of palantir, and frankly this is the first time I've heard of them being linked to the horrific tragedy in Iran[0].
My entire extended family uses chatgpt. It would be a much juicier news wave if they were responsible.
[0] https://www.theguardian.com/news/2026/mar/26/ai-got-the-blam...
babelfish
Source?
eleveriven
The most interesting part to me is not that ads exist, but how invisible the boundary becomes
didip
So the news about OpenAI's demise is real. They can't sustain themselves without ads.
boringg
Never in any world were any of the top AI labs not going to sustain themselves with ads. It has always been a timing issue.
Even a cut of every sale made on the site, plus subscription revenue, doesn't come close.
saghm
Even if it wasn't necessary for their survival, it's hard to imagine a world where they wouldn't try to do it anyways. I'm not someone who buys into the idea that companies are obligated to maximize profits at the expense of all else, but I do think that in the absence of other factors (e.g. regulation) it's where pretty much every company will end up.
chrisweekly
"the idea that companies are obligated to maximize profits at the expense of all else"
!! That is literally the definition of legally-binding fiduciary responsibility for publicly-traded corporations. There are exceptions (PBCs, B-Corps) but they're rare.
saghm
Please cite your source for this. Everything I've ever read on the topic indicates that this is a vast oversimplification.
mafuy
This is a completely stupid take and I have no idea why so many people repeat it. This responsibility just means you have to document your work understandably and have a somewhat sensible reason for decisions. It does not at all force you into greed.
hattmall
It's really not though.
SubjectToChange
They can’t be hemorrhaging cash when they IPO.
sayYayToLife
[dead]
keyle
Can't wait for "watch this ad for 90s to use xxhigh on your next prompt!"
holotherapper
The schema is literally named single_advertiser_ad_unit. The single_ prefix is doing all the foreshadowing you need.
echrisinger
As someone who works in a data domain, I'd say it's unlikely the ads are served on a single-conversation basis in the near future, if they even are today. Any modern data org, advertising included, optimizes conversion metrics (either increasing profit via higher CPI or increasing revenue by expanding the advertising TAM, presumably).
Introducing context beyond the immediate conversation history will improve conversion rates and allow advertising targeted at wider topics or higher-CPI topics (like financial products), hence it's inevitable.
jgalt212
> Fernet's first nine bytes are public: version byte 0x80 plus an 8-byte big-endian Unix timestamp. So the mint time of any of these tokens is recoverable without OpenAI's key:
This bit reminded me of efforts to decode Google's gclid parameter.
https://deedpolloffice.com/blog/articles/decoding-gclid-para...
djmips
And it begins.
shevy-java
They must be desperate to push ads onto people. I am living a mostly ad-free life, e.g. uBlock Origin and whatnot, so using something like AdChatGPT would not make any sense. One can sense how the money flow leads them to design a system people depend on - and then they cram ads down on those people. Very unethical.
fajmccain
Nothing in this article says that the agent talking to you is isolated from the ad tag. The problem is that even if OpenAI goes to great lengths to prevent your chatbot from knowing about the banner ad's content (and therefore recommending it!), people will ASSUME that it does.
sdeframond
Does this mean an adblocker could man-in-the-middle at the browser layer and strip the "single_advertiser_ad_unit" from the server responses? But then of course OpenAI would change its system to evade this... and so on.
bhagyeshsp
That's an interesting idea. I think ultimately, all AI providers will serve ads with a standard protocol. Something like Universal Commerce Protocol that Google launched a few months ago: https://developers.googleblog.com/under-the-hood-universal-c...
And the browsers will protect the protocol somehow.
BoredPositron
I don't get what's wrong with charging for your product. Like, get rid of the free tier and make a small tier with an easy-to-serve model for like 5 bucks. Is it still the DAU rage of the 2010s that's driving the money burning?
teaearlgraycold
How do you pick up new paying users without letting people use the service for free for a while first? Freemium is popular because it works well.
yoyohello13
Free trial? Demo?
jonah
I was looking to see if BZR referred to a 3rd-party ad network. I didn't find anything, but apparently someone has replicated OAI's system and you can insert it into your own LLM.
GH: system32miro/ai-ads-engine
agentbc9000
Google was built on ads and it wasn't bad for them; it's not some taboo, forbidden word or business model. As power users it's not for us, but for my mom it will work.
tossandthrow
Ads should be a taboo word and business model.
They take people's attention, make people fat and anxious, and generally make the world a worse place.
Everybody using ads as part of their business model should feel bad.
As an extension of this, there are no moral issues with using ad blockers, despite what the businesses living off ads try to tell you.
pickleRick243
I agree. Also, LinkedIn and CVs shouldn't exist. Self-promotion is gauche.
avdelazeri
I don't think this is the slam dunk you think this is. LinkedIn's existence is, in fact, a net negative for the human race.
skywhopper
Bad for them how? I would argue it has destroyed the value of Google as a tool. Sure it makes them tens of billions of dollars a quarter, but it has ruined the service in the end.
kakacik
Seems like people care about paychecks a bit more than some lofty goals and service to others.
agentbc9000
[flagged]
tornikeo
Ads fund the "free" internet. Like it or not, that's the price of the "free" compute. I only hope OpenAI won't enshittify paid offerings just like Anthropic did.
danny_codes
Not so, Wikipedia is perfectly free.
lionkor
Can't wait to see how the next election(s) turn out--I'm unsure that a properly well funded campaign would skip the opportunity.
kramit1288
I think ads won't impact the results of inference or introduce any bias. Ads will be injected outside of LLM inference.
dankwizard
Really well written, technical post. Good read.
mock-possum
Not to me they don’t, cause I canceled my account and stopped using their products when they made the announcement.
Aurornis
They don't serve them to me, either, because I don't use GPT-5.3 on the free tier or Go plan where these ads show up.
sayYayToLife
[dead]
EcommerceFlow
If highly targeted/tailored LLM ads on free accounts aren’t good enough for HN, are any ads acceptable?
Let’s be reasonable.
dml2135
I think it’s plenty reasonable to say that advertising is toxic and reject it as a business model entirely.
duskdozer
Can you restate this? I don't understand.
misbau
Are the ads only for those on the free tier? I don't recall seeing any on Pro yet.
singingtoday
I don't like anything about this.
arjunthazhath
The Claude ad mocking ChatGPT ads is what comes to mind, hahaha.
quantummagic
So, we need a lightweight local LLM, that is tuned to remove ads from online LLM results.
gxs
This is gross
It feels like we’ve been in the golden age and the window is coming to a close
Let the enshittification begin, I guess
dannyw
How do you expect the spend & COGS for free LLM inference to be funded? For users who don't want to pay, or maybe can't pay?
derektank
Perhaps it’s a glib and easy thing to say, but after a teaser period, I would simply not offer free LLM inference. Agreeing to serve ads just completely re-aligns your interests away from providing the best possible user experience to something else entirely.
infinite_spin
From things like defense/private contracts
e.g. colleges pay for institutional subscriptions
2ndorderthought
The average person doesn't benefit from defense contracts ... Like ever.
IX-103
The average person is slightly more female than male and has 2.1 children, but they do benefit from defense contracts since it makes up a small percentage of their salary.
2ndorderthought
You are a fun person. We should be friends
iammrpayments
It has begun ever since they nerfed chatgpt4 before releasing 4o
2ndorderthought
In the past month local models have been ramping up in a major way, while the name-brand providers have upped prices, gone offline randomly, and started doing slimier and slimier things.
I really think the future is local compute. Or at least self hosted models.
SchemaLoad
The hosted ones still have the advantage of being able to search the internet for live info rather than being limited to a knowledge cut off date.
gbear605
I’m not sure why a model needs to be hosted in order to make network calls?
hansvm
Is there a library of good tools for LLMs to call? I have to imagine the bot-detection avoidance mechanisms are a major engineering effort and not likely to work out of the box with a simple harness and random local LLM.
ossa-ma
Even the hosted ones are blocked from searching certain sites, for example Claude is banned from searching Reddit:
`Error: "The following domains are not accessible to our user agent: ['reddit.com']."`
wyre
Tavily, Exa, Firecrawl, Perplexity, and Linkup are all tools for agents to search the web.
I’ve been building a harness the past few months, and it supports them all out of the box with an API key.
goosejuice
Kagi also has an API. People who hate ads are probably the same folk that should be paying for Kagi. That's the sane alternative world where companies respect their users.
wyre
Oh, you got me so excited. I've had a Kagi sub for 3 years, but their API is still in closed beta. I guess I could (and should reach out and ask for access).
lukewarm707
be warned though:
firecrawl: "if you post content or intellectual property within the Services or give us Feedback about the Services, you hereby grant to us a worldwide, irrevocable, non-exclusive, royalty-free license to use, reproduce, modify, publish, translate and distribute any content that you submit in any form [...] You also grant to us the right to sub-license these rights"
exa: "Query Data is used to improve our products and technology, including by training and fine-tuning models that power our Services"
perplexity: "Perplexity may retain, copy, distribute and otherwise use Search Data for its lawful business purposes, including the improvement and development of products and services."
linkup: "Client grants Linkup a worldwide right to use, reproduce and modify the Client Data, including prompts, for the purposes of providing, maintaining, developing, training"
tavily: "we may use certain portions of your query data to improve our responses to future queries"..."We may share your query data with third-party search index providers (e.g., Google)"
gbear605
If your volume is low enough, it should be pretty fine. It can just piggy back onto your personal browser cookies for Cloudflare.
chrisweekly
That's not how it works. Whether local or hosted, every modern model has a cutoff date for its training data, and can be leveraged by agents / harnesses / tools to fetch context from the internet or wherever.
darepublic
Local ones that support tool use can do the same
eightysixfour
You can do that locally too!
CSMastermind
What's the rough equivalent of a local model? Are we talking GPT-4?
2ndorderthought
Qwen 3.6, which was released this month, is large but still a smaller model. Supposedly it's at about Sonnet level when configured correctly. It can be run on commodity hardware without purchasing a data center. https://www.reddit.com/r/LocalLLaMA/comments/1so1533/qwen36_...
Then there are mid-size ones, which require multiple GPUs and are comparable to GPT's latest flagships.
Then there is Kimi 2.6, which is a monster that is beating Opus in some benchmarks. https://www.reddit.com/r/LocalLLaMA/comments/1sr8p49/kimi_k2...
It's basically whatever you can afford. Any trash-heap laptop can run code-autocomplete models locally, no problem. The rest require some level of investment: an idle gaming PC, or a serious investment.
Terretta
Depends on your VRAM or "unified" memory for how smart it is, and CPU/GPU for how quick it is.
128GB of RAM? Sure, the early to mid 4s releases, except maybe 4o. And on an M5 Max, about the same speed.
I wouldn't really bother under 64GB (meaning 32GB or less) except for entertainment value (chats, summaries, tasky read-only agent things).
kay_o
GLM 5.1 and DeepSeek 4 are acceptable, but the hardware and energy costs are such that, depending on your use case, you may as well just purchase tokens. They get useless and stupid rapidly if you quantize them enough to run on a single 16-24GB GPU.
rnxrx
The arc of the technological universe is short, but it bends toward enshittification.
guluarte
I've seen ChatGPT suggest more Amazon products to me lately.
sayYayToLife
[dead]
goobatrooba
Gemini and Copilot are already full of ads, pushing the companies' own services. I guess the only difference here is that OpenAI has nothing else to push, so they have to use external ads.
ulimn
Do you have some source I could read on this? I don't really use Gemini but I would be interested to know more.
FeteCommuniste
I've been using Gemini a couple months and haven't noticed it pushing Google products at all.
I did ask it some scientific questions about gemstones and it seemed to want me to buy sapphires, lol. Sorry, Google, that's outside my budget.
Havoc
Haven’t seen any ads in them, though I’m on the paid versions.
avaer
Remember that ads are the "last resort" for OpenAI, and they're doing this despite the fact that it's "uniquely unsettling", according to Sam.
Was he lying, or has OpenAI given up hope that this train wreck works economically without enshittification? Neither option is good, but I don't really see a third.
Aurornis
The ads are only for the free and $8/month plans. They basically added an ad-supported super discount level that you can ignore if you’re paying for the normal plans.
RussianCow
But the fact that they've added an ad-supported tier this early into their life as a company means they're desperate for revenue. You start inserting ads when you're optimizing for profit, not when you're still growing. It took how long for Netflix to introduce an ad-supported plan?
milkshakes
when did netflix offer a free tier?
RussianCow
I didn't say free. They've had a highly discounted, ad-supported plan for a few years now. It's relevant because OpenAI also introduced a cheaper monthly plan that includes ads.
milkshakes
openai also has a free plan, which is the one used by >90% of its users. the cheaper monthly plan just provides higher limits.
chrisweekly
options 1 and 2 are not mutually exclusive
yoyohello13
Here we go again. Imagine if we put as much engineering effort toward actual things that help people, but more ads it is, as always. This is proof AGI doesn’t exist. If it did, it could come up with a better business model than more fucking ads.
bicepjai
It’s insane that ads are the only way to survive in capitalism. Every industry ends here.
uriahlight
Let the enshittification commence!
tithos
One more reason not to use ChatGPT
Daffrin
[flagged]
vicchenai
[dead]
lindsayb82
[flagged]
renewiltord
Interesting: no bidding flow, entirely first-party and contextual.
danilocesar
[flagged]
jesse_dot_id
That's cool, I'll never see them.
Less than two years ago, Sam Altman said
> I kind of think of ads as a last resort for us for a business model. I would do it if it meant that was the only way to get everybody in the world access to great services, but if we can find something that doesn't do that, I'd prefer that.
So, is this OpenAI announcing they're strapped for cash?