Ask HN: What will happen as AI costs increase?

11 points
16 hours ago
by MetaWhirledPeas

Comments


Tony_Delco

One thing I rarely see discussed is that AI cost is not just dollars per token.

There’s also latency, dependency on external infrastructure, privacy and compliance concerns, energy usage, and just the general predictability of the system itself.

My guess is that this will gradually push a lot of companies toward more hybrid architectures over time. Small or local models are probably good enough for things like filtering, routing or repetitive high volume tasks, while frontier models get reserved for the places where the quality jump actually justifies the added cost and complexity.

As useful as frontier models are, using them for absolutely everything sometimes reminds me of using a distributed system for problems that could have been solved locally with something much simpler.

I wouldn’t be surprised if, in many real world cases, a fast specialized system plus a smaller model ends up being the more practical and economical setup overall.
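The hybrid setup described above can be sketched roughly like this: a cheap local classifier handles routine inputs, and only low-confidence cases get escalated to a frontier model. This is a minimal illustration with stand-in stubs for both models (the phrases, labels, and threshold are all invented for the example, not from any real system):

```python
# Hedged sketch of a hybrid router: a small local model handles
# routine, high-volume inputs; a frontier model gets the hard cases.
# Both model calls below are stubs, not real API calls.

def local_classify(text: str) -> tuple[str, float]:
    """Stub for a small local model: returns (label, confidence)."""
    routine = {"refund request": 0.95, "password reset": 0.97}
    for phrase, conf in routine.items():
        if phrase in text.lower():
            return phrase, conf
    return "unknown", 0.30

def frontier_answer(text: str) -> str:
    """Stub for an expensive frontier-model call."""
    return f"[frontier] handled: {text}"

def route(text: str, threshold: float = 0.8) -> str:
    label, confidence = local_classify(text)
    if confidence >= threshold:
        return f"[local] {label}"   # cheap, fast path
    return frontier_answer(text)    # quality path, reserved for hard cases

print(route("Customer wants a refund request processed"))
print(route("Draft a migration plan for our billing system"))
```

The interesting design question is where to set the threshold: too low and quality suffers on ambiguous inputs, too high and the expensive model ends up doing the repetitive work anyway.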

4 hours ago

markus_zhang

I think it’s going to be like infrastructure: eventually they will reach a certain level, maybe like electricity.

2 hours ago

ipaddr

Fewer people will use the frontier models, and those who do will pay more. Progress will slow. OpenAI will sell your chat data. You will get an AI tax. Companies will use less of it.

Hopefully new ways to deliver similar quality will be discovered.

Stock market will pop.

Prices will go up for people inside the moat.

13 hours ago

scorpioxy

What always happens: a market correction followed by a return to a reasonable state, until the next bubble, of course.

In my opinion, LLMs are useful for many things but not anything and everything, and definitely not in the way the boosters are claiming. This is not a popular opinion when you are inside the bubble or have something to gain from it. So when there's a downturn, things will hopefully stabilize with LLMs being another tool that can be used to automate certain things. It feels crazy saying this these days; I've been told I'm out of touch for thinking this way, and who knows, maybe that's true.

15 hours ago

kaant

Sometimes I do wonder about this. Some companies might get people used to AI first and then raise prices later, which could put many of us in a difficult position. But I also think Linux came out in a similar kind of environment, and in the end the community will find a way through it.

8 hours ago

CM30

For a lot of companies, probably shut down or drastically limit their AI usage due to rising costs. A small or medium sized business dependent on ever-growing AI expenses is in a really bad position, and could well go under.

I heard a few companies ended up going back to hiring actual employees for work that was previously done by LLMs, so there's a chance we could see some more of that too. Might also see a few try to make it work with outdated or local models instead.

15 hours ago

MehdiBelkacem

Token anxiety is real. What worked for me: prompt caching on fixed system prompts cut my Anthropic bill by ~60% overnight. Most devs don't realize cache reads cost a tenth of regular input tokens on Claude (writes cost 25% more, but you only pay that once per prompt).

Local models for classification/routing + frontier only for generation is the other move — but the latency tradeoff is real if you're in a user-facing flow.
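A back-of-envelope model of the savings, assuming Anthropic's published prompt-caching multipliers (1.25x base input price for a cache write, 0.10x for a cache read); the call counts, token counts, and $/MTok rate in the example are made up for illustration:

```python
# Rough cost model for prompt caching, not a billing tool.
# Assumed multipliers: cache write = 1.25x base input price,
# cache read = 0.10x (per Anthropic's published pricing).

def cost_without_cache(n_calls: int, prompt_tokens: int,
                       price_per_token: float) -> float:
    # Every call pays full input price for the repeated system prompt.
    return n_calls * prompt_tokens * price_per_token

def cost_with_cache(n_calls: int, prompt_tokens: int,
                    price_per_token: float,
                    write_mult: float = 1.25,
                    read_mult: float = 0.10) -> float:
    # First call writes the cache; the rest read from it.
    first = prompt_tokens * price_per_token * write_mult
    rest = (n_calls - 1) * prompt_tokens * price_per_token * read_mult
    return first + rest

# Example: 100 calls reusing a 2,000-token system prompt at $3/MTok.
plain = cost_without_cache(100, 2000, 3e-6)
cached = cost_with_cache(100, 2000, 3e-6)
print(f"uncached ${plain:.4f} vs cached ${cached:.4f}")
```

At that volume the cached path comes out well under half the uncached cost, consistent with the ~60% figure above; the break-even depends on how often the cached prefix is actually reused before it expires.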

11 hours ago

OccamsMirror

How did you do that?

10 hours ago

atleastoptimal

Prices are going down. Just look at open source models: you can run the equivalent of a SOTA model from 8 months ago on your laptop.

14 hours ago

B_Nemade

Most people will stop paying for the frontier models and will look for small models which are optimised for certain tasks.

9 hours ago

krapp

What do you think will happen? How does supply and demand work? Practically every business and government in existence is existentially dependent on AI, speculation on it is the only thing keeping the world from global financial collapse. It's "too big to fail" at a scale that dwarfs the financial crisis of 2008.

You'll pay the fucking danegeld is what you'll do, and keep paying it, because you reorganized your entire existence around and mortgaged your future on a closed proprietary third party service's business model that is now a single point of failure for our entire technological civilization, making its market value practically infinite.

That's a collective "you" there, by the way, not "you" personally.

15 hours ago

scorpioxy

Isn't it strange? You'd think there were some lessons learned from the 2008 crisis, but apparently not. It wasn't that long ago; you'd think it wouldn't be forgotten already.

15 hours ago

andrei_says_

The lesson is that if you're too big to fail, no laws apply to you and there is unlimited money to be made.

It has been learned very well.

The brazen violation of intellectual property was a precondition of making this technology useful. Taking the risk of breaking the law at this unprecedented scale was an informed decision made based on this very lesson.

15 hours ago

atleastoptimal

AI model prices are getting cheaper over time, per amount of capability.

13 hours ago

yulaow

opus 4.6 -> 4.7 did not respect this assumption

5 hours ago