Apple Research unearths a forgotten AI technique and uses it to generate images

82 points
3 days ago
by celias

Comments


bitpush

I find it fascinating that Apple-centric media sites are stretching so much to position the company in the AI race. The title is meant to suggest that Apple found something unique that other people missed, when the simplest explanation is that they started working on this a while back (it's a 2021 paper, after all) and just released it.

A more accurate headline would be: "Apple starts creating images using a 4-year-old technique."

21 minutes ago

kelseyfrog

Forgotten from like 2021? NVAE[1] was a great paper but maybe four years is long enough to be forgotten in the AI space? shrug

1. NVAE: A Deep Hierarchical Variational Autoencoder https://arxiv.org/pdf/2007.03898

2 hours ago

bbminner

Right, it is bizarre to read that someone "unearthed a forgotten AI technique" that you happened to have worked with/on when it was still hot. When did I become a fossil? :D

Also, if we're being nitpicky, diffusion model inference has been proven equivalent to (and is often used as) a particular normalizing flow (NF), so... shrug
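
The equivalence being alluded to is, I believe, the probability-flow ODE result from the score-SDE literature: every diffusion SDE has an associated deterministic ODE with the same marginal densities, and integrating that ODE is an invertible map, i.e. a continuous normalizing flow. A sketch of the standard form (symbols as usually defined, with $f$ the drift, $g$ the diffusion coefficient, and $\nabla_x \log p_t$ the score):

```latex
% Diffusion SDE:
%   dx = f(x, t)\,dt + g(t)\,dw
% Associated probability-flow ODE (same marginals p_t):
\frac{dx}{dt} = f(x, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_x \log p_t(x)
```

Running this ODE backward in time is sampling; running it forward recovers the latent, which is exactly the invertibility property of a flow.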

2 hours ago

nabla9

They are both variational inference, but Normalizing Flow (NF) is not VAE.
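
For readers unfamiliar with the distinction: a normalizing flow gives an exact log-likelihood via the change-of-variables formula, while a VAE only gives a lower bound (the ELBO). Schematically, with $f$ an invertible map to a base density $p_Z$ and $q(z \mid x)$ the VAE's approximate posterior:

```latex
% Normalizing flow: exact likelihood
\log p(x) = \log p_Z\!\big(f(x)\big) + \log \left| \det \frac{\partial f}{\partial x} \right|

% VAE: evidence lower bound (ELBO)
\log p(x) \geq \mathbb{E}_{q(z \mid x)}\big[\log p(x \mid z)\big] - \mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)
```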

an hour ago

imoverclocked

It’s pretty great that despite having large data centers capable of doing this kind of computation, Apple continues to make things work locally. I think there is a lot of value in being able to hold the entirety of a product in hand.

2 hours ago

xnx

Google has a family of local models too! https://ai.google.dev/gemma/docs

2 hours ago

coliveira

It's very convenient for Apple to do this: less expenses on costly AI chips, and more excuses to ask customers to buy their latest hardware.

an hour ago

nine_k

Users have to pay for the compute somehow. Maybe by paying for models run in datacenters. Maybe paying for hardware that's capable enough to run models locally.

41 minutes ago

Bootvis

I can upgrade to a bigger LLM I use through an API with one click. If it runs on my device, I need to buy a new phone.

4 minutes ago

b0a04gl

Flows make sense here not just for size but because they're fully invertible and deterministic. Imagine running the same generation on 3 iPhones: same output. It means Apple can more or less ensure the same input gives the same output across devices, chips, and runs, with no weird variance or sampling noise. Good for caching, testing, user trust, all that. Fits Apple's whole determinism DNA and predictable generation at scale.
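
To illustrate the invertibility/determinism point, here is a toy RealNVP-style affine coupling layer, the standard building block of many flows. This is a minimal sketch with made-up constants, not Apple's STARFlow; note that bit-identical outputs *across* chips additionally require identical floating-point arithmetic, which is not automatic.

```python
import math

# Toy affine coupling layer on a 2-D input (x1, x2).
# x1 passes through unchanged; x2 is scaled and shifted by
# functions of x1, so the transform is exactly invertible.

def forward(x1, x2, w=0.5, b=0.1):
    s = math.tanh(w * x1)          # log-scale, conditioned on x1
    t = b * x1                     # shift, conditioned on x1
    return x1, x2 * math.exp(s) + t

def inverse(y1, y2, w=0.5, b=0.1):
    s = math.tanh(w * y1)          # recomputable because y1 == x1
    t = b * y1
    return y1, (y2 - t) * math.exp(-s)

x = (1.25, -0.7)
y = forward(*x)

# Determinism: repeated evaluation gives identical output.
assert forward(*x) == y

# Invertibility: the inverse recovers the input (up to rounding).
x_back = inverse(*y)
assert all(abs(a - b) < 1e-12 for a, b in zip(x, x_back))
```

Stacking many such layers (alternating which half passes through) yields an expressive model that is still deterministic and exactly invertible end to end.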

22 minutes ago

MBCook

I wonder if it’s noticeably faster or slower than the common way on the same set of hardware.

2 hours ago

rfv6723

Apple AI team keeps going against the bitter lesson and focusing on small on-device models.

Let's see how this turns out in the long term.

3 hours ago

peepeepoopoo137

"""The bitter lesson""" is how you get the current swath of massively unprofitable AI companies that are competing with each other over who can lose money faster.

2 hours ago

furyofantares

I can't tell if you're perpetuating the myth that these companies are losing money on their paid offerings, or just overestimating how much money they lose on their free offerings.

an hour ago

sipjca

somewhat hard to say how the cards fall when the cost of 'intelligence' is coming down 1000x year over year while at the same time compute continues to scale. the bet should be made on both sides probably

2 hours ago

furyofantares

10x year over year, not 1000x, right? The 1000x is from this 10x observation having held for 3 years.
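
The arithmetic behind that correction: a 10x annual drop compounds multiplicatively, so three years of it gives a 1000x cumulative reduction, not 1000x per year.

```python
# A 10x cost drop per year, compounded over 3 years:
annual_factor = 10
years = 3
cumulative = annual_factor ** years
assert cumulative == 1000  # 10 * 10 * 10
```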

an hour ago

echelon

Edge compute would be clutch, but Apple feels a decade too early.

2 hours ago

nextaccountic

This subject is fascinating and the article is informative, but I wish HN had a button like "flag", but specific to articles that seem written by AI (at least the section "How STARFlow compares with OpenAI’s 4o image generator" sounds like it).

2 hours ago

Veen

It reads like the work of a professional writer who uses a handful of variant sentence structures and conventions to quickly write an article. That’s what professional writers are trained to do.

20 minutes ago

CharlesW

FWIW, you can always report any HN quality concerns to hn@ycombinator.com and it'll be reviewed promptly and fairly (IMO).

an hour ago