AI model for near-instant image creation on consumer-grade hardware

187 points
16 days ago
by giuliomagnifico

Comments


Sharlin

For those wondering, it's an adversarially distilled SDXL finetune, not a new base model.

16 days ago

throwaway314155

Thanks! This article is pretty heavy with PR bullshit.

16 days ago

dcreater

Typical university/science journalism, written by a layperson without sufficient industry knowledge or scientific expertise.

16 days ago

vidarh

My favorite test of image models:

Drawing of the inside of a cylinder.

That's usually bad enough. Then try to specify size, and specify things you want to place inside the cylinder relative to the specified size.

(e.g. try to approximate an O'Neill cylinder)

I love generative AI models, but they're really bad at this, and this one is no exception. Still, the speed makes it much easier to play around with prompt variations to see if I can get somewhere (I'm not getting anywhere...)

16 days ago

James_K

Careful how much you say that. I'm sure there are more than a few AI engineers willing to use some 3D graphics program to add a hundred thousand views of the insides of randomly generated shapes to the training set.

16 days ago

numpad0

That's okay. People will keep coming up with new edge cases and hallucinating reasons why AI training is unethical, until the results are up to their quality standards.

15 days ago

vidarh

Hey, if they want to special-case my pet test, then awesome. I'd like to be able to use them for this.

15 days ago

test6554

"I will make it legal"

16 days ago

jonplackett

I have a favourite test for LLMs that is also still surprisingly not passed by many:

You walk up to a glass door. It has 'push' written on it in mirror writing. What should you do, and why?

Very few can get it right, and even fewer can get it right and explain the right reason. They’ll start going on about how mirror writing is secret writing and "push" written backwards is code for "pull", rather than just that it’s a message for the person on the other side.

No version of Gemini has ever passed.
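
If you want to reproduce the test, a minimal harness along these lines works against any chat-completions API (the model name here is just an example; judging pass/fail is still manual):

    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    PROMPT = (
        "You walk up to a glass door. It has 'push' written on it "
        "in mirror writing. What should you do and why?"
    )

    resp = client.chat.completions.create(
        model="gpt-4",  # swap in whichever model you want to probe
        messages=[{"role": "user", "content": PROMPT}],
    )

    # Pass requires both the right answer (pull) and the right reason:
    # the sign is meant for the reader on the other side of the glass.
    print(resp.choices[0].message.content)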

16 days ago

m741

Here's Gemini's response, for what it's worth:

'You should pull the door.

Since the word "push" is written in mirror writing, it's intended to be read from the other side of the door. Therefore, the intended action for someone on your side of the door is the opposite of "push," which is "pull."'

Seems right to me.

15 days ago

jonplackett

Yeah, does seem right. Which version are you using, and any modified prompt? See my comment below for what I got out of the various Geminis.

15 days ago

ryankrage77

GPT-4 got it right first try for me, with a slightly modified prompt:

> Here's a simple logic puzzle: You walk up to a glass door. It has 'push' written on it in mirror writing. What should you do and why?

> ChatGPT said:

> If the word "push" is written in mirror writing on the glass door, it means the writing is reversed as if reflected in a mirror. When viewed correctly from the other side of the door, it would read "push" properly.

> This implies that you are meant to pull the door from your side, because the proper "push" instruction is for someone on the other side of the door. Mirror writing is typically used to convey instructions to the opposite side of a glass surface.

16 days ago

jonplackett

Yes, GPT-4 was the first LLM that could get it! It's one of the reasons I really like GPT-4.

15 days ago

thot_experiment

*llama3.3:70b-instruct-q3_K_M*

A clever sign!

Since the word "push" is written in mirror writing, that means it's intended to be read from the other side of the door. In other words, if you were on the other side of the door, the text would appear normally and say "push".

Given this, I should... pull the door open!

The reasoning is that the sign is instructing people on the other side of the door to push it open, which implies that from my side, I need to pull it open.

15 days ago

jonplackett

Yeah, I've tried Llama 3 too. It was the second one that could get it, after GPT-4.

Some of the large Mistral ones can get it too, and I think 8xMixtral can as well.

15 days ago

[deleted]
15 days ago

Kiro

All models I tested it on (4o, 4o mini and Gemini) answered it correctly without any strange reasoning.

16 days ago

jonplackett

I am still not getting it with Gemini.

*Gemini Flash: The word "push" is written in reverse, which means it is meant to be read in a mirror. Therefore, you should pull the door open.

The mirror writing is a visual cue indicating the opposite action of what the word suggests.

So it gets it, but with the wrong reasoning.

*Gemini Flash 2:

You should pull the door.

The mirror writing is a clue. If the word "push" is written backward as if reflected in a mirror, it means the door is designed to be pulled. The reflection indicates that the intended action is the opposite of what the word appears to say at first glance.

Again, right answer, wrong reasons.

*Gemini Advance:

You should push the door to open it. Here's why:

Mirror Writing: The word "push" being in reverse mirror writing is a deliberate design choice. It's meant to be read correctly from the opposite side of the door (where someone would be exiting).

Standard Practice: Most glass doors are designed to swing outward for safety reasons (in case of emergencies where people need to evacuate quickly).

Therefore, even though the writing might seem confusing at first, it's actually a helpful indicator that you need to push the door to enter.

Which is worse than flash! Complete nonsense.

15 days ago

mft_

Honestly, if you’ve accurately reproduced your prompt, I had trouble interpreting it initially too. “written on it in mirror writing” is quite ambiguous. Are the letters reflective, but on my side of the door? Is there a mirror with handwriting in it somehow? Oh… does it mean “push is written on the other side of the glass, facing away, so that the writing appears backwards, or mirrored”?

15 days ago

Kiro

DALL-E gave me a much better picture than I expected. When googling "inside of a cylinder" I barely got anything, and I had a hard time even imagining a picture in my head ("if I were standing inside a cylinder looking at the wall, what would it look like as a flat 2D image?").

16 days ago

vidarh

Yeah, the Google results for "inside of a cylinder" explain a lot in terms of lacking training data. If you google "O'Neill cylinder" you'll get what I was actually after originally, and the generators do badly there too, even though there are more examples (but still way too few).

I think these kinds of unusual requests will eventually need synthetic data, or possibly some way to give the model an "inner eye" by letting it build a 3D model of described scenes and "look at it". There are a lot of things like this that you can construct a mental idea of if you just work through them in your mind or draw them, but that most people won't have many conscious memories of unless they're described in terms of something else.

E.g. for the cylinder example, you get better results if you ask for a tunnel, which can often be "almost" a cylinder. But try to then nudge it toward an O'Neill cylinder and it fails to grasp the scale, or that there isn't a single "down", and starts adding openings.

15 days ago

mensetmanusman

Also, asking for an overflowing wine cup is fun.

15 days ago

iLoveOncall

> Instant image generation that responds as users type – a first in the field

Stable Diffusion Turbo has been able to do this for more than a year, even on my "mere" RTX 3080.
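
For anyone who wants to try it, single-step SDXL-Turbo inference is a few lines with diffusers (a sketch; the model name and settings follow the published sdxl-turbo model card, and re-running it on every keystroke is roughly how the type-to-image demos work):

    import torch
    from diffusers import AutoPipelineForText2Image

    # SDXL-Turbo is adversarially distilled for single-step sampling.
    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")

    # One denoising step, guidance disabled, per the model card.
    image = pipe(
        prompt="a photo of a cat riding a unicycle",
        num_inference_steps=1,
        guidance_scale=0.0,
    ).images[0]
    image.save("turbo.png")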

16 days ago

vidarh

Notably, fal.ai used to host a demo here[1] that was very impressive at the time.

[1] https://fastsdxl.ai/

16 days ago

ajdjspaj

What does consumer-grade mean in this context - is this referring to an M1 MacBook or a tower full of GPUs? I couldn't find it in the paper or README.

16 days ago

whynotmaybe

One Nvidia A100.

From the paper:

> We train using the AdamW [26] optimizer with a batch size of 5 and gradient accumulation over 20 steps on a single NVIDIA A100 GPU

So it's "consumer-grade" because it's available to anyone, not just businesses.

16 days ago

spott

That is the training GPU… the inference GPU can be much smaller.

16 days ago

whynotmaybe

I stand corrected.

Found on Yi-Zhe Song's LinkedIn:

> Runs on a single NVIDIA 4090

https://www.linkedin.com/feed/update/urn:li:activity:7270141...

15 days ago

ajdjspaj

Thanks!

15 days ago

ericra

I wasn't able to get many decent results after playing with the demo for some time. I guess my question is... what exactly is this for? I was able to get substantially better results about two years ago running SD 2 locally on a gaming laptop. Sure, the images took 30 seconds or so each, but the quality was better than I could get in the demo. Not sure what the point of instantly generating a ton of bad-quality images is.

What am I missing?

16 days ago

nomel

Here's the 2.1 demo, released 2 years ago, for comparison: https://huggingface.co/spaces/stabilityai/stable-diffusion

16 days ago

dcreater

Nothing. This is useful as a cool feature and for demos. Maybe some applications in cheap entertainment.

16 days ago

betenoire

Here is the demo https://huggingface.co/spaces/ChenDY/NitroFusion_1step_T2I

I'm unable to get anything that looks as good as the images in the README. What's the trick for good image prompts?

16 days ago

deckar01

I had the same issue, so I pulled in the SDXL refiner. Night-and-day better, even at one step.

https://gist.github.com/deckar01/7a8bbda3554d5e7dd6b31618536...
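
The gist URL is truncated above, but the idea is a standard img2img pass through the stock SDXL refiner. Roughly (the model name is the public refiner; the strength value is a guess, not necessarily deckar01's exact settings):

    import torch
    from PIL import Image
    from diffusers import StableDiffusionXLImg2ImgPipeline

    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        torch_dtype=torch.float16, variant="fp16",
    ).to("cuda")

    # Feed the one-step NitroFusion output back in as img2img.
    one_step = Image.open("nitrofusion_output.png")
    refined = refiner(
        prompt="the same prompt used for the one-step image",
        image=one_step,
        strength=0.3,  # light touch-up: keep composition, fix details
    ).images[0]
    refined.save("refined.png")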

16 days ago

betenoire

thank you!

16 days ago

avereveard

I get pretty close results with seed 0:

paper https://i.imgur.com/l90WYrT.png

replication on hf https://i.imgur.com/MqN1Qwc.png

16 days ago

betenoire

The imgur link is bad, but I hadn't noticed the prompt tucked away in those reference images, and that helps. Thanks!

(I had asked for a rock climber dangling from a rope, eating a banana, and they were wildly nonsensical images)

16 days ago

speerer

I always just assume it's the magic of selection bias.

16 days ago

wruza

The trick is called cherry-picking. Mine the seed until you get something demo-worthy.

16 days ago

tgsovlerkhgsel

The models seem to have gotten to a point where even something I can run locally will give decent results in a reasonable time. What is currently "the best" setup (in terms of both output quality and ease of installation) to just play with local a) image generation and b) image editing?

16 days ago

LeoPanthera

If you have a Mac, get "Draw Things": https://drawthings.ai/releases/

It supports all major models and has a native Mac UI, and as far as I can tell there's nothing faster for generation.

The "best" models, and a bunch more, are built-in. The state of the art is FLUX.1, "dev" version for quality, "schnell" version for speed.

SDXL is an older but still good model, and is faster.

16 days ago

yk

For a runtime, I use ComfyUI [0], which is node-based and therefore a bit hard to learn, but you can just look at the examples on their GitHub. Fooocus [1] also seems popular and perhaps a bit more conventional, though I didn't try it.

For models, Flux [2] is pretty good and quite straightforward to use. (In general, you will have a runtime and then get the model weights separately.) Which Flux variant depends on your graphics card; Flux.1 schnell should work for most decently modern ones (a minimal script for running it is sketched after the links). And civitai.com is a repository for models and other associated tools.

[0] https://github.com/comfyanonymous/ComfyUI

[1] https://github.com/lllyasviel/Fooocus

[2] https://civitai.com/models/618692?modelVersionId=699279
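
To run Flux.1 schnell without a GUI at all, a minimal diffusers script looks roughly like this (bfloat16 and the low step count follow the FLUX.1-schnell model card; you'll still need a reasonably beefy GPU):

    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
    )
    pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

    image = pipe(
        "a cartoon of a cat eating an ice cream on a unicycle",
        num_inference_steps=4,  # schnell is distilled for 1-4 steps
        guidance_scale=0.0,     # schnell doesn't use classifier-free guidance
        generator=torch.Generator("cpu").manual_seed(0),
    ).images[0]
    image.save("flux.png")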

16 days ago

Multicomp

EasyDiffusion is almost completely download-and-run. I'm too lazy to set up ComfyUI; I just want to download models -> run Easy Diffusion -> input my prompts into the web UI -> start cooking my poor graphics card.

16 days ago

cut3

ComfyUI has all the bells and whistles and is node-based, which is wonderful. In ComfyUI you can use any of these and more:

Flux has been very popular lately.

Pony is popular, especially for adult content.

SDXL is still great, as it has lots of folks tweaking it. I chose it to make a comic, as it worked well with LoRAs trained on my drawings. (Article on using it for a comic here: https://www.classicepic.com/p/faq-what-are-the-steps-to-make...)

16 days ago

qclibre22

git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.g...

Download the models and all VAE files for the model, put them in the right place, run the batch file, configure it correctly, and then generate images using the browser.

16 days ago

LZ_Khan

Edit: never mind, it seems this recommendation is not the best.

A1111 is a good place to start. Very beginner-friendly UI. You can look up some templates on RunPod to get started if you don't have a GPU.

Someone else mentioned a local setup which might be even easier.

16 days ago

42lux

A1111 is EoL.

16 days ago

nprateem

The devil's in the details, as always. A "cartoon of a cat eating an ice cream on a unicycle" doesn't bring back any of the six-pawed mutant cats riding a unicycle, etc. Still, impressive speed.

16 days ago

NikkiA

It gave me plenty of cats with 3 front paws, though.

15 days ago

wruza

Isn’t this year-old news?

It was called LCM/Turbo in SD, and it generated absolute crap most of the time, just like this one, which is likely yet another “ground-breaking” finetune of SD.

16 days ago

musicale

> Surrey announces world's first AI model for near-instant image creation on consumer-grade hardware

Kind of like what you can do on an iPhone?

15 days ago

smusamashah

StreamDiffusion already exists and gives you images as you type. It worked fine on an RTX 3080.

16 days ago

gloosx

Creation... wow, they really love themselves, choosing that vocabulary; to create is divine, and this AI model is merely generating.

16 days ago