cube2222:
This looks like a huge breakthrough, unless I'm missing something?

~25x faster performance than Flux-dev, while offering comparable quality in benchmarks. And visually the examples (surely cherry-picked, but still) look great!

Especially since, with GenAI, the best way to get good results (imo) is to just generate a large number of candidates and pick the best. Performance like this will make that much easier/faster/cheaper.
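
Here's roughly what I mean, as a minimal sketch using Hugging Face diffusers plus CLIP scoring (the model IDs, N, and the scoring choice are my own illustrative picks, nothing from this paper):

  # Sketch: best-of-N sampling -- generate several candidates, score each
  # against the prompt with CLIP, keep the winner. All model IDs here are
  # placeholders, not the model from the paper.
  import torch
  from diffusers import StableDiffusionPipeline
  from transformers import CLIPModel, CLIPProcessor

  prompt = "a lighthouse at dusk, oil painting"
  N = 8  # number of candidates

  pipe = StableDiffusionPipeline.from_pretrained(
      "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
  ).to("cuda")
  clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to("cuda")
  proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

  images = [pipe(prompt).images[0] for _ in range(N)]

  # Rank candidates by image-text similarity and keep the best one.
  inputs = proc(text=[prompt], images=images,
                return_tensors="pt", padding=True).to("cuda")
  with torch.no_grad():
      scores = clip(**inputs).logits_per_image.squeeze(1)  # shape (N,)
  images[scores.argmax().item()].save("best_of_n.png")

A 25x speedup means N can be that much larger for the same wall-clock budget.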

Code is unfortunately "(Coming soon)" for now. Can't wait to play with it!

godelski:

  > surely cherry-picked
As someone who works in generative vision, this is one of the most frustrating aspects (especially for those of us with fewer GPU resources). There's been a silent competition to show only the best images rather than random results (and even when "random" results are shown, they may come from a selected batch). So it's hard to judge actual quality until you can play around with the model yourself.
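
The honest alternative is cheap to do: fix the seeds up front and publish every sample, no re-rolls. A sketch of what I mean (model ID, prompt, and seed count are arbitrary stand-ins):

  # Sketch: an uncurated sample grid -- sequential seeds, every output kept.
  import torch
  from diffusers import StableDiffusionPipeline

  pipe = StableDiffusionPipeline.from_pretrained(
      "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
  ).to("cuda")

  prompt = "a red bicycle leaning against a brick wall"
  for seed in range(16):  # seeds 0..15, no cherry-picking possible
      gen = torch.Generator(device="cuda").manual_seed(seed)
      pipe(prompt, generator=gen).images[0].save(f"sample_{seed:02d}.png")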

Also, I'm not sure what laptop that is, but they say 0.37s to generate a 1024x1024 image on a 4090. They also mention that it requires 16GB of VRAM. The laptop looks like an MSI Titan, which has a 4090, and correct me if I'm wrong, but I think the 4090 is the only mobile card with 16GB?[0] (I know most desktop cards have 16GB.) The laptop demo takes 4s to generate a 1024x1024 image, but the mobile 4090 is chopped down quite a bit compared to the desktop card.[1]

I wonder if that's with or without TensorRT. (Timing sketch below, after the links.)

[0] https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...

[1] https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4090-Laptop...
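
Once the code drops, this is how I'd check the 0.37s claim myself. The checkpoint name is a placeholder since nothing is released yet; the warm-up plus synchronize pattern is the part that matters, because CUDA launches are asynchronous and timing without a sync undercounts:

  # Sketch: timing one 1024x1024 generation. Checkpoint name is a
  # placeholder -- the code is still "(Coming soon)".
  import time
  import torch
  from diffusers import DiffusionPipeline

  pipe = DiffusionPipeline.from_pretrained(
      "placeholder/checkpoint", torch_dtype=torch.float16
  ).to("cuda")

  prompt = "a photo of an astronaut riding a horse"
  pipe(prompt, height=1024, width=1024)  # warm-up (caches, cudnn autotune)

  torch.cuda.synchronize()
  t0 = time.perf_counter()
  pipe(prompt, height=1024, width=1024)
  torch.cuda.synchronize()  # wait for the GPU to actually finish
  print(f"{time.perf_counter() - t0:.2f}s per 1024x1024 image")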

noduerme:
Truthfully, I've had astonishing results from Stable Diffusion 1.4 on an M1 Mac, given the right inputs... enough to throw my hands up and declare it a sort of magic (except for the Getty Images watermarks randomly scattered around my results).

Nonetheless, as an art director, there's nothing there I'd put into production. I guess that's because what I'm focused on is tickling the client base with something original.

godelski:
Magic in what way? They sure are impressive tools, but like all AI they don't have an eye for finer detail. Oddly, I'm not sure most ML researchers have an eye for this either. Then again, most people I know who work in generative vision have no artistic hobby, so I'm not sure how they can properly evaluate the work. It's the subtle details that matter.
noduerme:
I should've been clearer, really. What made me feel the "magic" was not prompting Stable Diffusion. It was letting it iterate on art I had already done.

I did a lot of 3D-rendered illustration back in the 1990s and early 2000s: necessarily low-polygon stuff, but things that were supposed to be life-like scenes, with tons of textures, that took a very long time to render. That includes what may have been the first and only children's book illustrated with Infini-D on a Mac IIsi.

So, feeding these old renderings into StableDif with a 75% bias toward the original image and a couple of basic prompts produced results that blew my mind. It was like seeing what my illustrations could have been if I'd had a team at Pixar Studios refining them: it was still my character art and my creation, totally recognizable, but polished and refined to a level that would have been unimaginable in 1997.
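
If anyone wants to reproduce the workflow: in diffusers terms it's img2img with a low strength, where my "75% bias toward the original" roughly corresponds to strength=0.25. The file names, prompt, and model ID below are just illustrative, not my exact setup:

  # Sketch: img2img refinement of an old render. strength=0.25 means the
  # image is only lightly re-noised, so composition and character survive.
  import torch
  from diffusers import StableDiffusionImg2ImgPipeline
  from PIL import Image

  pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
      "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
  ).to("cuda")  # or .to("mps") on an M1 Mac

  init = Image.open("old_render_1997.png").convert("RGB").resize((512, 512))
  out = pipe(
      prompt="polished, detailed 3D character illustration, rich textures",
      image=init,
      strength=0.25,  # ~75% of the original image is preserved
  ).images[0]
  out.save("refined_render.png")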