
396 points | doener | 1 comment
pawelduda No.46174861
Did anyone test it on 5090? I saw some 30xx reports and it seemed very fast
replies(2): >>46175501 >>46177259
Wowfunhappy No.46175501
Even on my 4080 it's extremely fast: ~15 seconds per image.
replies(1): >>46177791
accrual No.46177791
Did you use PyTorch Native or Diffusers Inference? I couldn't get the former working yet, so I used Diffusers, but it's terribly slow on my 4080 (~4 min/image). Trying again with PyTorch now; it seems Diffusers is expected to be slow.
replies(1): >>46177830
Wowfunhappy No.46177830
Uh, not sure? I downloaded the portable build of ComfyUI and ran the CUDA-specific batch file it comes with.

(I'm not used to using Windows and I don't know how to do anything complicated on that OS. Unfortunately, the computer with the big GPU also runs Windows.)

replies(1): >>46177979
accrual No.46177979
Haha, I know how it goes. Thanks, I'll give that a try!

Update: works great and much faster via ComfyUI + the provided workflow file.
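For readers trying to reproduce the fast path discussed above, the steps amount to roughly the following. This is a sketch, not from the thread itself: the launcher name (`run_nvidia_gpu.bat`) and the default port are assumptions based on typical ComfyUI portable releases and may differ in yours.

```shell
# Sketch of the ComfyUI portable setup described above (Windows, NVIDIA GPU).
# Assumption: the portable archive ships a CUDA launcher named
# run_nvidia_gpu.bat -- check your release for the exact file name.

# 1. Download the portable build from the ComfyUI releases page and
#    extract the archive.
# 2. From the extracted folder, launch with the CUDA-specific batch file:
#    .\run_nvidia_gpu.bat
# 3. Open the web UI (default: http://127.0.0.1:8188), load the model's
#    provided workflow file, and queue a prompt.
```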