
268 points by prashp | source
joefourier ◴[] No.39215949[source]
I’ve done a lot of experiments with latent diffusion and also discovered a few flaws in the SD VAE’s training and architecture, which have received hardly any attention. This is concerning, as the VAE is a crucial component when it comes to image quality and is responsible for many of the artefacts associated with AI-generated imagery, and no amount of training the diffusion model will fix them.

A few I’ve seen are:

- The goal should be latent outputs that closely resemble Gaussian-distributed values between -1 and 1 with a variance of 1, but the outputs are unbounded (you could easily clamp them or apply tanh to force them into [-1, 1]), and the KL loss weight is too low, which is why the latents are scaled by a magic number to fit the -1 to 1 range more closely before being ingested by the diffusion model.
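To illustrate the two options (a toy sketch, assuming nothing about the real encoder; 0.18215 is the latent scale factor used in Stable Diffusion's released pipeline, i.e. the "magic number" in question):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unbounded encoder outputs with a std well away from 1.
z = rng.normal(0.0, 5.0, size=(4, 64, 64))

# Option 1 (what SD does): rescale by a fixed constant after training
# so the latents roughly fit [-1, 1] before the diffusion model sees
# them. 0.18215 is the factor used in Stable Diffusion's pipeline.
z_scaled = z * 0.18215

# Option 2 (the suggestion above): bound the encoder output directly,
# e.g. with tanh, so no post-hoc rescaling is needed.
z_tanh = np.tanh(z)

print(z.std(), z_scaled.std(), np.abs(z_tanh).max())
```

The tanh variant guarantees the bound architecturally, while the fixed scale factor only holds if the latent statistics it was tuned for stay stable.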

- To decrease the computational load of the diffusion model, you should reduce the spatial dimensions of the input - having a low number of channels is irrelevant. The SD VAE turns each 8x8x3 block into a 1x1x4 block when it could be turning it into a 1x1x8 (or even higher) block and preserve much more detail at basically 0 computational cost, since the first operation the diffusion model does is apply a convolution to greatly increase the number of channels.
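Some back-of-the-envelope arithmetic on that first convolution (320 output channels matches the first UNet layer of SD 1.x; the exact figure doesn't matter for the argument):

```python
# Parameter count of the diffusion model's first 3x3 convolution as a
# function of the latent channel count (weights + bias).
def first_conv_params(in_ch, out_ch=320, k=3):
    return in_ch * out_ch * k * k + out_ch

p4 = first_conv_params(4)  # SD's 4-channel latent -> 11840 params
p8 = first_conv_params(8)  # proposed 8-channel latent -> 23360 params
print(p4, p8)
```

Even doubling the latent channels adds only ~12k parameters to a network whose UNet is in the hundreds of millions, so the extra capacity of an 8-channel latent really is close to free.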

- The discriminator is based on a tiny PatchGAN, which is an ancient model by modern standards. You can have much better results by applying some of the GAN research of the last few years, or of course using a diffusion decoder which is then distilled either with consistency or adversarial distillation.

- KL divergence in general is not even the most optimal way to achieve the goals of a latent diffusion model’s VAE, which is to decrease the spatial dimensions of the input images and have a latent space that’s robust to noise and local perturbations. I’ve had better results with a vanilla AE, clamping the outputs, having a variance loss term and applying various perturbations to the latents before they are ingested by the decoder.
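A toy sketch of that alternative objective, with random linear maps standing in for the real conv encoder/decoder and an illustrative 0.1 loss weight (all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_lat = 32, 8
W_enc = rng.normal(0, 0.1, (d_in, d_lat))
W_dec = rng.normal(0, 0.1, (d_lat, d_in))

def encode(x):
    # Clamp latents to [-1, 1] instead of relying on a KL term.
    return np.clip(x @ W_enc, -1.0, 1.0)

def decode(z):
    return z @ W_dec

x = rng.normal(size=(16, d_in))
z = encode(x)

# Variance loss: push per-dimension latent variance toward 1.
var_loss = np.mean((z.var(axis=0) - 1.0) ** 2)

# Robustness: perturb latents before decoding, so the decoder learns to
# tolerate the kinds of errors a diffusion model will make.
z_noisy = z + rng.normal(0, 0.1, z.shape)
recon_loss = np.mean((decode(z_noisy) - x) ** 2)

loss = recon_loss + 0.1 * var_loss
```

The design intent is the same as the KL term's (latents on a known scale, robust to local perturbations) but enforced directly rather than through a probabilistic prior.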

replies(6): >>39216175 #>>39216367 #>>39216653 #>>39217093 #>>39219506 #>>39316949 #
Cacti ◴[] No.39216367[source]
All your points are good ones and were knowable by any researcher at the time who wasn’t, idk, a new grad or new to CV. I always assumed they just threw the VAE in there using the default options from the original VAE paper and never thought about it much again, or never looked into it due to the training cost (for hyperparam search, mainly). I don’t remember most of the points you raised being common knowledge when the VAE paper came out, but they certainly were when the stable diffusion paper came out.
replies(1): >>39216692 #
michaelt ◴[] No.39216692[source]
> All your points are good ones and were knowable by any researcher at the time who wasn’t, idk, a new grad or new to CV.

I think you are radically overstating how obvious some of these things are.

What you call "just threw the VAE in there using the default options from the original VAE paper" is what another person might call "used a proven reference implementation, with the settings recommended by its creator".

Sure, there are design flaws with SD1.0 which feel obvious today - they've since published SDXL, and having read the paper, I wouldn't even consider going about such a project without "Conditioning the Model on Cropping Parameters". But the truth is this stuff is only obvious to me because someone else figured it out and told me.

replies(1): >>39220850 #
Cacti ◴[] No.39220850[source]
I’m not criticizing them or the approach. That’s what I would have done most likely. But the things you mentioned aren’t particular to stable diffusion, or even VAEs. Yes, the best way to learn is to be told, or to build up applied/implementation experience until you learn them directly. But almost any CV model will run into at least one of those issues, and I would expect someone with idk > 1y experience in applied work to know these things. Perhaps I am wrong to do that.