
454 points nathan-barry | 1 comment
1. blurbleblurble No.45647025
I'm more excited about approaches like this one:

https://openreview.net/forum?id=c05qIG1Z2B

They combine continuous latent diffusion with autoregressive, transformer-based text generation. The autoencoder and the transformer are (or can be) trained in tandem.
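To make the idea concrete, here is a toy sketch of what "trained in tandem" could look like: an autoencoder compresses a token sequence into a continuous latent, a diffusion-style objective learns to denoise that latent, and an autoregressive predictor is conditioned on it, with all three losses summed into one joint objective. Every dimension, weight matrix, and loss weighting below is a made-up stand-in (plain numpy, linear maps instead of real transformer/denoiser networks), not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes -- all hypothetical, chosen only for illustration
vocab, d_emb, d_lat, seq = 50, 16, 8, 5

# --- Autoencoder: token embeddings -> continuous latent -> reconstruction ---
embed = rng.normal(0, 0.1, (vocab, d_emb))
W_enc = rng.normal(0, 0.1, (seq * d_emb, d_lat))
W_dec = rng.normal(0, 0.1, (d_lat, seq * d_emb))

tokens = rng.integers(0, vocab, seq)
x = embed[tokens].reshape(-1)      # flattened sequence embedding
z = x @ W_enc                      # continuous latent code
recon_loss = np.mean((x - z @ W_dec) ** 2)

# --- Diffusion objective on the latent: predict the injected noise ---
t = 0.5                            # noise level in [0, 1]
eps = rng.normal(size=d_lat)
z_noisy = np.sqrt(1 - t) * z + np.sqrt(t) * eps
W_denoise = rng.normal(0, 0.1, (d_lat, d_lat))
diff_loss = np.mean((eps - z_noisy @ W_denoise) ** 2)

# --- Autoregressive head (transformer stand-in), conditioned on z ---
W_lm = rng.normal(0, 0.1, (d_emb + d_lat, vocab))
ar_loss = 0.0
for i in range(1, seq):
    h = np.concatenate([embed[tokens[i - 1]], z])
    logits = h @ W_lm
    m = logits.max()
    log_probs = logits - m - np.log(np.exp(logits - m).sum())
    ar_loss -= log_probs[tokens[i]]
ar_loss /= seq - 1

# Joint objective: backprop through this trains all parts in tandem
total = recon_loss + diff_loss + ar_loss
print(np.isfinite(total))
```

In a real system the encoder, denoiser, and decoder would be neural networks optimized by gradient descent on `total`; the point here is only that the three losses share the latent `z`, which is what couples the diffusion model and the text generator during training.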