
282 points montyanderson | 2 comments
tootie No.45674919
Kinda terrifying. And it can run in 32GB of VRAM? Anyone with a 5090 can start spewing out believable fake videos.
aussieguy1234 No.45675299
The other option is to rent a 5090 in the cloud, probably for less than $0.50 per hour at most providers.
latchkey No.45676766
If people are interested, I could split each of our MI300X GPUs into 4 and charge $0.50/hr (1/4 of our current rate). You'd get 48GB of VRAM instead of 32GB, and it would be HBM3 instead of GDDR7 (5.3TB/s vs 1.7TB/s).

The only catch is that I'd need to get 32 people who want VMs like this since I would have to do it for the entire box of compute.

Wan2.2 runs just fine on AMD.
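As a back-of-envelope comparison of the two $0.50/hr options discussed above, a few lines of Python make the trade-off concrete. The numbers are the ones quoted in this thread (capacities and peak memory bandwidths); real workloads may bottleneck elsewhere, so treat this as an illustrative sketch only.

```python
# Per-dollar-hour comparison of the two GPU rental options from the thread.
# Figures come from the comments above; this is illustrative arithmetic,
# not a benchmark.

options = {
    "cloud RTX 5090 (GDDR7)": {"price_hr": 0.50, "vram_gb": 32, "bw_tb_s": 1.7},
    "1/4 MI300X (HBM3)":      {"price_hr": 0.50, "vram_gb": 48, "bw_tb_s": 5.3},
}

for name, o in options.items():
    # VRAM and peak memory bandwidth you get per dollar-hour
    vram_per_dollar = o["vram_gb"] / o["price_hr"]
    bw_per_dollar = o["bw_tb_s"] / o["price_hr"]
    print(f"{name}: {vram_per_dollar:.0f} GB per $/hr, "
          f"{bw_per_dollar:.1f} TB/s per $/hr")
```

At the same hourly price, the MI300X slice offers 1.5x the VRAM and roughly 3x the peak memory bandwidth, which is the point latchkey is making.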

ricardobeat No.45677008
Vultr does this: https://www.vultr.com/pricing/#cloud-gpu

Though only a shared A40/A100 are in that price range.

latchkey No.45677090
It really is unfortunate that pricing in this industry is so opaque. You look at one price and think that's it, but the reality is more complex.

Vultr requires a minimum box of 8 GPUs, isn't on-demand, and doesn't offer VMs.

On the other hand, I offer the bare minimum (1 GPU for 1 minute, or 2x, 4x, 8x), on-demand, with no contract, and an API to automate it all. We also have unlimited 100G bandwidth and free IPv4. Oh, and our 8x box specs are generally better: 122TB of enterprise NVMe.