
623 points by magicalhippo | 5 comments
derbaum No.42620643
I'm a bit surprised by the number of comments comparing the cost to (often cheap) cloud solutions. Nvidia's value proposition is completely different, in my opinion. Say I have a startup in the EU that handles personal data or company secrets and wants to use an LLM to analyse it (e.g. with RAG). Having that data never leave your basement can easily be worth more than $3000, as long as performance isn't a bottleneck.
replies(6): >>42621036 #>>42621592 #>>42622470 #>>42622485 #>>42622500 #>>42622740 #
lolinder No.42622470
Heck, I'm willing to pay $3000 for one of these to get a good model that runs my requests locally. It's probably just my stupid ape brain trying to do finance, but I'm infinitely more likely to run dumb experiments with LLMs on hardware I own than I am while paying per token (to the point where I currently spend way more time with small local llamas than with Claude), and even though I don't do anything sensitive I'm still leery of shipping all my data to one of these companies.

This isn't competing with cloud, it's competing with Mac Minis and beefy GPUs. And $3000 is a very attractive price point in that market.

replies(2): >>42623584 #>>42624306 #
ynniv No.42623584
I'm pretty frugal, but my first thought is to get two of these to run 405B models. Building out 128GB of VRAM isn't easy, and will likely cost twice as much.
replies(1): >>42625092 #
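Back-of-envelope math supports the two-unit idea: a dense 405B-parameter model's weights alone don't fit in two 128GB boxes (256GB combined) except at aggressive quantization. A rough sketch, counting weights only (KV cache and activations need extra room on top):

```python
# Approximate weight memory for a 405B-parameter dense model at common
# quantization levels. Weights only: KV cache and activations add more.

def weights_gib(params_billion: float, bits_per_param: float) -> float:
    """GiB needed to store the weights alone."""
    return params_billion * 1e9 * bits_per_param / 8 / 2**30

for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{label}: ~{weights_gib(405, bits):.0f} GiB")
# FP16: ~754 GiB, INT8: ~377 GiB, INT4: ~189 GiB
```

So 256GB across two units fits a 405B model only at roughly 4-bit precision, with some headroom left over for context.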
rsanek No.42625092
You can get a M4 Max MBP with 128GB for $1k less than two of these single-use devices.
replies(4): >>42625963 #>>42625978 #>>42626307 #>>42627334 #
lolinder No.42625963
Don't these devices provide 128GB each? So you'd need to price in two Macs to be a fair comparison to two Digits.
ynniv No.42625978
These are 128GB each. Also, Nvidia's inference speed is much higher than Apple's.

I do appreciate that my MBP can run models though!

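A note on why inference speed differs so much between machines with similar RAM: autoregressive decoding reads roughly every weight once per generated token, so memory bandwidth sets a hard ceiling on tokens per second. A sketch with purely illustrative bandwidth figures (not spec-sheet values for either device):

```python
# LLM decode is usually memory-bandwidth-bound: each generated token reads
# (roughly) all weights once. Bandwidth numbers below are hypothetical.

def tokens_per_sec_ceiling(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper-bound decode rate for a bandwidth-bound dense model."""
    return bandwidth_gb_s / model_size_gb

# e.g. a 70B model at 4-bit is ~35 GB of weights
for bw in (250, 500, 1000):  # hypothetical GB/s
    print(f"{bw} GB/s -> ~{tokens_per_sec_ceiling(bw, 35):.0f} tok/s ceiling")
```

Compute (TFLOPS) matters more for prompt processing and batching; for single-stream generation, the bandwidth ceiling usually dominates.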
layer8 No.42626307
But then you have to use macOS.
ganoushoreilly No.42627334
I read that the Nvidia units do 250 TFLOPS vs the M4 Pro's 27 TFLOPS. If they perform as advertised, I'm in for two.