
Devstral (mistral.ai)
701 points by mfiguiere | 4 comments
christophilus | No.44058247
What hardware are y'all using when you run these things locally? I was thinking of pre ordering the Framework desktop[0] for this purpose, but I wouldn't mind having a decent laptop that could run it (ideally Linux).

[0] https://frame.work/desktop

replies(4): >>44058269 #>>44058281 #>>44058363 #>>44058499 #
klooney | No.44058363
AMD is going to be off the beaten path; you're likely to have more success, and fewer tedious plumbing problems, with Nvidia.
replies(1): >>44058385 #
1. lolinder | No.44058385
Does Nvidia have integrated memory options that allow you to get up to 64GB+ of VRAM without stringing together a bunch of 4090s?

For local LLMs, Apple Silicon has really shown the value of unified memory, even if that comes at the cost of raw GPU power. Even at half the speed of an array of discrete GPUs, being able to load the mid-sized models at all is a huge plus.

replies(2): >>44058797 #>>44058947 #
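The memory math behind this point is simple: weight footprint scales with parameter count times bits per weight. A minimal back-of-envelope sketch (the 24B parameter count and quantization levels are illustrative, and it ignores KV cache and runtime overhead):

```python
# Rough VRAM estimate for loading a dense LLM's weights at a given
# quantization level. Figures are illustrative, not vendor specs.

def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
    """Memory (decimal GB) for model weights alone,
    excluding KV cache, activations, and framework overhead."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A hypothetical ~24B-parameter model at common quantization widths:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_memory_gb(24, bits):.0f} GB")
# 16-bit: ~48 GB, 8-bit: ~24 GB, 4-bit: ~12 GB
```

So a mid-sized model that won't fit on a single 24GB consumer card at 16-bit can sit comfortably in 64GB+ of unified memory, which is the trade-off being described.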
2. kookamamie | No.44058797
Not quite, but I do have an RTX 6000 Ada, which has 48GB.
3. karolist | No.44058947
RTX Pro 6000 Blackwell has 96GB VRAM.
replies(1): >>44061498 #
4. lolinder | No.44061498
It also costs 4x the entire Framework Desktop for the card alone. If you're doing something professional, that's probably worth it, but it's not a clear winner in the enthusiast space.