
221 points | whitefables | 1 comment
varun_ch ◴[] No.41856480[source]
I’m curious how good local LLM performance is on ‘outdated’ hardware like the author’s 2060. I have a desktop with a 2070 Super that could be fun to turn into an “AI server” if I had the time…
replies(7): >>41856521 #>>41856558 #>>41856559 #>>41856609 #>>41856875 #>>41856894 #>>41857543 #
1. alias_neo ◴[] No.41857543[source]
You can get a relative idea here: https://developer.nvidia.com/cuda-gpus

I use a Tesla P4 for ML stuff at home; on that table it's equivalent to a 1080 Ti, with a compute capability of 6.1. A 2070 (they don't list the "Super") is 7.5.

For reference, the 4060 Ti, 4070 Ti, 4080, and 4090 are all 8.9, which is the highest score listed for a gaming graphics card.
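
If you'd rather query it on the machine itself than look it up, here's a minimal sketch using PyTorch (assumes a CUDA-enabled build of torch is installed; the reported numbers should match the NVIDIA page):

    # Minimal sketch: print each local GPU's name and CUDA compute capability.
    # Assumes a CUDA-enabled build of PyTorch is installed.
    import torch

    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            name = torch.cuda.get_device_name(i)
            major, minor = torch.cuda.get_device_capability(i)
            print(f"{name}: compute capability {major}.{minor}")
    else:
        print("No CUDA device detected")

A 2070 Super should report 7.5, the same as the plain 2070, since both are Turing parts.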