
261 points | david927 | 1 comment

What are you working on? Any new ideas that you're thinking about?
AJRF ◴[] No.43156818[source]
I recently made a little tool that helps people interested in running local LLMs figure out whether their hardware can fit a model in GPU memory.

https://canirunthisllm.com/
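
Under the hood it's basically comparing an estimated memory footprint against your VRAM. A rough sketch of that kind of estimate in Python (the function, defaults, and constants here are illustrative, not the exact formula the site uses):

    # Rough VRAM estimate: weights + KV cache + a small overhead factor.
    # Illustrative only; real calculators account for architecture details (GQA, etc.).
    def estimate_vram_gb(params_b, bits_per_weight, context_len=4096,
                         n_layers=32, kv_dim=4096, kv_bytes=2):
        weights_gb = params_b * 1e9 * (bits_per_weight / 8) / 1e9
        # KV cache: 2 (K and V) * layers * context length * KV dim * bytes per element
        kv_cache_gb = 2 * n_layers * context_len * kv_dim * kv_bytes / 1e9
        return (weights_gb + kv_cache_gb) * 1.1  # ~10% overhead for activations etc.

    # e.g. an 8B model at 4-bit quantization with a 4k context:
    print(round(estimate_vram_gb(8, 4), 1))  # ~6.8 GB with these toy numbers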

replies(10): >>43156837 #>>43156946 #>>43157271 #>>43157577 #>>43157623 #>>43157743 #>>43158600 #>>43159526 #>>43160623 #>>43163802 #
dockerd ◴[] No.43163802[source]
Looks good.

Feature request: have a leaderboard of LLMs for x/y/z tasks, or pull one from an existing repo, and suggest the best model for a given GPU and task.

If there is a better model that my GPU can run, why should I settle for the smallest one?
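
Concretely, that could be as simple as filtering a leaderboard by estimated VRAM fit and taking the top score. A toy sketch (model names, scores, and sizes are all made up):

    # Hypothetical leaderboard entries: (model name, benchmark score, estimated VRAM in GB).
    LEADERBOARD = [
        ("big-model-70b-q4", 82.0, 42.0),
        ("mid-model-14b-q4", 74.0, 10.5),
        ("small-model-8b-q4", 68.0, 6.8),
    ]

    def best_model_for(vram_gb):
        """Highest-scoring model whose estimated footprint fits in vram_gb, or None."""
        fitting = [m for m in LEADERBOARD if m[2] <= vram_gb]
        return max(fitting, key=lambda m: m[1], default=None)

    print(best_model_for(12.0))  # -> ('mid-model-14b-q4', 74.0, 10.5)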

replies(1): >>43163820 #
1. dockerd ◴[] No.43163820[source]
And maybe provide the ollama / LM Studio run command for a given model/quantization.
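
That part is mostly templating the model and quantization tag into the CLI call. A minimal sketch (the tag below is illustrative; real tag names vary per model in the ollama library):

    # Template the suggested model/quantization into an ollama invocation.
    def ollama_run_command(model, quant_tag):
        return f"ollama run {model}:{quant_tag}"

    print(ollama_run_command("llama3.1", "8b-instruct-q4_K_M"))
    # -> ollama run llama3.1:8b-instruct-q4_K_M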