
261 points | david927 | 1 comment

What are you working on? Any new ideas that you're thinking about?
AJRF | No.43156818
I recently made a little tool for people interested in running local LLMs to figure out whether their hardware can run an LLM in GPU memory.

https://canirunthisllm.com/
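
For anyone curious how a check like this can work, here is a rough back-of-envelope sketch in Python. The formula (parameter count times bytes per weight, plus a fixed overhead for the KV cache and runtime) and the numbers are my own assumptions, not necessarily how the site computes it:

    # Rough VRAM check: quantized weight size plus a fixed overhead,
    # compared against the card's VRAM. The overhead figure is an
    # assumption covering KV cache, activations and CUDA context;
    # real usage varies with context length and runtime.
    def fits_in_vram(params_billion: float, bits_per_weight: int,
                     gpu_vram_gb: float, overhead_gb: float = 1.5) -> bool:
        weights_gb = params_billion * bits_per_weight / 8  # e.g. 7B at 4-bit ~ 3.5 GB
        return weights_gb + overhead_gb <= gpu_vram_gb

    print(fits_in_vram(7, 4, 12))   # True: a 4-bit 7B model fits on a 12 GB card
    print(fits_in_vram(70, 4, 12))  # False: a 4-bit 70B model does not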

alecco | No.43157577
Cool. What about listing the models that fit on a given GPU? It could also compare runtimes like vLLM, local_llama.c, etc., with links to their docs. Community-built guides and ratings too, along the lines of https://pcpartpicker.com/
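
A minimal sketch of the "models for a given GPU" idea: filter a catalogue by estimated VRAM footprint. The model names and GB figures below are hypothetical placeholders for illustration, not data from the site:

    # Hypothetical catalogue: model name -> rough VRAM needed in GB (illustrative only).
    MODELS = {
        "Llama-3-8B (4-bit)": 5.5,
        "Mistral-7B (4-bit)": 5.0,
        "Llama-3-70B (4-bit)": 42.0,
    }

    def models_for_gpu(vram_gb: float) -> list[str]:
        # List every catalogue entry whose estimated footprint fits on the card.
        return [name for name, need in MODELS.items() if need <= vram_gb]

    print(models_for_gpu(12.0))  # ['Llama-3-8B (4-bit)', 'Mistral-7B (4-bit)']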

And you can definitely add some ref links for a bit of revenue.