
261 points | david927 | 1 comment

What are you working on? Any new ideas that you're thinking about?
AJRF No.43156818
I recently made a little tool for people interested in running local LLMs to check whether their hardware can fit an LLM in GPU memory.

https://canirunthisllm.com/

kristopolous No.43160623
This looks closed source; am I correct?
AJRF No.43160855
It's not so much purposefully closed source as that I don't want to make it more complex by splitting the data the app uses out from the code (a coordination problem at deploy time that I don't want to deal with for a project of this size).

When it comes to "how to do the math", this repo was my starting point: https://github.com/Raskoll2/LLMcalc
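
For anyone curious, here is a minimal sketch of the kind of estimate an LLMcalc-style calculator makes: quantized weights plus KV cache plus a fudge factor for overhead, compared against available VRAM. The function name, default parameters, and the 1.2x overhead factor are my own assumptions for illustration, not the site's actual code:

    def estimate_vram_gb(params_billions,
                         bits_per_weight=4.0,     # e.g. Q4 quantization (assumed default)
                         context_length=8192,
                         n_layers=32,
                         hidden_size=4096,
                         kv_bytes_per_value=2.0,  # fp16 KV cache
                         overhead_factor=1.2):    # CUDA context, activations, etc. (assumed)
        """Very rough estimate of the VRAM (GB) needed to run a model."""
        weights_gb = params_billions * 1e9 * bits_per_weight / 8 / 1e9
        # KV cache: K and V, one vector per layer per token of context
        kv_cache_gb = 2 * n_layers * context_length * hidden_size * kv_bytes_per_value / 1e9
        return (weights_gb + kv_cache_gb) * overhead_factor

    if __name__ == "__main__":
        # e.g. a 7B model at 4-bit on a 12 GB card
        needed = estimate_vram_gb(7)
        print(f"~{needed:.1f} GB needed; fits in 12 GB: {needed <= 12}")

The fp16 KV cache and the flat overhead multiplier are ballpark assumptions; models that use grouped-query attention need noticeably less KV memory than this computes.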