
261 points david927 | 1 comment

What are you working on? Any new ideas that you're thinking about?
AJRF ◴[] No.43156818[source]
I recently made a little tool that lets people interested in running local LLMs check whether their hardware can fit an LLM in GPU memory.

https://canirunthisllm.com/
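
For anyone curious what a check like that involves, here's a rough back-of-envelope sketch (my own assumption about the math, not necessarily what the site does): estimated VRAM ≈ parameter count × bytes per parameter for the chosen quantization, plus some allowance for the KV cache and runtime buffers.

    # Rough VRAM estimate for fitting an LLM in GPU memory.
    # Assumption: a generic back-of-envelope formula, not the method
    # canirunthisllm.com actually uses.
    BYTES_PER_PARAM = {
        "fp16": 2.0,
        "fp8": 1.0,
        "q8_0": 1.0,
        "q4_k_m": 0.5,  # ~4 bits/param, ignoring quantization block overhead
    }

    def estimated_vram_gb(n_params_billion: float, quant: str,
                          overhead_gb: float = 1.5) -> float:
        """Weights plus a flat allowance for KV cache and runtime buffers."""
        return n_params_billion * BYTES_PER_PARAM[quant] + overhead_gb

    # Example: Llama 3.2 3B at FP8 -> ~3 GB of weights, ~4.5 GB total.
    print(estimated_vram_gb(3, "fp8"))          # 4.5
    print(estimated_vram_gb(3, "fp8") <= 8.0)   # fits an 8 GB GPU? True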

replies(10): >>43156837 #>>43156946 #>>43157271 #>>43157577 #>>43157623 #>>43157743 #>>43158600 #>>43159526 #>>43160623 #>>43163802 #
1. seafoamteal ◴[] No.43156946[source]
I've recently been looking into running local LLMs for fun on my laptop (no GPU), and this is the one thing I've never been able to find consistent information on. This is so helpful, thank you so much! Going to try to run Llama 3.2 3B FP8 soon.
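
For the no-GPU case the same arithmetic applies against system RAM instead of VRAM. A minimal sketch, assuming ~3.2B parameters for Llama 3.2 3B, ~1 byte per parameter at FP8, and a flat allowance for KV cache and buffers (all rough figures of mine):

    # CPU-only sizing check: do the weights + KV cache fit in system RAM?
    # Assumptions: Llama 3.2 3B is ~3.2B params; FP8 stores ~1 byte/param;
    # ~2 GB covers KV cache and runtime buffers at modest context lengths.
    weights_gb = 3.2e9 * 1 / 1e9   # ~3.2 GB of weights at FP8
    total_gb = weights_gb + 2.0    # add KV cache / buffer allowance
    laptop_ram_gb = 16             # the OS and apps take a few GB of this
    print(f"~{total_gb:.1f} GB needed; fits: {total_gb < laptop_ram_gb - 4}")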