I recently made a little tool to help people interested in running local LLMs figure out whether their hardware can fit a model in GPU memory.
replies(10):
When it comes to "how to do the math", this repo was my starting point: https://github.com/Raskoll2/LLMcalc
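For anyone curious, a minimal sketch of the kind of math such calculators do: weight memory is roughly parameter count × bits per parameter ÷ 8, plus some headroom for the KV cache and activations. The function name, the 4-bit default, and the 1.2× overhead factor below are all illustrative assumptions, not values taken from LLMcalc:

```python
def estimate_vram_gb(params_billions: float,
                     bits_per_param: float = 4.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM needed to hold the model, in GB.

    weights = params * (bits / 8) bytes, then scaled by an
    assumed ~20% overhead for KV cache and activations.
    """
    weight_bytes = params_billions * 1e9 * (bits_per_param / 8)
    return weight_bytes * overhead / 1e9

if __name__ == "__main__":
    # Example: a 7B model quantized to ~4 bits per weight
    print(f"~{estimate_vram_gb(7):.1f} GB")  # ≈ 4.2 GB
```

So a 7B model at Q4 lands around 4 GB, which matches the usual rule of thumb that it fits comfortably on an 8 GB card; the real calculator accounts for more detail (quantization formats, context length, layer offloading) than this toy version.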