What hardware are y'all using when you run these things locally? I was thinking of pre-ordering the Framework desktop[0] for this purpose, but I wouldn't mind a decent laptop that could run them (ideally on Linux).
replies(4):
For local LLMs Apple Silicon has really shown the value of unified memory, even if that comes at the cost of raw GPU power. Even at half the speed of an array of GPUs, being able to load the mid-sized models at all is a huge plus.
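For a sense of why the memory capacity matters more than raw compute here, a rough back-of-envelope in Python (a sketch, not a benchmark; the figures cover weights only and ignore KV cache and runtime overhead):

    # Approximate weight memory for a model at a given quantization level.
    # 1B params at 8 bits/weight = ~1 GB, so: params_B * bits / 8 = GB.
    def weight_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * bits_per_weight / 8

    for params in (7, 13, 70):
        for bits in (16, 4):
            print(f"{params}B @ {bits}-bit: ~{weight_gb(params, bits):.1f} GB")

A 70B model at 4-bit comes out around 35 GB, which won't fit on a typical 24 GB consumer GPU but loads comfortably into 64 GB or more of unified memory, even if tokens then come out slower.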