I passed on the machine, but we set up gpt-oss-120b on a 128GB RAM MacBook Pro and it is shockingly usable. Personally, I could imagine using it instead of OpenAI's web interface. The Ollama UI has web search working too, so you don't have to worry about whether the model knows the latest and greatest about every software package. Maybe one day I'll get the right drivers to run a local model on my Linux machine with AMD's NPU, but AMD has been really slow on this.