It runs well, not much different from Claude etc., but I'm still learning the ropes and how to get the best out of it and local LLMs in general. Having tonnes of RAM is nice for switching models in ollama quickly, since the model files stay cached in memory.
GPU memory is the weak point though, so I'm mostly sticking to models up to ~18B parameters that fit in VRAM.
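For anyone curious where that ceiling comes from, the back-of-envelope is just bytes-per-parameter × parameter count, plus some headroom for the KV cache and runtime. A rough sketch in Python (the bytes-per-param figures and the flat overhead are my approximations, not anything ollama reports):

```python
# Approximate bytes per parameter for common llama.cpp quant levels.
# These are ballpark figures, not exact on-disk sizes.
BYTES_PER_PARAM = {"fp16": 2.0, "q8_0": 1.0, "q4_K_M": 0.56}

def vram_gb(params_b: float, quant: str, overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate in GB for a model with params_b billion
    parameters: weights plus a flat allowance for KV cache/runtime."""
    return params_b * BYTES_PER_PARAM[quant] + overhead_gb

for quant in BYTES_PER_PARAM:
    print(f"18B @ {quant}: ~{vram_gb(18, quant):.1f} GB")
```

At a 4-bit quant an 18B model lands around 11-12 GB by this estimate, which is roughly where a mid-range card tops out once context is factored in.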