I have a Strix Halo-based HP ZBook G1A, and it's been pretty easy getting local models to run on it. Training small LLMs has been harder, but doable as well. Mind you, I 'only' have 64 GB on mine.
Under Linux, getting LM Studio to work with the Vulkan backend was trivial. Llama.cpp was a bit more involved. ROCm worked surprisingly well on Arch — credit to the package maintainers. The only hard part was sorting out Python packaging for PyTorch (the trick was to use locally installed packages built against the system's ROCm).
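For anyone attempting the same, a rough sketch of what that PyTorch setup can look like — this is an assumption about the exact steps, not the author's procedure; the ROCm version in the wheel index URL is an example and should match whatever your distro's ROCm packages provide:

```shell
# Hypothetical setup sketch: ROCm itself comes from the system
# package manager (e.g. Arch's rocm-hip-sdk), then PyTorch is
# installed into a local venv from the official ROCm wheel index.
python -m venv ~/.venvs/torch-rocm
source ~/.venvs/torch-rocm/bin/activate

# Pick the index matching your system's ROCm version (rocm6.2 here
# is an example, not a recommendation).
pip install torch --index-url https://download.pytorch.org/whl/rocm6.2

# Sanity check: the ROCm build reports a HIP version, and the GPU
# shows up through the (CUDA-named) device API.
python -c 'import torch; print(torch.version.hip, torch.cuda.is_available())'
```

Note that ROCm builds of PyTorch reuse the `torch.cuda` namespace, so most CUDA-targeted code runs unchanged — which also explains why the dev/prod environment mismatch is usually tolerable.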
I wouldn't say it's perfect, but it's definitely not as bad as it used to be. The biggest downside is the environment mismatch when you develop on this machine but run the models on NVIDIA hardware in prod.