I've been dabbling with local ML projects and trying to get them running with ROCm on my Radeon 7900 XTX. The existing ways to run projects like Llama.cpp or Automatic1111 are all a bit hacky, so I made a repo documenting how to run them in containers.
https://github.com/Krisseck/ROCm-Docker-Scripts
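For anyone curious what the container side of this looks like, here is a minimal sketch of the device passthrough a ROCm container typically needs (the image tag and the `rocminfo` check are my own illustration, not taken from the repo):

```shell
# Expose the AMD GPU to the container: /dev/kfd is the ROCm compute
# interface and /dev/dri holds the render nodes for the GPUs.
# --group-add video grants the container user access to those devices.
docker run -it --rm \
  --device=/dev/kfd \
  --device=/dev/dri \
  --group-add video \
  --security-opt seccomp=unconfined \
  rocm/dev-ubuntu-22.04 \
  rocminfo
```

If `rocminfo` lists your 7900 XTX (gfx1100) inside the container, the same device flags should work for a Llama.cpp or Automatic1111 image.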
It still needs more documentation and more projects, so all contributions are welcome!