
261 points | david927 | 1 comment

What are you working on? Any new ideas that you're thinking about?
1. Risse | No.43156834
I've been dabbling with local ML projects and trying to get them running with ROCm on my Radeon 7900 XTX. The existing ways to run, for example, Llama.cpp or Automatic1111 this way are all a bit hacky, so I made a repo where I document how to run them in containers.

https://github.com/Krisseck/ROCm-Docker-Scripts
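To give a flavour of what "running in containers" involves, here is a minimal sketch (not the repo's actual script) of launching a ROCm-enabled Llama.cpp server container. The image name "llama-rocm" is a placeholder and I'm assuming its entrypoint is the llama.cpp server; the device and group flags are the ones ROCm containers generally need to see the GPU.

    # Sketch: run a ROCm-enabled llama.cpp server in a container.
    # "llama-rocm" is a placeholder image name; adjust paths and ports.
    # ROCm needs the compute interface (/dev/kfd) and the render nodes
    # (/dev/dri) passed through, plus membership in the video group.
    docker run --rm -it \
      --device=/dev/kfd \
      --device=/dev/dri \
      --group-add video \
      --security-opt seccomp=unconfined \
      -v "$PWD/models:/models" \
      -p 8080:8080 \
      llama-rocm \
      -m /models/model.gguf --host 0.0.0.0 --port 8080

The scripts in the repo wrap this kind of invocation (and the corresponding image builds) so you don't have to remember the device passthrough incantation every time.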

Needs more documentation and more projects, but all contributions are welcome!