
141 points zdw | 1 comments | | HN request time: 0.2s | source
andrewstuart ◴[] No.45665124[source]
Despite this APU being deeply interesting to people who want to do local AI, anecdotally I hear that it’s hard to get models to run on it.

Why has AMD not focused everything it possibly has on demonstrating, documenting, fixing, and smoothing the path for AI on its systems?

Why does AMD come across as so generally clueless when it comes to giving developers what they want, compared to Nvidia?

AMD should do whatever it takes to avoid these sorts of situations:

https://youtu.be/cF4fx4T3Voc?si=wVmYmWVIya4DQ8Ut

1. drcongo ◴[] No.45667229[source]
I don't know why you're getting downvoted on this; my experience matches it. I have an Evo-X2, which has Strix Halo, and ROCm still doesn't officially support it. Support is supposedly coming in 7.0.2, which can be installed as a preview at the moment, but people are still getting regular, random GPU Hang errors with it. I'm running Arch, and I've had to write a bunch of tasks in a mise.toml so that I don't forget the long list of environment variables needed to override various ROCm settings, and the even longer list of arcane incantations required to update ROCm and PyTorch to versions that actually almost work with each other.
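For anyone curious what that mise.toml looks like, a minimal sketch might resemble the following. `HSA_OVERRIDE_GFX_VERSION` is a real ROCm environment variable for spoofing the GPU architecture, but the specific value shown (targeting Strix Halo's gfx1151) and the task names are assumptions for illustration, not the poster's actual config:

```toml
# Hypothetical mise.toml sketch — variable values and tasks are illustrative.

[env]
# Tell ROCm to treat the APU as a supported gfx target; the exact
# version string needed for Strix Halo (gfx1151) may differ per ROCm release.
HSA_OVERRIDE_GFX_VERSION = "11.5.1"
# Restrict ROCm to the APU if other devices are present (assumed device index).
HIP_VISIBLE_DEVICES = "0"

[tasks.update-rocm]
# Placeholder for the "arcane incantations" to pin matching ROCm/PyTorch versions.
run = "echo 'pin ROCm and PyTorch to known-compatible versions here'"
```

With mise, the `[env]` block is applied automatically when entering the directory, so the overrides don't have to be remembered or retyped per shell.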