
141 points zdw | 2 comments
andrewstuart No.45665124
Despite this APU being deeply interesting to people who want to do local AI, anecdotally I hear that it’s hard to get models to run on it.

Why would AMD not focus everything it possibly has on demonstrating, documenting, fixing, and smoothing the path for AI on their systems?

Why does AMD come across as so generally clueless when it comes to giving developers what they want, compared to Nvidia?

AMD should do whatever it takes to avoid these sorts of situations:

https://youtu.be/cF4fx4T3Voc?si=wVmYmWVIya4DQ8Ut

1. typpilol No.45665138
Any idea what makes models hard to run on it?

Is it just general compatibility issues between Nvidia and AMD, since most of this stuff was built for Nvidia originally?

Or do you mean something else?

2. cakealert No.45665453
It's not the models, it's the tooling. Models are just weights and an architecture spec. The tooling is how to load and execute the model on hardware.
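The split described here — inert weights plus an architecture spec on one side, a runtime that executes them on the other — can be sketched in a few lines. This is a toy illustration, not any real framework's API:

```python
import numpy as np

# The "model": just weights and an architecture spec. Nothing here runs by itself.
weights = {
    "w1": np.array([[1.0, 0.0], [0.0, 1.0]]),  # 2x2 linear layer (identity)
    "w2": np.array([0.5, 0.5]),                # output projection
}
arch_spec = ["linear:w1", "relu", "linear:w2"]

# The "tooling": a runtime that knows how to map the spec onto hardware.
# This is the part that actually differs between Nvidia and AMD stacks.
def run(spec, weights, x):
    for op in spec:
        if op.startswith("linear:"):
            x = x @ weights[op.split(":")[1]]
        elif op == "relu":
            x = np.maximum(x, 0.0)
    return x

y = run(arch_spec, weights, np.array([2.0, -3.0]))
print(y)  # 1.0
```

The same weights file is portable across vendors; it is the `run` layer (CUDA kernels, ROCm kernels, Vulkan, CPU fallbacks) that has to be ported and tuned per platform.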

Some UX-oriented tooling has largely solved this problem and will run on AMD: LM Studio.
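One reason LM Studio papers over the backend differences is that it exposes an OpenAI-compatible HTTP server (by default on localhost:1234), so client code is identical whether the weights execute on Nvidia, AMD, or CPU. A minimal client sketch using only the standard library — the port and the `local-model` name are assumptions about a particular local setup:

```python
import json
import urllib.request

# Assumed local setup: LM Studio's server running on its default port.
BASE_URL = "http://localhost:1234/v1"

def build_payload(prompt, model="local-model"):
    """Build an OpenAI-style chat completion request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def chat(prompt):
    """POST the request to the local server and return the reply text."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(build_payload(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches the OpenAI API, the same client works against any local server that speaks it, regardless of which GPU vendor's stack is underneath.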