
AMD GPU Debugger

(thegeeko.me)
276 points by ibobev | 2 comments
whalesalad No.46195861
Tangent: is anyone using a 7900 XTX for local inference/diffusion? I finally installed Linux on my gaming PC, and about 95% of the time it just sits off collecting dust. I would love to put this card to work in some capacity.
1. jjmarr No.46197542
I've been using it for a few years on Gentoo. There were challenges with Python two years ago, but over the past year it has stabilized, and I can even do img2video, which is the most difficult local inference task I've tried so far.

Performance-wise, the 7900 XTX is still the most cost-effective way of getting 24 GB of VRAM that isn't a sketchy VRAM mod. And VRAM is the main performance bottleneck, since any LLM is going to barely fit in memory.
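To make the "barely fits in memory" point concrete, here's a back-of-envelope sketch (not from the thread; the 20% overhead figure for KV cache and activations is a rough assumption) of how much VRAM a model's weights need at a given quantization level:

```python
def vram_gb(params_billion: float, bits_per_param: int, overhead: float = 0.2) -> float:
    """Rough VRAM estimate: weight bytes plus an assumed ~20% overhead
    for KV cache and activations (a coarse rule of thumb, not exact)."""
    weight_bytes = params_billion * 1e9 * bits_per_param / 8
    return weight_bytes * (1 + overhead) / 1e9

# A 30B-parameter model at 4-bit quantization:
print(round(vram_gb(30, 4), 1))  # ~18 GB, squeezes into a 24 GB card
```

By this estimate, 4-bit quantization is roughly the point where 30B-class models fit in 24 GB, which is why the VRAM ceiling, not raw compute, tends to decide what you can run locally.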

I highly suggest checking out TheRock. There's been a big rearchitecting of ROCm to improve UX and quality.

2. androiddrew No.46200978
I bought a Radeon R9700. It has 32 GB of VRAM and does a good job.