183 points by spacebanana7

I appreciate that developing ROCm into something competitive with CUDA would require a lot of work, both internally within AMD and through external contributions to the relevant open source libraries.

However, the amount of resources at stake is incredible. The delta between NVIDIA's market value and AMD's is bigger than the annual GDP of Spain. Even if they needed to hire a few thousand engineers at a few million dollars in comp each, it'd still be a good investment.
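
(For rough scale, my own back-of-envelope arithmetic: Spain's GDP is on the order of $1.4T, so even 3,000 engineers at $2M/yr in comp comes to ~$6B/yr - well under 1% of a gap that size.)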

fancyfredbot ◴[] No.43547461[source]
There is more than one way to answer this.

They have made an alternative to the CUDA language with HIP, which can do most of the things the CUDA language can.
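
As a minimal sketch of how close that gets (my illustration - the kernel and sizes are made up, but hipMalloc and the <<<>>> launch syntax are the real HIP runtime API): swap the hip prefix for cuda and this is essentially a valid CUDA program.

    #include <hip/hip_runtime.h>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // same built-ins as CUDA
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr, *y = nullptr;
        hipMalloc((void**)&x, n * sizeof(float));  // cudaMalloc in CUDA
        hipMalloc((void**)&y, n * sizeof(float));
        // ... fill x/y with hipMemcpy, exactly as you would with cudaMemcpy ...
        saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);  // identical launch syntax
        hipDeviceSynchronize();
        hipFree(x);
        hipFree(y);
        return 0;
    }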

You could say that they haven't released supporting libraries like cuDNN, but they are making progress on this with AiTer, for example.

You could say that they have fragmented their efforts across too many different paradigms, but I don't think this is it, because Nvidia also support a lot of different programming models.

I think the reason is that they have not prioritised support for ROCm across all of their products. There are too many different architectures with varying levels of support. This isn't just historical: there is no ROCm support for their latest AI Max 395 APU. There is no nice cross-architecture ISA like PTX. The drivers are buggy. It's just all a pain to use. And for that reason "the community" doesn't really want to use it, and so it's a second-class citizen.
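
To make the PTX point concrete, a sketch (my example, assuming the standard HIP runtime API): hipcc bakes a separate code object per gfx target (e.g. --offload-arch=gfx90a --offload-arch=gfx1100), and a binary simply won't run on an architecture it wasn't compiled for, whereas Nvidia's driver can JIT embedded PTX for GPUs that didn't exist when the binary shipped.

    #include <hip/hip_runtime.h>
    #include <cstdio>

    int main() {
        hipDeviceProp_t prop;
        if (hipGetDeviceProperties(&prop, 0) == hipSuccess) {
            // Prints the exact ISA target (e.g. "gfx1100") the loaded code
            // object must have been compiled for - there is no PTX-style
            // portable intermediate to fall back on.
            printf("device 0 ISA: %s\n", prop.gcnArchName);
        }
        return 0;
    }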

This is a management and leadership problem. They need to make using their hardware easy. They need to support all of their hardware. They need to fix their driver bugs.

replies(6): >>43547568 #>>43547675 #>>43547799 #>>43547827 #>>43549724 #>>43558036 #
trod1234 ◴[] No.43547675[source]
It is a little bit more complicated than ROCm simply not having support: ROCm has at points claimed support, and they've had to walk it back painfully (multiple times). It's not a driver issue, nor a hardware issue on their side.

There has been a long-standing issue between AMD and its mainboard manufacturers. The issue has to do with features required for ROCm, namely PCIe Atomics. AMD has been unable or unwilling to hold the mainboard manufacturers to account for advertising features the mainboard does not support.

The CPU itself must support this feature, but the mainboard must as well (in firmware).

One of the reasons why ROCm hasn't worked in the past is that the mainboard manufacturers have claimed and advertised support for PCIe Atomics, that claimed support has been shown to be false, and the software fails in non-deterministic ways when tested. This is nightmare fuel for the few AMD engineers tasked with ROCm.

PCIe Atomics requires non-translated direct IO to operate correctly, but in order to support the same CPU models across multiple generations, the mainboard manufacturers have translated these IO lines in firmware.

This has left most people who query their system seeing PCIe Atomics reported as supported, while actual tests that rely on that support fail in chaotic ways. There is no technical specification or advertising from the mainboard manufacturers showing whether this works. Even boards with multiple x16 slots and the many related technologies and brandings (Crossfire/SLI/mGPU) don't necessarily indicate whether PCIe Atomics is properly supported.

In other words, the CPU is supported but the firmware/mainboard fails, with no way to differentiate between the two at the upper layers of abstraction.
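
A sketch of what that failure mode looks like in practice (my illustration; I'm assuming HIP's hipDeviceAttributeHostNativeAtomicSupported attribute, mirroring CUDA's equivalent - on a broken board the claimed capability reads fine while the actual atomic traffic over PCIe returns garbage or hangs):

    #include <hip/hip_runtime.h>
    #include <cstdio>

    // Atomically bump a counter that lives in *host* memory, so on a
    // discrete GPU the atomic has to cross the PCIe bus.
    __global__ void bump(unsigned int* counter) {
        atomicAdd(counter, 1u);
    }

    int main() {
        // What the platform *claims* (assumed attribute name, mirroring
        // CUDA's cudaDevAttrHostNativeAtomicSupported).
        int claimed = 0;
        hipDeviceGetAttribute(&claimed, hipDeviceAttributeHostNativeAtomicSupported, 0);
        printf("host-native atomics claimed: %d\n", claimed);

        // What the platform actually *does*: pinned, device-visible host memory.
        unsigned int* counter = nullptr;
        hipHostMalloc((void**)&counter, sizeof(unsigned int), hipHostMallocDefault);
        *counter = 0;

        unsigned int* dev = nullptr;
        hipHostGetDevicePointer((void**)&dev, counter, 0);

        bump<<<256, 256>>>(dev);  // 65536 atomic increments over the bus
        hipDeviceSynchronize();

        printf("expected 65536, got %u -> %s\n", *counter,
               (*counter == 65536u) ? "atomics look OK" : "claimed support is false");
        hipHostFree(counter);
        return 0;
    }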

All in all, you shouldn't be blaming AMD for this. You should be blaming the three mainboard manufacturers who chose to do this. Some of these manufacturers have upper-end boards where they actually did do this right; they just chose not to for any current-gen mainboard costing less than ~$300-500.

replies(5): >>43547751 #>>43547777 #>>43547796 #>>43549200 #>>43549341 #
spacebanana7 ◴[] No.43547751[source]
How does NVIDIA manage this issue? I wonder whether they have a very different supply chain or just design software that puts less trust in the reliability of those advertised features.
replies(2): >>43547816 #>>43549279 #
bigyabai ◴[] No.43549279[source]
I should point out here, if nobody has already: Nvidia's GPU designs are extremely complicated compared to what AMD and Apple ship. The "standard" is to ship a PCIe card with display-handling drivers and some streaming-multiprocessor hardware to process your framebuffers. Nvidia goes even further by adding additional accelerators (ALUs by way of CUDA cores and tensor cores), onboard RTOS management hardware (what Nvidia calls the GPU System Processor), and more complex userland drivers that very well might be able to manage atomics without relying on any PCIe standards.

This is also one of the reasons AMD and Apple can't simply turn their ship around right now. They've both invested heavily in simplifying their GPU and removing a lot of the creature-comforts people pay Nvidia for. 10 years ago we could at least all standardize on OpenCL, but these days it's all about proprietary frameworks and throwing competitors under the bus.

replies(1): >>43550634 #
kimixa ◴[] No.43550634[source]
FYI AMD also has similar "accelerators": the 9070 has separate ALU paths for wmma ("tensor") operations, much like Nvidia's model - older RDNA2/3 architectures had accelerated instructions but used the "normal" shader ALUs, if a bit beefed up and tweaked to support multiple smaller data types. And CUDA cores are just what Nvidia call their normal shader cores. Pretty much every subunit on a GeForce has a direct equivalent on a Radeon - they might be faster/slower or more/less capable, but they're there, and often at an extremely similar level of the design.
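
For the curious, AMD's rocWMMA library exposes those paths through an API deliberately shaped like nvcuda::wmma - a rough sketch of one 16x16x16 fp16 tile (fragment layouts and names per my reading of rocWMMA, so treat the details as approximate):

    #include <rocwmma/rocwmma.hpp>
    using rocwmma::float16_t;

    // One 16x16x16 tile of A*B accumulated in fp32, lowered onto the
    // dedicated wmma ALU paths on RDNA3 (or the matrix cores on CDNA).
    __global__ void wmma_tile(const float16_t* a, const float16_t* b, float* d) {
        rocwmma::fragment<rocwmma::matrix_a, 16, 16, 16, float16_t, rocwmma::row_major> fa;
        rocwmma::fragment<rocwmma::matrix_b, 16, 16, 16, float16_t, rocwmma::col_major> fb;
        rocwmma::fragment<rocwmma::accumulator, 16, 16, 16, float> acc;

        rocwmma::fill_fragment(acc, 0.0f);
        rocwmma::load_matrix_sync(fa, a, 16);  // leading dimension 16
        rocwmma::load_matrix_sync(fb, b, 16);
        rocwmma::mma_sync(acc, fa, fb, acc);
        rocwmma::store_matrix_sync(d, acc, 16, rocwmma::mem_row_major);
    }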

AMD also have on-die microcontrollers (multiple, actually) that do things like scheduling or pipeline management, again just like Nvidia's GSP. Their GPUs have been able to schedule new work on-GPU with zero host-system involvement since the original GCN, something that Nvidia advertise as "new" with the introduction of their GSP (which itself just replaced a slightly older, slightly less capable controller rather than being /completely/ new).

The problem is that AMD are a software follower right now - after decades of under-investment they're behind on the treadmill just trying to keep up, so when the Next Big Thing inevitably pops up they're still busy polishing off the Last Big Thing.

I've always seen AMD as a hardware company with a "build it and they will come" approach - which seems to have worked for the big supercomputers, whose operators likely find it worth investing in their own modified stack to get those last few %, but it clearly falls down when selling to "mere" professionals. Nvidia, however, support the same software APIs on even the lowest-end hardware: while nobody is likely running much on their laptop's 3050m in anger, it offers a super easy on-ramp for developers. And it's easy to mistake familiarity with superiority - you already know to avoid the warts, so you don't get burned by them. And believe me, CUDA has plenty of warts.

And marketing - remember, "Technical Marketing" is still marketing. To this day lots of people believe that the marketing name for something, or the branding of a feature, implies something about the underlying architecture design. Go to an "enthusiast" forum and you'll easily find people claiming that because Nvidia call their accelerator a "core", it's somehow superior/better/"more accelerated" than the direct equivalent on a competitor, or actually believing that AMD hardware doesn't support hardware video encoding because it "Doesn't Have NVENC" (again, GCN with video encoding was released before a GeForce with NVENC). Same with branding - AMD hardware can already read the display block's state and timestamp in-shader, but Everyone Knows Nvidia Introduced "Flip Metering" With Blackwell!