212 points by pella | 51 comments
1. btown ◴[] No.42748940[source]
I've often thought that one of the places AMD could distinguish itself from NVIDIA is bringing significantly higher amounts of VRAM (or memory systems that are as performant as what we currently know as VRAM) to the consumer space.

A card with a fraction of the FLOPS of cutting-edge graphics cards (and ideally proportionally less power consumption), but with 64-128GB VRAM-equivalent, would be a gamechanger for letting people experiment with large multi-modal models, and seriously incentivize researchers to build the next generation of tensor abstraction libraries for both CUDA and ROCm/HIP. And for gaming, you could break new grounds on high-resolution textures. AMD would be back in the game.

Of course, if it's not real VRAM, it needs to be at least somewhat close on the latency and bandwidth front, so let's pop on over and see what's happening in this article...

> An Infinity Cache hit has a load-to-use latency of over 140 ns. Even DRAM on the AMD Ryzen 9 7950X3D shows less latency. Missing Infinity Cache of course drives latency up even higher, to a staggering 227 ns. HBM stands for High Bandwidth Memory, not low latency memory, and it shows.

Welp. Guess my wish isn't coming true today.

replies(10): >>42749016 #>>42749039 #>>42749048 #>>42749096 #>>42749201 #>>42749629 #>>42749785 #>>42749805 #>>42752432 #>>42752946 #
2. formerly_proven ◴[] No.42749016[source]
Totally normal latencies for a GPU though.
3. pkroll ◴[] No.42749039[source]
You're not the only one thinking that: https://www.nvidia.com/en-us/project-digits/

128G of unified memory. $3K. Throw ollama and ComfyUI on that sucker and things could get interesting. The question is how much slower than a 5090 this is gonna be. The memory bandwidth isn't going to match a 512-bit bus.

replies(4): >>42749113 #>>42750477 #>>42750999 #>>42756776 #
4. mpercival531 ◴[] No.42749048[source]
They are. Strix Halo is going after the same space as the Apple M4 Pro/Max, where Apple is currently unchallenged. Pairing it with two 64GB LPCAMM2 modules will get you there.

Edit: The problem with AMD is less the hardware offerings and more that their compute software stack has historically hand-waved consumer GPU support or been very slow to deliver it, even more so with their APUs. Maybe the advent of MI300A will change the equation, maybe not.

replies(2): >>42749929 #>>42752317 #
5. Fade_Dance ◴[] No.42749096[source]
Assuming we are comparing chips that use the latest-generation, high-density memory modules, a wider bus is required for larger memory capacities, and that is expensive in silicon area. Therefore, if AMD is willing to boost memory capacity as a competitive advantage, they may as well consider spending that die space on more logic instead. It's a set of trade-offs and an optimization problem to some degree.

That said, when an incumbent has a leadership advantage, one of the obvious ways to boost profit is to slash the memory bus width, and then a competitor can come in and bring it up a bit and have a competitive offering. The industry has certainly seen this pattern many times. But as far as AMD coming in and using gigantic memory counts as a competitive advantage? You have to keep in mind the die space constraints.

Well over a decade ago - I think it was R600 - AMD did take this approach, and it was fairly disastrous because the logic performance of the chip wasn't good enough while the die was too big and hot and yields were too low. They didn't strike the right balance and sacrificed too much for a 512-bit memory bus.

AMD has also tried to sidestep some of these limitations with HBM back when it was new technology, but that didn't work out for them either. They actually would have been better off just increasing bus width and continuing to use the most optimized and cost efficient commodity memory chips in that case.

Data center and such may have a bit more freedom for innovation but the consumer space is definitely stuck on the paradigm of GPU plus nearby mem chips, and going outside of that fence is a huge latency hit.

replies(2): >>42749451 #>>42752086 #
6. lostmsu ◴[] No.42749113[source]
AFAIK this uses even slower memory.
replies(1): >>42749985 #
7. enragedcacti ◴[] No.42749201[source]
> Of course, if it's not real VRAM, it needs to be at least somewhat close on the latency and bandwidth front

It is close to VRAM*, just not close to DRAM on a conventionally designed CPU. This thing is effectively just a GPU that fits in a CPU slot and has CPU cores bolted to the side. This approach has the downside of worse CPU performance and the upsides of orders of magnitude faster CPU<->GPU communication, simpler programming since coherency is handled for you, and access to substantial amounts of high bandwidth memory (up to 512GB with 4 MI300As).

* https://chipsandcheese.com/p/microbenchmarking-nvidias-rtx-4...

replies(1): >>42751139 #
8. amluto ◴[] No.42749451[source]
> a wider bus width is required for larger memory counts, which is expensive when it comes to silicon area

I find this constraint to be rather odd. An extra, say, three address bits would add very little space (or latency in a serial protocol) to a memory bus, and the actual problem seems to be that the current generation of memory chips are intended for point-to-point connection.

It seems to me that, if the memory vendors aren’t building physically larger, higher capacity chips, then any of the major players (AMD, Nvidia, Intel, whoever else is in this field right now) could kludge around it with a multiplexer. A multiplexer would need to be somewhat large, but its job would be simple enough that it should be doable with an older, cheaper process and without using entirely unreasonable amounts of power.

So my assumption is this is mostly an economic issue. The vendors don’t think it’s worthwhile to do this.

replies(2): >>42749962 #>>42751635 #
9. 0934u934y9g ◴[] No.42749629[source]
The problem with only providing VRAM is that some AI workloads, like real-time audio processing, underperform significantly because the card doesn't have the equivalent of tensor cores to keep up. There are LLMs that won't run for the same reason. You will have more than enough VRAM but not enough tensor cores. AMD isn't able to compete.
10. therealpygon ◴[] No.42749785[source]
I wholeheartedly agree. Nvidia is intentionally suppressing the amount of memory on their consumer GPUs to prevent data centers from using consumer cards rather than their far more expensive counterparts. The fact that they used to offer the 3060 with 12GB but have now pushed pricing higher and limited many cards to 8GB is a testament to that. I don't need giga-TOPS with 8-16GB of memory; I'd be perfectly happy with half that speed but with 64GB of memory or more. Even slower memory would be fine. I don't need 1000t/s, but being able to load a reasonably intelligent model even at 50t/s would be great.
replies(1): >>42749897 #
11. SecretDreams ◴[] No.42749805[source]
If, by the grace of tech Jesus, AMD gave us such systems at volumes Nvidia would notice, Nvidia would simply do the same but with a better ecosystem.

The biggest problem for AMD is not that the majority of people want to use AMD. It is that the majority of people want AMD to be more competitive so that Nvidia will be forced to drop prices so that people can afford Nvidia products.

Until this pattern changes, AMD has a big uphill battle. Same for Intel, except Intel is at least seemingly delivering great gen-on-gen improvements in mid/low-range consumer GPUs and bringing healthy VRAM along for the ride.

replies(3): >>42751294 #>>42751410 #>>42752404 #
12. lhl ◴[] No.42749897[source]
Getting to 50 tok/s for a big model requires not just memory, but also memory bandwidth. Currently, 1TB/s of MBW will get a 70B Q4 (~40GB) model to about 20-25 tok/s. The good thing is models continue to get smarter - today's 20-30B models beat out last year's 70B models on most tasks, and the biggest open models like DeepSeek-v3 might have lots of weights but actually have a relatively reasonable number of activations per pass.

You can test out that "half the speed but with 64GB or more of memory" scenario with the latest Macs, AMD Strix Halo, or the upcoming Nvidia Digits, though. I suspect by the middle of the year there will be a bunch of options in the ~$3K range. Personally, I think I'd rather go for 2 x 5090s for 64GB of memory at 1.7TB/s than 96 or 128GB w/ only 250GB/s of MBW.
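
As a rough sanity check on where those numbers come from, here's a back-of-envelope sketch (my own; the bandwidth-bound assumption and the efficiency factor are guesses, not benchmarks):

    # Decode throughput estimate for a memory-bandwidth-bound LLM:
    # assume every generated token streams all active weights from memory once.
    def est_tok_per_s(active_params_b, bytes_per_param, mem_bw_gb_s, efficiency=0.8):
        # active_params_b: billions of params read per token (all of them for a
        #   dense model, only the activated experts for an MoE like DeepSeek-v3)
        # bytes_per_param: ~0.57 for Q4-ish quants, 2.0 for FP16
        # efficiency: fraction of peak bandwidth actually achieved (assumed)
        bytes_per_token_gb = active_params_b * bytes_per_param
        return mem_bw_gb_s * efficiency / bytes_per_token_gb

    print(est_tok_per_s(70, 0.57, 1000))  # ~20 tok/s: 70B Q4 at 1 TB/s
    print(est_tok_per_s(70, 0.57, 250))   # ~5 tok/s: same model at 250 GB/s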

replies(1): >>42749979 #
13. lhl ◴[] No.42749929[source]
I don't know of any non-soldered memory Strix Halo devices, but both HP and Asus have announced 128GB SKUs (availability unknown).

For LLM inference, basically everything works w/ ROCm on RDNA3 now (well, Flash Attention is via Triton and doesn't have support for SWA and some other stuff; also I mostly test on Linux, although I did check that the new WSL2 support works). I've tested some older APUs w/ basic benchmarking as well. Notes here for those interested: https://llm-tracker.info/howto/AMD-GPUs

replies(1): >>42750062 #
14. sroussey ◴[] No.42749962{3}[source]
The bus widths they are talking about are multiples of 128. I think Apple M-series chips are good examples. They go from 128 to 256 to 512 bits, which happens to correspond roughly to the gigabytes per second of bandwidth.
15. sroussey ◴[] No.42749979{3}[source]
A Mac with that memory will have closer to 500GB/s but your point still stands.

That said, if you just want to play around, having more memory will let you do more interesting things. I’d rather have that option over speed since I won’t be doing production inference serving on my laptop.

replies(1): >>42750021 #
16. sroussey ◴[] No.42749985{3}[source]
And a fraction of the 5090 cores.
17. lhl ◴[] No.42750021{4}[source]
Yeah, the M4 Max actually has pretty decent MBW - 546 GB/s (cheapest config is $4.7K on a 14" MBP atm, but maybe there will be a Mac Studio at some point). The big weakness for the Mac is actually the lack of TFLOPS on the GPU - the beefiest maxes out at ~34 FP16 TFLOPS. It makes a lot of use cases super painful, since prefill/prompt processing can take minutes before token generation starts.
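
For a sense of scale, a rough prefill estimate (my own sketch; the 2 * params * tokens FLOPs rule of thumb and the utilization figure are assumptions):

    # Prompt processing (prefill) is compute-bound: a dense transformer's forward
    # pass costs roughly 2 * params * prompt_tokens FLOPs.
    def prefill_seconds(params_b, prompt_tokens, fp16_tflops, utilization=0.5):
        flops = 2 * params_b * 1e9 * prompt_tokens
        return flops / (fp16_tflops * 1e12 * utilization)

    print(prefill_seconds(70, 8192, 34))   # ~67 s to first token at ~34 FP16 TFLOPS
    print(prefill_seconds(70, 32768, 34))  # ~270 s (4.5 min) for a 32k prompt
    print(prefill_seconds(70, 8192, 340))  # ~7 s with 10x the compute (assumed figure)
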
18. UncleOxidant ◴[] No.42750062{3}[source]
Thanks for that link. I'm interested in either getting the HP Mini Z1 G1a or an NVidia Digits for LLM experimentation. The obvious advantage for the Digits is that the CUDA ecosystem is much more tried and true for that kind of thing. The disadvantages are that it would be harder to use as a replacement for my current PC, that it's going to run an already-old version of Ubuntu (22.04), and that you're dependent on Nvidia for updates.
replies(2): >>42750176 #>>42750988 #
19. lhl ◴[] No.42750176{4}[source]
Yeah, I think anyone w/ old Jetsons knows what it's like to be left high and dry by Nvidia's embedded software support. Older models are basically just ewaste. Since the Digits won't be out until May, I guess there's enough time to wait and see - at least to get a sense of what the actual specs are. I have a feeling the FP16 TFLOPS and the MBW are going to be much lower than what people have been hyping themselves up for.

Sadly, my feeling is that the big Strix Halo SKUs (which have no scheduled release dates) aren't going to be competitively priced (they're likely to be at a big FLOPS/real-world performance disadvantage, and there's still the PITA factor), but there is something appealing about the do-it-all aspect of it.

replies(1): >>42751178 #
20. manojlds ◴[] No.42750477[source]
It's LPDDR5.
replies(1): >>42752524 #
21. KeplerBoy ◴[] No.42750988{4}[source]
Who said anything about Ubuntu 22.04? I mean, sure, that's the newest release the current JetPack comes with, but I'd be surprised if they shipped Digits with that.
replies(1): >>42751155 #
22. KeplerBoy ◴[] No.42750999[source]
It's going to be waaay slower than a 5090. We're looking at something like 60W TDP for the entire system vs 600W for a 5090 GPU.

It's going to be very energy efficient, it will get plenty of flops, but they won't be able to cheat physics.

23. rbanffy ◴[] No.42751139[source]
I was curious because given the latencies between the CCXs, the number of NUMA domains seems small.
24. rbanffy ◴[] No.42751155{5}[source]
Doesn’t DGX OS use the latest LTS version? Current should be 24.04.
replies(1): >>42751395 #
25. rbanffy ◴[] No.42751178{5}[source]
DIGITS looks like a serious attempt, but they don't have too much of an incentive to have people developing for older hardware. I wouldn't expect them to support it for more than five years. At least the underlying Ubuntu will last more than that and provide a viable work environment far beyond the time it gets really boring.
replies(1): >>42751801 #
26. holoduke ◴[] No.42751294[source]
It can change quickly. A great example is the brief dominance of the ATI Radeon 9700, which crushed Nvidia for a while.
27. KeplerBoy ◴[] No.42751395{6}[source]
I wouldn't know. I only work with workstation or Jetson stuff.

The DGX documentation and downloads aren't public afaik.

Edit: Never mind, some information about DGX is public and they really are on 22.04, but oh well, the deep learning stack is guaranteed to run.

https://docs.nvidia.com/base-os/too

28. llm_trw ◴[] No.42751410[source]
The same could be said for CPUs from Intel and AMD 5 years ago. Now people, myself included, buy AMD because it is simply the better choice.
replies(1): >>42752517 #
29. formerly_proven ◴[] No.42751635{3}[source]
GDDR has been point-to-point since... I dunno, probably 2000? Because, ceteris paribus, you can't really have an actual bus when you chase maximum bandwidth. Even the double-sided layouts (like T-layout, with <2mm stubs) typically incur a reduction in data rate. These also dissipate a fair amount of heat; you're looking at around 5-8 W per chip (~6 pJ/bit), so it's not like you can just stack a bunch of those dies.

> A multiplexer would need to be somewhat large, but its job would be simple enough that it should be doable with an older, cheaper process and without using entirely unreasonable amounts of power.

I don't know what you're basing that on. We're talking about 32 Gbps serdes here. Yes, there's multiplexers even for that. But what good is deciding which memory chip you want to use on boot-up?

replies(1): >>42752242 #
30. UncleOxidant ◴[] No.42751801{6}[source]
If only they could get their changes upstreamed to Ubuntu (and possible kernel mods upstreamed), then we wouldn't have to worry about it.
replies(1): >>42751874 #
31. rbanffy ◴[] No.42751874{7}[source]
Getting their kernel mods upstreamed is very unlikely, but they might provide just enough you can build a new kernel with the same major version number.
32. Dylan16807 ◴[] No.42752086[source]
> a wider bus width is required for larger memory counts

Most video cards wire up 32 data pins to each memory chip. But GDDR chips already have full support for running 16 pins to each chip. And DDR commonly goes down to 4 data pins per chip.

The latest GDDR7 chips are 24Gbit, and at 16 bits each you could fit 48GB onto a nice easy 256-bit bus, with a speed of at least 1TB/s. If you use 384 bits and/or run 8 bits to each chip, you can cram in so many chips that it becomes a matter of physically fitting everything.
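
The arithmetic behind that, as a quick sketch (the 32 Gbps per-pin rate is an assumed figure for early GDDR7 parts):

    # Capacity and bandwidth from GDDR7 chip count and per-chip bus wiring.
    CHIP_GBIT = 24          # 24 Gbit = 3 GB per chip
    PIN_RATE_GBPS = 32      # per-pin data rate in Gbit/s (assumed)

    def config(bus_bits, bits_per_chip):
        chips = bus_bits // bits_per_chip
        capacity_gb = chips * CHIP_GBIT / 8
        bandwidth_gb_s = bus_bits * PIN_RATE_GBPS / 8  # independent of chip count
        return chips, capacity_gb, bandwidth_gb_s

    print(config(256, 16))  # 16 chips, 48 GB, 1024 GB/s
    print(config(384, 8))   # 48 chips, 144 GB, 1536 GB/s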

33. amluto ◴[] No.42752242{4}[source]
Not multiplexed on boot, multiplexed at run time. Build a chip that speaks the GDDR protocol to the host GPU, has 2-4 GDDR channels coming out the other end, and aggregates the attached memory, at the cost of an extra chip, some latency, and some power. As far as the GPU is concerned, it's an extra-large GDDR chip, and it would allow a GPU vendor to squeeze in more RAM without adding more pins to the GPU or needing to route more memory channels directly to it.

(Compare to something like Apple’s designs or “Project Digits”. Current- and next-gen GPUs have considerably higher memory bandwidth but considerably less memory capacity. Mostly my point is that I think Nvidia or AMD could make a desktop-style GPU with 2-4x the RAM, somewhat worse latency, but otherwise equivalent performance without needing Samsung or another vendor to build higher capacity GDDR chips than currently exist.)

34. Dylan16807 ◴[] No.42752317[source]
> Pairing it with two 64GB LPCAMM2 modules will get you there.

It gets you closer for sure. But while ~250GB/s is a whole lot better than SO-DIMMs at ~100GB/s, the new mid-tier GPUs are probably more like 640-900GB/s.

35. AnthonyMouse ◴[] No.42752404[source]
> If, by the grace of tech Jesus, amd gave us such systems at volumes Nvidia would notice, Nvidia would simply then do the same but with a better ecosystem.

Not if they have "a better ecosystem" -- they would continue to charge a premium for that.

Which creates a dilemma for Nvidia. If they matched AMD's pricing, they'd be losing all the money they could get by charging more, which is a ton. Whereas if they charge more, they get more today from the people who pay the premium, but some people are more price sensitive than others, so there are still a lot of people who would buy "lots of VRAM for less money" from AMD. And soon AMD has a lot of users, improves their software support, and the difference disappears entirely.

Forcing the larger competitor into that dilemma is very much to the advantage of the smaller competitor.

36. Aurornis ◴[] No.42752432[source]
> AMD would be back in the game.

The market for prosumer cards with high VRAM and low FLOPS would be negligibly small. The data center market is massive on one end and the gaming market is big on the other. Casual consumers who just want a lot of VRAM are such a small minority of people that it doesn’t matter to the bottom line.

It also wouldn't be financially advantageous to divert RAM chips away from data center production. We don't have a surplus of chips waiting to be installed, so building out high-VRAM but affordable cards would only take away from higher-margin products in the data center space.

replies(4): >>42752564 #>>42752667 #>>42753047 #>>42753803 #
37. MindSpunk ◴[] No.42752517{3}[source]
The difference with AMD and Intel when Zen launched is that AMD launched a product that utterly destroyed Intel's lineup in productivity workloads. Zen 1 launched with double the cores of the competing Intel chip at the same price point. The benchmarks were a bloodbath, and Intel struggled to respond with a competitive product for four years. Arguably they still haven't caught up. AMD just brutally out-executed Intel.

Doing that to Nvidia would be a tall order.

replies(1): >>42753844 #
38. ein0p ◴[] No.42752524{3}[source]
That's actually a good thing. That's how you get a ton of DRAM without it costing a fortune. The M2 Ultra is able to get a GPU-like 800GB/sec with LPDDR5. From that it follows that you could get a respectable 1 TB/sec quite easily with LPDDR5, provided you're willing to design a specialized chip that supports a ton of memory channels (and potentially also a wider memory bus). In fact, I'm baffled that such devices don't already exist outside Apple's product line. Seems like a rather obvious thing to do, and Apple has a "proof of concept" already. I can think of at least four companies off the top of my head that could do it quite easily, besides Apple.
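
The bandwidth math is simple enough (a sketch; the data rates and bus widths below are illustrative, roughly matching M2 Ultra-class and LPDDR5X-class parts):

    # Peak DRAM bandwidth = data rate (MT/s) * bus width (bits) / 8.
    def peak_gb_s(mt_per_s, bus_bits):
        return mt_per_s * bus_bits / 8 / 1000

    print(peak_gb_s(6400, 1024))  # ~819 GB/s: LPDDR5-6400 on a 1024-bit bus (M2 Ultra-class)
    print(peak_gb_s(8533, 1024))  # ~1092 GB/s: LPDDR5X-8533 on the same width clears 1 TB/s
    print(peak_gb_s(8533, 256))   # ~273 GB/s: a 256-bit laptop-class configuration
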
39. albertzeyer ◴[] No.42752564[source]
You might be right about the market.

However, that target audience - hobby enthusiasts, hobby developers, and university labs with low budgets - are the people who will develop the future open source frameworks, and ultimately those are the people who can have a big indirect impact on brand recognition and on the open source ecosystem around the hardware. Those people can shape future trends.

So looking only at the market, at how many units you would sell here, totally ignores the impact this might have indirectly in the future.

replies(1): >>42758586 #
40. jph00 ◴[] No.42752667[source]
Actually there's a lot of demand in the AI data center space for such a card, such as for running large mixture of experts (MoE) models -- e.g. DeepSeek v3, which is one of the best LLMs in the world today.

Although AMD would need to greatly improve their entire software stack to make running AI models on AMD an attractive proposition.

41. treesciencebot ◴[] No.42752946[source]
For traditional LLMs this might be true (especially large MoEs at bs=1), but I highly disagree with the "multi-modal models" phrasing, since most models that output in other modalities are generally compute-bound. That means fewer FLOPS will make the experience so much worse (imagine waiting a couple of minutes for an image and hours for a video).
42. bsder ◴[] No.42753047[source]
> The market for prosumer cards with high VRAM and low FLOPS would be negligibly small.

I don't agree. I regularly get VSCode crashing because it ran out of VRAM.

8GB VRAM starts to feel cramped when you have to composite multiple web browsers (aka Electron apps) onto your 4K monitor screen.

Nvidia not offering 16GB on consumer-level cards is purely a market segmentation strategy, and AMD should make them pay for it.

43. kouteiheika ◴[] No.42753803[source]
> The market for prosumer cards with high VRAM and low FLOPS would be negligibly small. The data center market is massive on one end and the gaming market is big on the other. Casual consumers who just want a lot of VRAM are such a small minority of people that it doesn’t matter to the bottom line.

I'm sure this is also what AMD is thinking, and it's also why they will never catch up to NVidia in ecosystem and software support.

It's not for the casual consumers, and it's not supposed to make money directly! You want these high-VRAM SKUs to attract enthusiasts and researchers. I have read a staggering amount of research papers where the authors used some random consumer NVidia GPU. Do you know how many I've read which used AMD GPUs? Big fat ZERO! You want to incentivize these people to use your hardware? You want to get devs to support your platform? Give them a unique value proposition that the competition won't match.

I'm currently waiting for the 5090 to be available, and I'm going to buy two of them. If AMD would have released a GPU at a fair price, with reasonable performance and double the VRAM that NVidia offers, do you know what would I do? I would buy two AMD cards instead, port my software to it, and contribute PRs to any upstream software that I use so that it works with these cards. But alas, here we are.

replies(1): >>42758563 #
44. llm_trw ◴[] No.42753844{4}[source]
Core-wise, Intel had the advantage until the last generation or two. The same can be true for GPUs: just add a ton more memory and watch them fly off the shelves.
replies(3): >>42757131 #>>42761138 #>>42765238 #
45. Keyframe ◴[] No.42756776[source]
I think Digits STARTS AT $3K. We'll see.
46. SecretDreams ◴[] No.42757131{5}[source]
Intel P-cores still do well against AMD Zen 5. But AMD's stacked cache is chef's kiss.
47. almostgotcaught ◴[] No.42758563{3}[source]
> You want these high VRAM SKUs to attract enthusiast and researchers. I have read a staggering amount of research papers where the authors used some random consumer NVidia GPU. Do you know how many I've read which used AMD GPUs? Big fat ZERO!

I'm just sitting here wondering how you think this affects anything. Enterprise doesn't buy DC cards based on research papers, so why does it matter whether research papers are or aren't written against one brand or the other?

replies(1): >>42774014 #
48. almostgotcaught ◴[] No.42758586{3}[source]
> However, that target audience, those hobby enthusiasts, hobby developers, also university labs with low budget, those are the people who will develop the future open source frameworks,

No they're not. Y'all are deluded. There's a reason why there are only two real DNN frameworks and both of them are developed at the two biggest tech companies in the world.

49. mschuster91 ◴[] No.42761138{5}[source]
> The same can be true for gpus, just add a ton more memory and watch them fly off the shelves.

Yeah... for data centers and people attempting to jump on the AI hype train. Meanwhile, your everyday regular gamer has zero chance of competing for GPUs against the infinite money coffers of AI.

Seriously, the sooner this crazy bubble bursts the better. I thought the shitcoin mining days were bad, but at least back then everyone knew the game for GPUs was over once the first Bitcoin ASIC was released. Now? No end in sight, and frankly I'm pissed.

replies(1): >>42764690 #
50. llm_trw ◴[] No.42764690{6}[source]
The AI bubble will burst the same way the internet bubble did:

First explosively, then never.

51. MindSpunk ◴[] No.42765238{5}[source]
Intel can match or outperform Zen 5 in many benchmarks (X3D still trashes them in games) and they are trading blows now; they just have to use double the power envelope to do it.

Arc and Battlemage are not very competitive designs compared to AMD's, going by die size and transistor count relative to the performance numbers they're posting. Battlemage pricing, however, is quite good on price-to-performance, but it again suffers on efficiency, where AMD has them beat by quite a margin.