195 points rbanffy | 53 comments
1. ipsum2 ◴[] No.42176882[source]
As someone who worked in the ML infra space: Google, Meta, xAI, Oracle, Microsoft, and Amazon have clusters that perform better than the highest-performing cluster on Top500. They don't submit because there's no reason to, and some want to keep the size of their clusters a secret. They're all running Nvidia. (Except Google, who uses TPUs and Nvidia.)

> El Capitan – we don’t yet know how big of a portion yet as we write this – with 43,808 of AMD’s “Antares-A” Instinct MI300A devices

By comparison, xAI announced that they have 100k H100s. The MI300A and the H100 have roughly similar performance. Meta says they're training on more than 100k H100s for Llama-4, and have the equivalent of 600k H100s' worth of compute. (Note that compute and networking can be orthogonal.)

Also, Nvidia B200s are rolling out now. They offer 2-3x the performance of H100s.

replies(10): >>42176948 #>>42177276 #>>42177493 #>>42177581 #>>42177611 #>>42177644 #>>42178095 #>>42178187 #>>42178825 #>>42179038 #
2. danpalmer ◴[] No.42176948[source]
Google is running its own TPU hardware for internal workloads. I believe Nvidia hardware is just resold to cloud customers.
replies(3): >>42177022 #>>42178089 #>>42178914 #
3. ipsum2 ◴[] No.42177022[source]
Nvidia GPUs are also used for inference on Google products. It just depends on availability.
replies(1): >>42177620 #
4. pclmulqdq ◴[] No.42177276[source]
B200s offer an incremental increase in FP64 and FP32 performance over H100s. Those are the number formats that HPC people care about.

The MI300A can reach 150% of the peak FP64 performance that B200 devices can, although AMD GPUs have historically underperformed their spec more than Nvidia GPUs. It's possible that B200 devices are actually behind for HPC.

replies(1): >>42177364 #
5. cayleyh ◴[] No.42177364[source]
Top line comparison numbers for reference: https://www.theregister.com/2024/03/18/nvidia_turns_up_the_a...

It does seem like Nvidia is prioritizing int8/fp8 performance over FP64, which, given the current state of the ML marketplace, is a great idea.

replies(1): >>42178086 #
6. zekrioca ◴[] No.42177493[source]
The Top500 list is useful as a public, standardized baseline that is straightforward, with a predictable cadence for more than 30 years. It is trickier to compare cloud infrastructures due to their heterogeneity, fast pace, and, more importantly, the lack of standardized tests, although MLCommons [1] has been very keen on helping with that.

[1] https://mlcommons.org/datasets/

replies(1): >>42178610 #
7. almostgotcaught ◴[] No.42177581[source]
Ya exactly - no one cares about Top500 outside of academia (I've literally never heard it come up at work). So this is like the gold star (participation award) of DCGPU competition.
8. maratc ◴[] No.42177611[source]
> Nvidia B200s ... offer 2-3x the performance of H100s

For ML, not for HPC. ML and HPC are two completely different, only loosely related fields.

ML tasks are doing great with low precision: 16- and 8-bit precision is fine, and arguably good results can be achieved even with 4-bit precision [0][1]. That won't do for HPC tasks, like predicting global weather, computational biology, etc. -- those need 64- to 128-bit precision.
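
A quick numpy sketch of that failure mode (mine, not part of the original comment): naively accumulating many small terms in fp16 stalls long before the true sum, which is exactly why long-running simulations want wide accumulators.

  import numpy as np

  x = np.full(100_000, 1e-4)

  # Naive fp16 running sum: once the accumulator grows, 1e-4 falls below half a
  # unit-in-the-last-place and every further addition is rounded away.
  acc = np.float16(0.0)
  for v in x.astype(np.float16):
      acc = np.float16(acc + v)

  print(float(acc))      # stalls at 0.25 instead of 10.0
  print(float(x.sum()))  # fp64 accumulation: ~10.0, as expected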

Nvidia needs to decide how to divide the billions of transistors on their new silicon. Greatly oversimplifying, they can choose to make one of the following:

  *  Card A with *n* FP64 cores, or 
  *  Card B with *2n* FP32 cores, or 
  *  Card C with *4n* FP16 cores, or 
  *  Card D with *8n* FP8 cores, or (theoretically)
  *  Card E with *16n* FP4 cores (not sure if FP4 is a thing). 
Card A would give HPC guys n usable cores, and it would give ML guys n usable cores. On the other end, Card E would give ML guys 16n usable cores (and zero usable cores for HPC guys). It's no wonder that the HPC crowd wants Nvidia to produce Card A, while the ML crowd wants Nvidia to produce Card E. Given that all the hype and the money are currently with the ML guys (and $NVDA reflects that), Nvidia will make a combination of different cores that is much, much closer to Card E than it is to Card A.

Their new offerings are arguably worse than their older offerings for HPC tasks, and the feeling among the HPC crowd is that "Nvidia and AMD are in the process of abandoning this market".

[0] https://papers.nips.cc/paper/2020/file/13b919438259814cd5be8...

[1] https://arxiv.org/abs/2212.09720

replies(5): >>42178357 #>>42178713 #>>42179347 #>>42180055 #>>42185923 #
9. danpalmer ◴[] No.42177620{3}[source]
Interesting, do you have a source for this? I've not been able to find one.
replies(1): >>42178060 #
10. formerly_proven ◴[] No.42177644[source]
China has been absent from TOP500 for years as well.
11. nextos ◴[] No.42178060{4}[source]
GCP plans offer access to high-end NVIDIA GPUs, as well as TPUs. I thought Google products used the same pool of resources that is also resold to customers?
replies(2): >>42179122 #>>42182322 #
12. nextos ◴[] No.42178086{3}[source]
The MI300 also has decent FP16 performance (~108 TFLOPS). Not as good as NVIDIA, but it's getting there. Does anyone have experience using these with JAX? Support is said to be decent, but I have no idea if it's good enough for research-oriented tasks, i.e. stable enough for training and inference.
13. okdood64 ◴[] No.42178089[source]
Huh? https://cloud.google.com/tpu/docs/intro-to-tpu
14. zitterbewegung ◴[] No.42178095[source]
Generally, HPC compute has lower margins, similar to consoles. It makes sense that AMD would fight harder for that contract than NVIDIA, much like IBM stopped competing for them. It's sort of comparing Apples to Raspberry Pis.
replies(1): >>42178119 #
15. geerlingguy ◴[] No.42178119[source]
Hey now I compare Apples to Raspberry Pi's regularly :)
16. llm_trw ◴[] No.42178187[source]
A cluster is not a supercomputer.

The whole point of a supercomputer is that it acts as much like a single machine as possible, while a cluster is a soup of nearly independent machines.

replies(3): >>42178234 #>>42179465 #>>42180953 #
17. kristjansson ◴[] No.42178234[source]
> soup of nearly independent machines

that does a serious disservice to hyperscaler clusters.

replies(1): >>42178817 #
18. touisteur ◴[] No.42178357[source]
With the B100 somehow announced to have lower scalar FP64 throughput than the H100 (did they remove the DP tensor cores?), one will have to rely on Ozaki schemes (DGEMM via int8 tensor cores), and a lot of the recent work on mixed-precision linear algebra shows there's plenty of computing power to be harnessed from tensor cores. One of the problems of HPC now is a level of ossification in some codebases (or the lack of available porting/coding/optimizing people). You shouldn't have to rewrite everything every 5 years, but the hardware vendors go where they go, and we still haven't found the right level of abstraction to avoid big porting efforts.
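
A toy numpy sketch of the splitting idea (mine, and deliberately not the real Ozaki algorithm): split each fp64 operand into a float32 head and tail, form the partial products from low-precision operands, and accumulate them in a wide format, the way tensor cores accumulate low-precision products into wider registers.

  import numpy as np

  rng = np.random.default_rng(0)
  A = rng.standard_normal((256, 256))
  B = rng.standard_normal((256, 256))

  ref = A @ B                                                # fp64 reference
  naive = (A.astype(np.float32) @ B.astype(np.float32)).astype(np.float64)

  # Two-way split: a float32 "head" plus a float32 "tail" holding the
  # representation error of the head.
  A_hi = A.astype(np.float32); A_lo = (A - A_hi).astype(np.float32)
  B_hi = B.astype(np.float32); B_lo = (B - B_hi).astype(np.float32)

  # Four partial products from low-precision operands, accumulated in fp64
  # (standing in for the wide/exact accumulation tensor cores provide).
  parts = [(A_hi, B_hi), (A_hi, B_lo), (A_lo, B_hi), (A_lo, B_lo)]
  split = sum(x.astype(np.float64) @ y.astype(np.float64) for x, y in parts)

  print(np.max(np.abs(naive - ref)))  # plain fp32 GEMM error
  print(np.max(np.abs(split - ref)))  # several orders of magnitude smaller
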
19. makeitdouble ◴[] No.42178610[source]
If I understand your comment correctly, we're taking a stable but not that relevant metric, because the real players of the market are too secretive, fast and far ahead to allow for simple comparisons.

From a distance, it kinda sounds like listening to kids brag about their allowance while the adults don't want to talk about their salary, and trying to draw wider conclusions from there.

replies(2): >>42178935 #>>42179876 #
20. ipsum2 ◴[] No.42178713[source]
Yes, that's a great point that I missed. From anecdotal evidence, it seems more people are using supercomputers for ML use cases that would traditionally have been done with HPC (e.g. training models for weather forecasts).
21. llm_trw ◴[] No.42178817{3}[source]
Sure, but it's closer to the truth than saying they have raw compute similar to or greater than a supercomputer's.
22. lobochrome ◴[] No.42178825[source]
B200 is very much not rolling out because NVIDIA, after the respin, doesn't have the thermals under control (yet).

Your other points may be valid.

replies(1): >>42178924 #
23. deeth_starr_v ◴[] No.42178914[source]
Not true. Apple trained some models on their TPUs.
replies(1): >>42178931 #
24. deeth_starr_v ◴[] No.42178924[source]
Source?
replies(1): >>42183070 #
25. danpalmer ◴[] No.42178931{3}[source]
Apologies, to be clear: what I meant was that, to my knowledge, Google doesn't use GPUs for its own stuff, but does sell both TPUs and GPUs to others on Cloud.

Also, to be clear, I have no internal info about this; I'm going by external stuff I've seen.

26. wbl ◴[] No.42178935{3}[source]
Even the DoE posts Top500 results when they commission a supercomputer.
replies(1): >>42179577 #
27. ◴[] No.42179038[source]
28. eitally ◴[] No.42179122{5}[source]
Only some Google products. Most still run on internal platforms, not GCP.
replies(1): >>42179434 #
29. layla5alive ◴[] No.42179347[source]
You've heard of SIMD - it's possible to do both, in terms of throughput, with instruction/scheduler/port complexity overhead of course.
30. nextos ◴[] No.42179434{6}[source]
OK, interesting, so there is some dogfooding, but it's not complete.
31. almostgotcaught ◴[] No.42179465[source]
i wish people wouldn't make stuff up just to sound cool.

like do you have actual experience with gov/edu HPC? i doubt it because you couldn't be more wrong - lab HPC clusters are just very, very poorly (relative to FAANG) strung-together nodes. there is absolutely no sense in which they are "one single machine" (nothing is "abstracted over" except NFS).

what you're saying is trivially false because no one ever requests all the machines at once (except when they're running linpack to produce top500 numbers). the rest of the time the workflow is exactly like in any industrial cluster: request some machines (through slurm), get those machines, run your job (hopefully you distributed the job across the nodes correctly), release those machines. if i still had my account i could tell you literally how many different jobs are running right now on polaris.

replies(1): >>42179716 #
32. makeitdouble ◴[] No.42179577{4}[source]
The DoE has absolutely no incentive (nor need, I'd argue) to compare their supercomputers to commercially owned data center operations, though.

Comparing their crazy expensive custom-built HPC to massive arrays of consumer-grade hardware doesn't bring them additional funds, nor does it help them more PR-wise than being the owner of the fastest individual clusters.

Being at the top of some heap is visibly one of their goals:

https://www.energy.gov/science/high-performance-computing

replies(1): >>42180151 #
33. bocklund ◴[] No.42179716{3}[source]
Actually, LLNL (the site of El Capitan) has a process for requesting Dedicated Application Time (a DAT) where you use up to a whole machine, usually over a weekend. They occur fairly regularly. Mostly it's lots of individual users and jobs, like you said though.
replies(1): >>42180282 #
34. zekrioca ◴[] No.42179876{3}[source]
It seems there was a misunderstanding, as I haven't made any value judgment about LINPACK.

Yes, LINPACK is indeed "old", with a heavy focus on compute power. However, its simplicity serves as a reliable baseline for the types of workflows that supercomputers are designed to handle. Also, at their core, most AI workloads perform essentially the same operations as HPC, albeit with less numerical stability - which, I admit, is a feature, but is likely the reason AI-focused systems do not prioritize LINPACK as much.
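
For reference, HPL (the LINPACK variant Top500 uses) essentially times one thing: solving a dense random fp64 system. A toy single-node stand-in in numpy (illustrative only, nothing like a tuned HPL run):

  import time
  import numpy as np

  n = 4096
  rng = np.random.default_rng(0)
  A = rng.standard_normal((n, n))
  b = rng.standard_normal(n)

  t0 = time.perf_counter()
  x = np.linalg.solve(A, b)          # LU factorization + triangular solves
  elapsed = time.perf_counter() - t0

  flops = 2 / 3 * n**3 + 2 * n**2    # the flop count HPL credits for the solve
  print(f"{flops / elapsed / 1e9:.1f} GFLOP/s,",
        "residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))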

I am simply saying that any useful metric needs to be not only "stable" but also simple to grasp. Take the Green500, probably a significant benchmark for understanding how algorithms consume power, but "too complex" to explain: many cloud providers with their AI supercomputers avoid competing against HPC supercomputers in this domain.

This avoidance isn't necessarily due to secrecy but rather to inefficiencies inherent in cloud systems. Consider PUE (Power Usage Effectiveness), a highly misleading metric that cloud providers frequently tout. PUE can easily be manipulated, especially with liquid cooling, which is why optimizing for it has become a major factor contributing to water disruptions in several large cities worldwide.

35. dragontamer ◴[] No.42180055[source]
Doesn't multiplier area scale as O(n^2 * log(n))? (At least, I'm pretty sure the Wallace tree multiplier circuit is somewhere in that order.)

So a 64-bit multiplier is something like 32x more area than a 16-bit multiplier.

But what you say is correct for RAM area or the number of bits you need for register space. So taken holistically, it's difficult to say...

Okay, 64-bit FP only has a 53-bit significand and 16-bit FP actually has an 11-bit one. But you know what I mean. I'm still doing quick napkin math here, nothing formal.
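
Filling in that napkin math (my arithmetic, not the commenter's), using the 53- and 11-bit significand widths and the n^2 log n scaling above:

  \frac{53^2 \log_2 53}{11^2 \log_2 11} = \left(\frac{53}{11}\right)^2 \cdot \frac{\log_2 53}{\log_2 11} \approx 23.2 \times 1.66 \approx 38

Using the nominal 64- and 16-bit widths instead gives 16 x 1.5 = 24, so the multiplier-area ratio lands somewhere between roughly 24x and 38x, bracketing the rough 32x figure above.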

-------

We can ignore adder and subtractor circuits because they are so small. For floating point, division is often implemented as a reciprocal followed by multiplication (true division is very expensive).

36. khm ◴[] No.42180151{5}[source]
DOE clusters are also massive arrays of consumer-grade hardware. Private cloud can only keep up in low-precision work, and that is why they're still playing with remote memory access over TCP: it's good enough for web and ML.

High-precision HPC exists in the private cloud, but you only hear "we don't want to embarrass others" excuses because otherwise you would be able to calculate the cost.

On-prem HPC is still very, very much cheaper than hiring out.

37. almostgotcaught ◴[] No.42180282{4}[source]
> where you use up to a whole machine

i mean rick stevens et al can grab all of polaris too but even so - it's just a bunch of nodes and you're responsible for distributing your work across those nodes efficiently. there's no sense in which it's a "single computer" in any way, shape or form.

replies(1): >>42180326 #
38. llm_trw ◴[] No.42180326{5}[source]
The same way that you're responsible for distributing your single-threaded code between cores on your desktop.
replies(2): >>42182694 #>>42183140 #
39. bravetraveler ◴[] No.42180953[source]
Put slurm on it, bam. Supercomputer.
40. ◴[] No.42182322{5}[source]
41. davrosthedalek ◴[] No.42182694{6}[source]
No. Threads typically run in the same address space. HPC processes on different nodes typically do not.
replies(1): >>42183490 #
42. lobochrome ◴[] No.42183070{3}[source]
Reuters!
replies(1): >>42188338 #
43. almostgotcaught ◴[] No.42183140{6}[source]
Tell me you've never run a distributed workload without telling me. You realize that if what you were saying were true, HPC would be trivial. In fact, it takes a whole lot of PhDs to manage the added complexity, because it's not just a "single computer".
replies(1): >>42183388 #
44. llm_trw ◴[] No.42183388{7}[source]
If you think parallelizing single threaded code is trivial ... well there's nothing else to say really.
replies(1): >>42183483 #
45. almostgotcaught ◴[] No.42183483{8}[source]
Is there like a training program available for learning how to be this obstinate? I would love to attend so that I can win fights with my wife.
replies(1): >>42187852 #
46. llm_trw ◴[] No.42183490{7}[source]
Define address space.

Cache is not shared between cores.

HPCs just have more levels of cache.

Lest you ignore the fact that infiniband is pretty much on par with top of the line ddr speeds for the matching generation.

replies(4): >>42183627 #>>42184007 #>>42185261 #>>42185697 #
47. davrosthedalek ◴[] No.42183627{8}[source]
Really? How about: "This pointer is valid, has the same numeric value (address), and points to the same data in all threads." The point is neither the latency nor the bandwidth. The point is the programming/memory model. Infiniband may make multiprocessing across nodes as fast as multiprocessing on a single node, but it's not multithreading.
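
A minimal mpi4py sketch of that distinction (illustrative only; the file name and tag are made up): threads can all dereference the same pointer, while MPI ranks each own a private address space and only see data that is explicitly sent to them.

  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()

  # Only rank 0 has the data in its address space to begin with.
  data = {"payload": list(range(5))} if rank == 0 else None

  if rank == 0:
      comm.send(data, dest=1, tag=11)     # explicit message, not shared memory
  elif rank == 1:
      data = comm.recv(source=0, tag=11)  # rank 1 receives a *copy*

  print(rank, data)  # ranks 2 and 3 still see None

Run with something like "mpiexec -n 4 python demo.py".
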
48. imtringued ◴[] No.42184007{8}[source]
>Cache is not shared between cores.

I feel sorry for you if you believe this. It's not true physically, nor is it true at the level of the cache coherence protocol, nor from the perspective of the operating system.

49. formerly_proven ◴[] No.42185261{8}[source]
There are four sentences in your comment.

None of them logically relate to another.

One is a question.

And the rest are wrong.

50. moralestapia ◴[] No.42185697{8}[source]
>Lest you ignore the fact that infiniband is pretty much on par with top of the line ddr speeds for the matching generation.

You can't go faster than the speed of light (yet), and traveling a few micrometers will always be much faster than traversing a room (plus routing and switching).

Many HPC tasks nowadays are memory-bound rather than CPU-bound -- memory-latency-and-throughput-bound, to be more precise. An actual supercomputer would be something like the Cerebras chip; a lot of the performance increase you get comes from having everything on-chip at a given time.

51. ◴[] No.42185923[source]
52. davrosthedalek ◴[] No.42187852{9}[source]
Maybe llm_trw is your wife?
53. _zoltan_ ◴[] No.42188338{4}[source]
don't spread FUD please.