
93 points by rbanffy | 2 comments
pama No.42188372
Noting here that 2,700 quadrillion operations per second is less than the estimated sustained throughput of productive bfloat16 compute during the training of the large Llama 3 models, which IIRC was about 45% of 16,000 quadrillion operations per second, i.e., 16k H100s in parallel at about 0.45 MFU. The compute power of national labs has fallen far behind industry in recent years.
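A rough sanity check on those numbers (treating one H100 as about 1 PFLOP/s of dense bf16 throughput, which is an approximation, and reading "quadrillion operations per second" as PFLOP/s):

```python
# Back-of-the-envelope comparison using the figures from the comment above.
el_capitan_sustained = 2700        # PFLOP/s, El Capitan's reported sustained rate
h100_count = 16_000                # GPUs used for Llama 3 training (per the comment)
peak_per_h100 = 1.0                # PFLOP/s dense bf16 per H100 -- an approximation
mfu = 0.45                         # model FLOPs utilization cited above

llama3_sustained = h100_count * peak_per_h100 * mfu  # PFLOP/s
print(llama3_sustained)                    # 7200.0
print(llama3_sustained > el_capitan_sustained)  # True
```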
replies(3): >>42188382 #>>42188389 #>>42188415 #
alephnerd No.42188389
Training an LLM (basically Transformers) is a different workflow from nuclear simulations (basically Monte Carlo simulations).

There are a lot of intricacies, but at a high level they require different compute approaches.
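As a toy illustration of the difference (this is a pi estimate, not an actual nuclear code): Monte Carlo workloads are dominated by independent random samples and branching, which is roughly the opposite access pattern from the dense matrix multiplies that dominate transformer training.

```python
import random

# Toy Monte Carlo integration: estimate pi by sampling points in the unit
# square and counting how many land inside the quarter circle. The hot loop
# is random-number generation plus a branch -- no large dense matmuls.
def estimate_pi(n_samples: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    hits = sum(
        1
        for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * hits / n_samples

print(estimate_pi(100_000))  # close to 3.14
```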

replies(3): >>42188413 #>>42188417 #>>42188497 #
pama No.42188417
Absolutely. Though the performance of El Capitan is only measured by the LINPACK benchmark, not the actual application.
replies(1): >>42188515 #
pertymcpert No.42188515
I thought modern supercomputers use benchmarks like HPCG instead of LINPACK?
replies(1): >>42188963 #
fancyfredbot No.42188963
The Top500 includes both. There is no HPCG result for El Capitan yet:

https://top500.org/lists/hpcg/2024/11/
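The distinction between the two benchmarks, in miniature: HPL (LINPACK) times a dense LU solve, while HPCG stresses a sparse iterative solver. Here is a toy sketch of the conjugate-gradient iteration HPCG is built around, applied matrix-free to a small tridiagonal system (illustrative only, not the actual benchmark kernel):

```python
# Conjugate gradient on A = tridiag(-1, 2, -1), a small symmetric
# positive-definite system, applied matrix-free (no dense storage) --
# the sparse, memory-bound pattern HPCG stresses, unlike HPL's dense LU.
def matvec(x):
    n = len(x)
    return [
        2 * x[i]
        - (x[i - 1] if i > 0 else 0.0)
        - (x[i + 1] if i < n - 1 else 0.0)
        for i in range(n)
    ]

def cg(b, tol=1e-10, max_iter=1000):
    x = [0.0] * len(b)
    r = b[:]                      # residual r = b - A@x with x = 0
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

x = cg([1.0] * 8)
residual = max(abs(ai - 1.0) for ai in matvec(x))
print(residual)  # effectively zero (CG converges in <= n iterations here)
```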