
93 points | rbanffy | 2 comments | HN request time: 0.414s | source
pama ◴[] No.42188372[source]
Noting here that 2700 quadrillion operations per second is less than the estimated sustained throughput of productive bfloat16 compute during the training of the large llama3 models, which IIRC was about 45% of 16,000 quadrillion operations per second, ie 16k H100 in parallel at about 0.45 MFU. The compute power of national labs has fallen far behind industry in recent years.
replies(3): >>42188382 #>>42188389 #>>42188415 #
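The comparison above is back-of-envelope arithmetic; a minimal sketch, assuming the round numbers cited in the comment (the per-GPU rate is rounded from the H100's ~0.989 PFLOP/s dense bf16 spec):

```python
# Hypothetical round numbers taken from the comment above.
H100_BF16_PFLOPS = 1.0    # ~0.989 PFLOP/s dense bf16 per H100, rounded up
NUM_GPUS = 16_000         # cluster size cited for Llama 3 training
MFU = 0.45                # model FLOPs utilization cited in the comment

peak_pflops = NUM_GPUS * H100_BF16_PFLOPS   # ~16,000 PFLOP/s peak
sustained_pflops = peak_pflops * MFU        # ~7,200 PFLOP/s sustained

EL_CAPITAN_PFLOPS = 2_700  # the headline figure being compared against

print(sustained_pflops, sustained_pflops / EL_CAPITAN_PFLOPS)
```

Under these assumptions the sustained training throughput comes out to roughly 2.7x the 2,700 PFLOP/s figure, which is the gap the comment is pointing at.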
alephnerd ◴[] No.42188389[source]
Training an LLM (basically Transformers) is a different workload from nuclear simulations (basically Monte Carlo simulations).

There are a lot of intricacies, but at a high level they require different compute approaches.

replies(3): >>42188413 #>>42188417 #>>42188497 #
1. Koshkin ◴[] No.42188497[source]
This is about the raw compute, no matter the workflow.
replies(1): >>42193796 #
2. alephnerd ◴[] No.42193796[source]
It isn't. I recommend reading u/pertymcpert's response.