Are they trying to model every single atom?
Is this a case where the physicists in charge get away with programming the most inefficient models possible, and the administration simply replies, "oh, I guess we'll need a bigger supercomputer"?
The alternative is to literally build and detonate a bomb to get empirical data on a given design, which raises problems with replicability (important when applying the results to the rest of the stockpile) and with how precise the data is.
And remember that there is more than one user of every supercomputer deployed at such labs: multiple "paying" jobs like research simulations, smaller jobs run to educate, test, and optimize before running full-scale work, and so on.
AFAIK, supercomputers have also run more than one job at a time for a considerable amount of time.
So in order to verify that the weapons are still useful and won't fail in random ways, you have to test them.
Which either involves actually exploding them (banned by various treaties that carry enough weight that even the USA doesn't break them), or numerical simulations.
* It's a jobs program to avoid the knowledge loss created by the end of the cold war. The US government poured a lot of money into recreating the institutional knowledge needed to build weapons (e.g. materials like FOGBANK) and it's preferred to maintain that knowledge by having people work on nuclear programs that aren't quite so objectionable as weapon design.
* It helps you better understand the existing weapons stockpiles and how they're aging.
* It's an obvious demonstration of your capabilities and funding for deterrence purposes.
* It's political posturing to have a big supercomputer and the DoE is one of the few agencies with both the means and the motivation to do so publicly. This has supposedly been a major motivator for the Chinese supercomputers.
There are all sorts of minor ancillary benefits that come out of these efforts too.
There are two kinds of targeting that can be employed in a nuclear war: counterforce and countervalue. Counterforce is targeting enemy military installations, especially enemy nuclear installations. Countervalue is targeting civilian targets like cities and infrastructure. In an all-out nuclear war, counterforce targets are saturated with nuclear weapons, with each target receiving multiple strikes to hedge against the risks of weapon failure, weapon interception, and general target survival due to being in a fortified underground position. Any weapons not needed for counterforce saturation strike countervalue targets.

It turns out that a yield greater than a megaton is basically overkill for both counterforce and countervalue. If you're striking an underground military target (like a missile silo) protected by air defenses, your odds of destroying that target are higher if you use three one-megaton weapons than if you use a single 20-megaton weapon. If you're striking a countervalue target, the devastation caused by a single nuclear detonation will be catastrophic enough to make optimizing for maximum damage pointless.
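A rough way to see why several smaller weapons beat one large one against a hardened point target is to compare kill probabilities when weapons can fail or be intercepted independently. A minimal sketch; the per-weapon success probability used here is a made-up illustrative figure, not a real one:

```python
# Sketch: probability that at least one of n independent weapons
# destroys a hardened point target. The single-shot kill probability
# (0.7) is an assumed, illustrative number.

def kill_probability(p_single: float, n_weapons: int) -> float:
    """Chance at least one of n independent strikes succeeds."""
    return 1.0 - (1.0 - p_single) ** n_weapons

p = 0.7  # assumed per-weapon success (failure and interception folded in)

print(f"one weapon:    {kill_probability(p, 1):.1%}")   # 70.0%
print(f"three weapons: {kill_probability(p, 3):.1%}")   # 97.3%
```

Under that assumption, three weapons push the odds from 70% to over 97%, regardless of how large any single weapon's yield is; against a point target, redundancy buys more than extra megatons.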
Thus, weapons designers started to optimize for things other than yield. Safety is a big one: an American nuclear weapon going off on US soil would have far-reaching political effects and would likely cause the president to resign. Weapons must fail safely when the bomber carrying them bursts into flames on the tarmac, or when the rail carrying the bomb breaks unexpectedly. They must be resilient against both operator error and malicious sabotage. Oh, and none of these safety considerations are allowed to get in the way of the weapon detonating when it is supposed to. This is really hard to get right!
Another consideration is cost. Nuclear weapons are expensive to make, so a design that can get a high yield out of a small amount of fissile material is preferred. Maintenance, and the cost of maintenance, is also relevant. Will the weapon still work in 30 years, and how much money is required to ensure that?
The final consideration is flexibility and effectiveness. Using a megaton-yield weapon on the battlefield to destroy enemy troop concentrations is not a viable tactic, because your own troops would likely get caught in the strike. But lower-yield weapons suitable for battlefield use (often referred to as tactical nuclear weapons) aren't useful for striking counterforce targets like missile silos. Thus, modern weapon designs are variable yield. The B83 mentioned above can be configured to detonate with a yield in the low kilotons, or up to 1.2 megatons. A single B83 weapon in the US arsenal can therefore cover multiple contingencies, making it cheaper and more effective than maintaining a larger arsenal of single-yield weapons. This is in addition to special-purpose weapons designed to penetrate underground bunkers or destroy satellites via EMP, which have their own design considerations.
I've seen speculation that Russia's (former Soviet) nuclear weapons are so old and poorly maintained that they probably wouldn't work. Not that anyone wants to find out.