And to be honest car manufacturers have always had very deep links with politics.
For example:
https://electrek.co/2025/07/23/elon-musk-with-straight-face-...
https://fortune.com/2025/01/30/tesla-profits-bitcoin-crypto-...
Oh yeah, and government subsidies (making up 38% of profits in 2024)
https://news.sky.com/story/elons-playing-a-very-dangerous-ga...
I lumped government subsidies in with EV sales since they are related. Trump wiped these out.
Robotaxi and robots are the fantasy category. They are not currently income producers and may not be for years to come. His robot demos have been widely panned as fake.
Meanwhile, the story of Jensen and Musk continued onward with custom chips to support FSDv1; iirc, Jensen's personal delivery of the DGX-1 to OpenAI served as the catalyst for the relationship.
Nvidia got extremely lucky again and again and again, and what specifically did it is that, right in time, non-Nvidia researchers learned to train on smaller floating point bit lengths, which Nvidia raced to support. And great, well done! A list of ironies though ... for example, it's Google DeepMind that made the Turing generation of cards viable for Nvidia. However, the new floating point formats train has arrived at its last station, the NVFP4 station. There is no FP3 and no FP2 to go to. Is there a new train to get on? I'm not aware of one.
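To make the "no FP3, no FP2" point concrete, here's a rough back-of-the-envelope sketch in Python (my own illustration; the sign/exponent/mantissa splits are the usual ones, and the FP3/FP2 splits are hypothetical since no such formats exist): at 4 bits you're already down to 16 distinct encodings, and below that there's essentially nothing left to allocate.

    # Rough illustration (my own splits, not any vendor spec):
    # how many distinct values each format can even encode.
    formats = {
        "FP32 (E8M23)": (8, 23),
        "FP16 (E5M10)": (5, 10),
        "BF16 (E8M7)":  (8, 7),
        "FP8  (E4M3)":  (4, 3),
        "FP4  (E2M1)":  (2, 1),
        "FP3  (E1M1)":  (1, 1),  # hypothetical
        "FP2  (E1M0)":  (1, 0),  # hypothetical
    }
    for name, (e, m) in formats.items():
        bits = 1 + e + m  # sign + exponent + mantissa
        print(f"{name}: {bits} bits, {2 ** bits} encodings")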
Nvidia's argument is "Blackwell easily doubles Ada performance!" ... but that is deceptive. The actual improvement is that Blackwell NVFP4 (4-bit) delivers more than double the ops of Ada FP8 (8-bit). That's the train that has arrived at its last station. Go back further and the same is true, just with larger and larger FP formats, starting at FP32 (single precision). Aside from a small FP64 detour, and a few "oopses" where some of the formats they chose turned out useless or unstable and were quickly abandoned, that's the story of Nvidia in ML.
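One way to see why the headline number is misleading is to normalize the advertised throughput by the bits each op actually processes. A minimal sketch, assuming the commonly quoted dense figures (my assumptions, not official quotes):

    # Headline "ops" vs. bits processed per second (assumed dense figures).
    ada_fp8_tflops       = 660    # RTX 4090, FP8 (assumed)
    blackwell_fp4_tflops = 1676   # RTX 5090, NVFP4 (assumed)

    print(blackwell_fp4_tflops / ada_fp8_tflops)              # ~2.5x in raw ops
    print((blackwell_fp4_tflops * 4) / (ada_fp8_tflops * 8))  # ~1.27x per bit of operand width

Most of the "more than double" comes from halving the format, not from the silicon doing much more work per bit.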
Comparing like for like, say at FP32, you don't see big improvements: the 4090 does 83 FP32 TFLOPS, the 5090 does 104 FP32 TFLOPS. Given the power requirements involved, that's actually a regression. If you're stuck at a fixed bit width, Nvidia's story breaks down and Ada cards beat Blackwell cards on performance per watt: the 4090 is 5.44 W/FP32 TFLOP, the 5090 is 5.5 W/FP32 TFLOP. Same story at FP8: the 4090 is 0.681 W/FP8 TFLOP, the 5090 is 0.686 W/FP8 TFLOP. The faster memory still buys some effective improvement, but not much.
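For anyone who wants to check the per-watt arithmetic, the computation is just TDP divided by TFLOPS. A minimal sketch, assuming 450 W and 575 W board power and the commonly quoted dense FP8 figures; the FP32 numbers are the ones above:

    # Watts per TFLOP = TDP / TFLOPS.
    # TDPs (450/575 W) and FP8 TFLOPS (~660/~838 dense) are assumptions;
    # FP32 TFLOPS are the figures quoted in the comment.
    cards = {
        "4090 (Ada)":       {"tdp": 450, "fp32": 83,  "fp8": 660},
        "5090 (Blackwell)": {"tdp": 575, "fp32": 104, "fp8": 838},
    }
    for name, c in cards.items():
        print(name,
              f"{c['tdp'] / c['fp32']:.2f} W/FP32 TFLOP,",
              f"{c['tdp'] / c['fp8']:.3f} W/FP8 TFLOP")
    # -> roughly 5.4 vs 5.5 and 0.68 vs 0.69: no per-watt win for Blackwell.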
Will the next generation after Blackwell, at the same floating point formats as the previous generation, be a 10% improvement, subject to further diminishing returns, and stuck there until ... well, until we find something better than silicon? And 10% is generous, because at FP8, Blackwell is not an improvement at all over Ada on a per-watt basis at equivalent floating point lengths.
Plus, Blackwell is ahead of the competition ... but only by one generation. If Nvidia doesn't get on a new train, the next generation of AMD cards will match the current Nvidia generation. Then the next TPU generation will match Nvidia.