Google is winning on every AI front

(www.thealgorithmicbridge.com)
993 points | vinhnx
thunderbird120 ◴[] No.43661807[source]
This article doesn't mention TPUs anywhere. I don't think people outside Google's ecosystem realize just how extraordinarily good the JAX + TPU ecosystem is. Google has several structural advantages over other major players, but the largest one is that they roll their own compute solution, which is actually very mature and competitive. TPUs are extremely good at both training and inference[1], especially at scale. Google's ability to tailor their mature hardware to exactly what they need gives them a massive leg up on the competition. AI companies fundamentally have to answer the question "what can you do that no one else can?". Google's hardware advantage provides an actual answer to that question which can't be erased the next time someone drops a new model onto huggingface.

[1]https://blog.google/products/google-cloud/ironwood-tpu-age-o...
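(A minimal sketch of what the parent means by the JAX ecosystem's portability: the same jit-compiled function runs unchanged on CPU, GPU, or TPU backends, with XLA handling the hardware-specific compilation. The network and all names here are hypothetical toy examples, not Google-internal code.)

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this for whatever backend is available: CPU, GPU, or TPU
def predict(params, x):
    # Toy two-layer MLP forward pass; shapes are arbitrary for illustration.
    w1, b1, w2, b2 = params
    h = jax.nn.relu(x @ w1 + b1)
    return h @ w2 + b2

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (
    jax.random.normal(k1, (4, 8)), jnp.zeros(8),
    jax.random.normal(k2, (8, 2)), jnp.zeros(2),
)
x = jnp.ones((3, 4))
y = predict(params, x)
print(y.shape)  # (3, 2) regardless of which accelerator ran it
```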

replies(12): >>43661870 #>>43661974 #>>43663154 #>>43663455 #>>43663647 #>>43663720 #>>43663956 #>>43664320 #>>43664354 #>>43672472 #>>43673285 #>>43674134 #
mike_hearn ◴[] No.43663720[source]
TPUs aren't necessarily a pro. They go back 15 years and don't seem to have yielded any kind of durable advantage. Developing them is expensive, and their architecture was often over-fit to yesterday's algorithms, which is why they've been through so many redesigns. Their competitors have routinely moved much faster using CUDA.

Once the space settles down, the balance might tip towards specialized accelerators but NVIDIA has plenty of room to make specialized silicon and cut prices too. Google has still to prove that the TPU investment is worth it.

replies(4): >>43663930 #>>43664015 #>>43666501 #>>43668095 #
dgacmu ◴[] No.43663930[source]
They go back about 11 years.
replies(1): >>43664798 #
phillypham ◴[] No.43664798[source]
Depending on how you count, the parent comment is accurate. Hardware doesn't just appear; four years of planning and R&D before the first-generation chip is probably right.
replies(2): >>43665189 #>>43666503 #
mike_hearn ◴[] No.43666503[source]
I was wrong, ironically, because Google's own AI overview says it's 15 years if you search. The article it quotes appears to count the creation of TensorFlow as the "origin".
replies(1): >>43666876 #
dgacmu ◴[] No.43666876[source]
That's awesome. :) And even that article is off; they were probably thinking of DistBelief, the predecessor to TF.