Arguably the November 1996 launch of 3dfx kickstarted GPU interest and OpenGL.
After reading that, it’s hard to take the author seriously on the rest of the claims.
The article is a very good historical piece showing how three important things came together to make the current progress possible, namely:
1) Geoffrey Hinton's back-propagation algorithm for deep neural networks
2) Nvidia's GPU hardware used via CUDA for AI/ML and
3) Fei-Fei Li's huge ImageNet database to train the algorithm on that hardware. Her team actually used Amazon Mechanical Turk (AMT) to label the massive dataset of 14 million images.
Excerpts:
“Pre-ImageNet, people did not believe in data,” Li said in a September interview at the Computer History Museum. “Everyone was working on completely different paradigms in AI with a tiny bit of data.”
“That moment was pretty symbolic to the world of AI because three fundamental elements of modern AI converged for the first time,” Li added. “The first element was neural networks. The second element was big data, using ImageNet. And the third element was GPU computing.”