251 points by slyall | 31 comments
1. kleiba ◴[] No.42061089[source]
> “Pre-ImageNet, people did not believe in data,” Li said in a September interview at the Computer History Museum. “Everyone was working on completely different paradigms in AI with a tiny bit of data.”

That's baloney. The old ML adage "there's no data like more data" is as old as mankind itself.

replies(6): >>42061617 #>>42061818 #>>42061987 #>>42063019 #>>42063076 #>>42064875 #
2. FrustratedMonky ◴[] No.42061617[source]
Not really. This is referring back to the 80's. People weren't even doing 'ML'. And back then people were more focused on teasing out 'laws' from as few data points as possible. The focus was more on formulas and symbols, and on finding relationships between individual data points, not the broad patterns we take for granted today.
replies(2): >>42062250 #>>42063993 #
3. evrydayhustling ◴[] No.42061818[source]
Not baloney. The culture around data in 2005-2010 -- at least, and especially, in academia -- was night and day compared to where it is today. It's not that people didn't understand that more data enabled richer + more accurate models, but that they accepted data constraints as a part of the problem setup.

Most methods research went into ways of building beliefs about a domain into models as biases, so that they could be more accurate in practice with less data. (This describes a lot of PGM work). This was partly because there was still a tug of war between CS and traditional statistics communities on ML, and the latter were trained to be obsessive about model specification.

One result was that the models that were practical for production inference were often trained to the point of diminishing returns on their specific tasks. Engineers deploying ML weren't wishing for more training instances, but better data at inference time. Models that could perform more general tasks -- like differentiating 90k object classes rather than just a few -- were barely even on most people's radar.

Perhaps folks who were at Google or FB at the time have a different perspective. One of the reasons I went ABD in my program was that it felt like industry had access to richer data streams than academia. Fei Fei Li's insistence on building an academic computer science career around giant data sets really was ingenious, and even subversive.

replies(2): >>42062715 #>>42063187 #
4. littlestymaar ◴[] No.42061987[source]
In 2019, GPT-2 1.5B was trained on ~10B tokens.

Last week Hugging Face released SmolLM v2 1.7B trained on 11T tokens: 3 orders of magnitude more training data for roughly the same number of parameters and almost the same architecture.

So even back in 2019 we can say we were working with a tiny amount of data compared to what is routine now.

replies(1): >>42063083 #
5. criddell ◴[] No.42062250[source]
I would say using backpropagation to train multi-layer neural networks would qualify as ML, and we were definitely doing that in the 80's.
replies(1): >>42062594 #
6. UltraSane ◴[] No.42062594{3}[source]
Just with tiny amounts of data.
replies(1): >>42062627 #
7. jensgk ◴[] No.42062627{4}[source]
Compared to today. We thought we used large amounts of data at the time.
replies(1): >>42062803 #
8. bsenftner ◴[] No.42062715[source]
The culture was and is skeptical, in biased ways. Between '04 and '08 I worked with a group that had trained neural nets for 3D reconstruction of human heads. They were using it for prenatal diagnostics and a facial recognition pre-processor, and I was using it for creating digital doubles in VFX film making. By '08 I'd developed a system suitable for use in mobile advertising, creating ads with people in them, and 3D games with your likeness as the player. VCs thought we were frauds, and their tech advisors told them our tech was an old, discredited technique that could not do what we claimed. We spoke to every VC, some of whom literally kicked us out. Finally, after years of "no", the AlexNet success began to change minds, but now they wanted the tech to create porn. At that point, after years of "no", I was making children's educational media; there was no way I was gonna do porn. Plus, the president of my company was a woman, famous for creating children's media. Yeah, the culture was different then, not too long ago.
replies(2): >>42062832 #>>42066509 #
9. UltraSane ◴[] No.42062803{5}[source]
"We thought we used large amounts of data at the time."

Really? Did it take at least an entire rack to store?

replies(1): >>42063257 #
10. evrydayhustling ◴[] No.42062832{3}[source]
Wow, so early for generative -- although I assume you were generating parameters that got mapped to mesh positions, rather than generating pixels?

I definitely remember that bias about neural nets, to the point of my first grad ML class having us recreate proofs that you should never need more than two hidden layers (one can pick up the thread at [1]). Of all the ideas clunking around in the AI toolbox at the time, I don't really have background on why people felt the need to kill NN with fire.

[1] https://en.wikipedia.org/wiki/Universal_approximation_theore...
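For context, a rough paraphrase of the one-hidden-layer statement behind [1] (my wording, not the exact form from any particular proof): for any continuous $f$ on a compact set $K \subset \mathbb{R}^n$ and any $\varepsilon > 0$, there exist $N$ and parameters $v_i, w_i, b_i$ such that

$$\sup_{x \in K} \Big| f(x) - \sum_{i=1}^{N} v_i \, \sigma(w_i^{\top} x + b_i) \Big| < \varepsilon$$

for a suitable (e.g. non-polynomial) activation $\sigma$ -- hence the era's intuition that depth beyond one or two hidden layers buys you nothing in principle.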

replies(1): >>42064437 #
11. kleiba ◴[] No.42063019[source]
Answering the people arguing against my comment: you guys do not seem to take into account that the technical circumstances were totally different thirty, twenty or even ten years ago! People would have liked to train with more data, and there was a big interest in combining heterogeneous datasets to achieve exactly that. But one major problem was the compute! There weren't any pretrained models that you specialized in one way or the other - you always retrained from scratch. I mean, even today, who's got the capability to train a multibillion-parameter GPT from scratch? And not just retraining a tried and trusted architecture+dataset once; I mean as a research project trying to optimize your setup towards a certain goal.
12. kccqzy ◴[] No.42063076[source]
Pre-ImageNet was like pre-2010. Doing ML with massive data really wasn't in vogue back then.
replies(1): >>42064389 #
13. kleiba ◴[] No.42063083[source]
True. But my point is that the quote "people didn't believe in data" is not true. Back in 2019, when GPT-2 was trained, the reason they didn't use the 3T tokens of today was not because they "didn't believe in data" - they totally would have, had it been technically feasible (as in: had they had that much data plus the necessary compute).

The same has always been true. There has never been a stance along the lines of "ah, let's not collect more data - it's not worth it!". It's always been other reasons, typically the lack of resources.

replies(1): >>42066238 #
14. tucnak ◴[] No.42063187[source]
> they accepted data constraints as a part of the problem setup.

I've never heard this put so succinctly! Thank you

15. jensgk ◴[] No.42063257{6}[source]
We didn't measure data size that way. At some point in the future someone will find this dialog and think that we don't have large amounts of data now, because we are not using entire solar systems for storage.
replies(1): >>42065235 #
16. mistrial9 ◴[] No.42063993[source]
mid-90s had neural nets, even a few popular-science books on them. The common hardware was so much less capable then.
replies(1): >>42064954 #
17. mistrial9 ◴[] No.42064389[source]
except in the Ivory Towers of Google + Facebook
replies(1): >>42066977 #
18. bsenftner ◴[] No.42064437{4}[source]
It was annotated face images and 3D scans of heads, trained to map one to the other. Past a threshold in the size of the training data, good to great results could be had from a single photo: first to generate the 3D mesh positions, and then again to map the photo onto the mesh surface. Do that with multiple frames, and one is firmly in the Uncanny Valley.
19. sgt101 ◴[] No.42064875[source]
It's not quite so - we couldn't handle it, and we didn't have it, so it was a bit of a non-question.

I started with ML in 1994, in a small, poor lab, so we didn't have state-of-the-art hardware. On the other hand I think my experience is fairly representative. We worked with data sets on SPARC workstations that were stored in flat files and had thousands or sometimes tens of thousands of instances. We had problems keeping our data sets on the machines and often archived them to tape.

Data came from very deliberate acquisition processes. For example, I remember going to a field exercise with a particular device and directing its use over a period of days in order to collect the data that would be needed for a machine learning project.

Sometime in the 2000's, data started to be generated and collected as "exhaust" from various processes. People and organisations became instrumented in the sense that their daily activities were necessarily captured digitally. For a time this data was latent; people didn't really think about using it in the way that we think about it now. But by about 2010 it was obvious that not only was this data available, we also had the processing and data systems to use it effectively.

20. sgt101 ◴[] No.42064954{3}[source]
mid-60's had neural nets.

mid-90's had LeCun telling everyone that big neural nets were the future.

replies(1): >>42065537 #
21. UltraSane ◴[] No.42065235{7}[source]
Why couldn't you use a rack as a unit of storage at the time? Were 19" server racks not in common use yet? The storage capacity of a rack will grow over time.

My storage hierarchy goes:

1) 1 storage drive
2) 1 server maxed out with the biggest storage drives available
3) 1 rack filled with servers from 2
4) 1 data center filled with racks from 3

replies(1): >>42066284 #
22. dekhn ◴[] No.42065537{4}[source]
Mid 90s I was working on neural nets and other machine learning, based on gradient descent, with manually computed derivatives, on genomic data (from what I can recall, we had no awareness of LeCun; I didn't find out about his great OCR results until much later). It worked fine and it seemed like a promising area.

My only surprise is how long it took to get to ImageNet, but in retrospect, I appreciate that a number of conditions had to be met (much more data, much better algorithms, much faster computers). I also didn't recognize just how poorly suited MLPs were for sequence modelling, compared to RNNs and transformers.

replies(1): >>42069033 #
23. littlestymaar ◴[] No.42066238{3}[source]
> they totally would have had it been technically feasible

TinyLlama[1] was made by an individual on their own last year, training a 1.1B model on 3T tokens with just 16 A100-40G GPUs in 90 days. It was definitely within reach of any funded org in 2019.

In 2022 (IIRC), DeepMind released the Chinchilla paper about the compute-optimal amount of data to train a given model on; for a 1B model, the value was determined to be 20B tokens, which again is nearly 3 orders of magnitude below the current state of the art for the same class of model.

Until very recently (the first Llama paper IIRC, and people noticing that the 7B model showed no sign of saturation during its already very long training) the ML community vastly underestimated the amount of training data that was needed to make an LLM perform at its potential.

[1]: https://github.com/jzhang38/TinyLlama
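To make the gap concrete, here's a minimal back-of-the-envelope sketch (plain Python) comparing the runs quoted in this thread against the ~20-tokens-per-parameter heuristic from the Chinchilla paper; the figures are just the rounded numbers mentioned above, not exact training configs:

    # Chinchilla's compute-optimal heuristic: roughly 20 training tokens per parameter.
    CHINCHILLA_TOKENS_PER_PARAM = 20

    # (parameters, training tokens) as quoted in this thread -- rounded, illustrative only.
    runs = {
        "GPT-2 1.5B (2019)":     (1.5e9, 10e9),
        "Chinchilla-optimal 1B": (1.0e9, 20e9),
        "TinyLlama 1.1B (2023)": (1.1e9, 3e12),
        "SmolLM v2 1.7B (2024)": (1.7e9, 11e12),
    }

    for name, (params, tokens) in runs.items():
        optimal = params * CHINCHILLA_TOKENS_PER_PARAM
        print(f"{name}: {tokens / params:.1f} tokens/param, "
              f"{tokens / optimal:.2f}x the Chinchilla-optimal budget")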

24. fragmede ◴[] No.42066284{8}[source]
How big is a rack in VW beetles though?

It's a terrible measurement because it's an irrelevant detail about how the data is stored, and if your data is in a proprietary cloud, nobody outside the team that runs it actually knows how it's stored.

So while someone could say they used a 10 TiB data set, or 10T parameters, how many "racks" of AWS S3 that amounts to is not known outside of Amazon.

replies(1): >>42072934 #
25. philipkglass ◴[] No.42066509{3}[source]
Who's offering VC money for neural network porn technology? As far as I can tell, there is huge organic demand for this but prospective users are mostly cheapskates and the area is rife with reputational problems, app store barriers, payment processor barriers, and regulatory barriers. In practice I have only ever seen investors scared off by hints that a technology/platform would be well matched to adult entertainment.
26. disgruntledphd2 ◴[] No.42066977{3}[source]
Even then maybe Google but probably not Facebook. Ads used ML but there wasn't that much of it in feed. Like, there were a bunch of CV projects that I saw in 2013 that didn't use NNs. Three years later, otoh you couldn't find a devserver without tripping over an NN along the way.
27. sgt101 ◴[] No.42069033{5}[source]
I'm so out of things! What do you mean by manually computed derivatives?
replies(2): >>42071400 #>>42072510 #
28. mistrial9 ◴[] No.42071400{6}[source]
it means that code has to read values from each layer and do some summarizing math, instead of passing layer blocks to a graphics card in one primitive operation implemented on the card.
replies(1): >>42072523 #
29. dekhn ◴[] No.42072510{6}[source]
I mean we didn't know autodifferentiation was a thing, so we (my advisor, not me) analytically solved our loss function for its partial derivatives. After I wrote up my thesis, I spent a lot of time learning mathematica and advanced calculus.

I haven't invested the time to take the loss function from our paper and implement it in a modern framework, but IIUC, I wouldn't need to provide the derivatives manually. That would be a satisfying outcome (indicating I had wasted a lot of effort learning math that simply wasn't necessary, because somebody had automated it better than I could do manually, in a way I can understand more easily).
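For what it's worth, a minimal sketch of what that would look like today (assuming PyTorch; the quadratic loss here is just a stand-in, not the loss from our paper):

    import torch

    w = torch.randn(5, requires_grad=True)   # parameters we want gradients for
    x = torch.randn(10, 5)                   # toy input data
    y = torch.randn(10)                      # toy targets

    loss = ((x @ w - y) ** 2).mean()         # write only the forward computation
    loss.backward()                          # autodiff fills in d(loss)/d(w)

    print(w.grad)                            # the partial derivatives, no hand derivation needed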

30. dekhn ◴[] No.42072523{7}[source]
No. I should have said "determined the partial derivatives of the weights with respect to the variables analytically". We didn't have layers- the whole architecture was a truly crazy combination of dynamic programming with multiple different matrices and a loss function that combined many different types of evidence. AFAICT nobody does any of this any more for finding genes. We just take enormous amounts of genetic data and run an autoencoder or a sequence model over it.
31. UltraSane ◴[] No.42072934{9}[source]
A 42U 19" rack is an industry standard. If you actually work on the physical infrastructure of data centers it is most CERTAINLY NOT an irrelevant detail.

And whether your data can fit on a single server, single rack, or many racks will drastically affect how you design the infrastructure.