
387 points by reaperducer | 9 comments
SubiculumCode ◴[] No.45772210[source]
Given that AI is a national security matter now, I'd expect the U.S.A. to step in and rescue certain companies in the event of a crash. However, I'd give higher chances to NVIDIA than OpenAI. Weights are easily transferable and the expertise is in the engineers, but the ability to continue making advanced chips is not as easily transferred.
replies(4): >>45772241 #>>45772328 #>>45772343 #>>45772651 #
embedding-shape ◴[] No.45772241[source]
Why is ML knowledge "in the engineers" while chip manufacturing apparently sits in the company/hardware/something other than the engineers/humans?
replies(6): >>45772325 #>>45772346 #>>45772355 #>>45772369 #>>45772507 #>>45772729 #
NBJack ◴[] No.45772325[source]
Read up a bit on the effort needed to get a fab going, and the yield rates. While engineers are crucial in the setup, the fab itself is not as 'fungible' as the employees involved.

I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.

replies(5): >>45772443 #>>45772496 #>>45772509 #>>45772514 #>>45773390 #
1. embedding-shape ◴[] No.45773390[source]
> I can spin up a strong ML team through hiring in probably 6-12 months with the right funding

Not sure what to call this except "HN hubris" or something.

There are hundreds of companies that thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.

I think you're misunderstanding how difficult all of this is if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously aren't; instead we have a few big labs iteratively pushing SOTA, with some upstarts appearing occasionally (DeepSeek, Kimi et al), but it isn't as easy as you're trying to make it out to be.

replies(3): >>45773610 #>>45773835 #>>45774090 #
2. whimsicalism ◴[] No.45773610[source]
There’s a lot in LLM training that is pretty commodity at this point. The difficulty is in data - and a large part of why it has gotten more challenging is simply that some of the best sources of data have locked down against scraping post-2022, and it is less permissible to use copyrighted data than it was in the “move fast and break things” pre-2023 era.

As you mentioned, multiple no-name Chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between the Chinese and US labs is that the Chinese ones face fewer data restrictions.

I think people overindex on the Meta example. It’s hard to fully understand why Meta/Llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started its efforts in earnest and is already beating Meta, shockingly.

3. marcyb5st ◴[] No.45773835[source]
Fully agree. I also think we are deep into diminishing-returns territory.

If I had to guess, OAI and others pay top dollar for talent that has a higher probability of discovering the next "attention" mechanism, and investors are betting this is coming soon (hence the huge capitalizations and the willingness to live with 11B losses/quarter). If they lose patience with throwing money at the problem, I see only a few players remaining in the race, because they have other revenue streams.

4. noosphr ◴[] No.45774090[source]
>Otherwise we'd see SOTA models from new groups every month

We do.

It's just that startups don't go after the frontier models but niche spaces that are underserved and can be explored with a few million in hardware.

Just like how OpenAI made GPT-2 before they made GPT-3.

replies(1): >>45774208 #
5. embedding-shape ◴[] No.45774208[source]
> We do.

> It's just that startups don't go after the frontier models but niche spaces

But "new SOTA models every month" and "startups don't go for SOTA" cannot both be true at the same time. Either we get new SOTA models from new groups every month (not true today, at least) or we don't, maybe because those labs are focusing on non-SOTA work instead.

replies(1): >>45774513 #
6. noosphr ◴[] No.45774513{3}[source]
State of the art doesn't mean frontier.
replies(1): >>45774746 #
7. embedding-shape ◴[] No.45774746{4}[source]
I've always taken that term literally, basically "top of the top". If you're not getting the best responses from that LLM, then it's not "top of the top" anymore, regardless of size.

Then something could be "SOTA in its class", I suppose, but personally that's less interesting and also not what the parent commenter claimed, which was basically "anyone with money can get SOTA models up and running".

Edit: Wikipedia seems to agree with me too:

> The state of the art (SOTA or SotA, sometimes cutting edge, leading edge, or bleeding edge) refers to the highest level of general development, as of a device, technique, or scientific field achieved at a particular time

I haven't heard of anyone using SOTA to not mean "at the front of the pack", but maybe people outside of ML use the word differently.

replies(1): >>45776117 #
8. noosphr ◴[] No.45776117{5}[source]
A SOTA decoder model is a bigger deal than yet another trillion-parameter encoder-only model trained on benchmarks.

I don't get why you think that the only way that you can beat the big guys is by having more parameters than them.

replies(1): >>45777145 #
9. embedding-shape ◴[] No.45777145{6}[source]
> I don't get why you think that the only way that you can beat the big guys is by having more parameters than them.

Yeah, and I don't understand why people have to argue against a point others haven't made; it makes it less fun to participate in any discussion.

Whatever gets the best responses (regardless of parameter count, specific architecture, or the addition of other things) is what I'd consider SOTA; but I guess you can go by your own definition.