317 points laserduck | 19 comments
1. EgoIncarnate ◴[] No.42157406[source]
The article seems to be based on the current limitations of LLMs. I don't think YC and other VCs are betting on what LLMs can do today, I think they are betting on what they might be able to do in the future.

As we've seen in the recent past, it's difficult to predict what the possibilities are for LLMs and which limitations will hold. Currently it seems pure scaling won't be enough, but I don't think we've reached the limits with synthetic data and reasoning.

replies(4): >>42157469 #>>42157563 #>>42157650 #>>42157754 #
2. DeathArrow ◴[] No.42157469[source]
>The article seems to be based on the current limitations of LLMs. I don't think YC and other VCs are betting on what LLMs can do today, I think they are betting on what they might be able to do in the future.

Do we know what LLMs will be able to do in the future? And even if we know, the startups have to work with what they have now, until that future comes. The article states that there's not much to work with.

replies(1): >>42157664 #
3. kokanee ◴[] No.42157563[source]
Tomorrow, LLMs will be able to perform slightly below-average versions of whatever humans are capable of doing tomorrow. Because they work by predicting what a human would produce based on training data.
replies(2): >>42157593 #>>42158067 #
4. herval ◴[] No.42157593[source]
This severely discounts the fact that you're comparing a model that _knows the average about everything_ to a single human's capability. Also, they can do it instantly, instead of having to coordinate many humans over long periods of time. You can't straight up compare one LLM to one human.
replies(1): >>42158301 #
5. KaiserPro ◴[] No.42157650[source]
> I think they are betting on what they might be able to do in the future.

Yeah, blind hope and a bit of smoke and mirrors.

> but I don't think we've reached the limits with synthetic data

Synthetic data, at least for visual stuff, can in some cases provide the majority of training data. At $work, we can train a model on, say, 100k synthetic video sequences and then fine-tune it on around 2k real videos. That gets it to slightly under the quality of a model trained purely on real video.

So I'm not that hopeful that synthetic data will provide a breakthrough.

I think the current architecture of LLMs is the limitation. They are fundamentally sequence machines and are not capable of short- or medium-term learning. Context windows kind of make up for that, but they don't alter the starting state of the model.
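The two-stage workflow this comment describes (pre-train on a large synthetic corpus, then fine-tune on a small real one) can be sketched with a deliberately tiny toy: a single scalar parameter fit by SGD, where the synthetic data is slightly off-distribution from the real data. All numbers and the model are illustrative stand-ins, not the commenter's actual setup.

```python
import random

def sgd_fit(data, theta=0.0, lr=0.01, epochs=50):
    """Fit one scalar parameter by SGD on the squared error (theta - x)^2."""
    for _ in range(epochs):
        for x in data:
            theta -= lr * 2.0 * (theta - x)  # gradient step toward x
    return theta

random.seed(0)
# Stage 1: large synthetic corpus, slightly off-distribution (mean 0.8)
synthetic = [random.gauss(0.8, 0.1) for _ in range(1000)]  # stands in for ~100k clips
# Stage 2: small real corpus (mean 1.0)
real = [random.gauss(1.0, 0.1) for _ in range(20)]         # stands in for ~2k clips

theta0 = sgd_fit(synthetic)           # pre-train on synthetic data
theta1 = sgd_fit(real, theta=theta0)  # fine-tune on real data
print(f"synthetic-only: {theta0:.2f}, fine-tuned: {theta1:.2f}")
```

The pre-trained parameter lands near the synthetic distribution's mean; fine-tuning on the small real set pulls it toward the real mean, mirroring the "slightly under pure-real quality" result described above.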

6. brookst ◴[] No.42157664[source]
Show me a successful startup that was predicated on the tech they’re working with not advancing?
replies(4): >>42157914 #>>42158080 #>>42158193 #>>42159132 #
7. layer8 ◴[] No.42157754[source]
You could replace “LLM” in your comment with lots of other technologies. Why bet on LLMs in particular to escape their limitations in the near term?
replies(1): >>42157912 #
8. samatman ◴[] No.42157912[source]
Because YCombinator is all about r-selecting startup ideas, and making it back on a few of them generating totally outsized upside.

I think that LLMs are plateauing, but I'm less confident that this necessarily means the capabilities we're using LLMs for right now will also plateau. That is to say it's distinctly possible that all the talent and money sloshing around right now will line up a new breakthrough architecture in time to keep capabilities marching forward at a good pace.

But if I had $100 million, and could bet $200 thousand that someone can make me billions on machine learning chip design or whatever, I'd probably entertain that bet. It's a numbers game.

replies(1): >>42158286 #
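The "numbers game" framing above is just portfolio arithmetic. Using the comment's own hypothetical figures (a $100M fund writing $200k checks, with payoffs in the billions):

```python
# Hypothetical numbers lifted from the comment above.
fund = 100_000_000      # $100M fund
check = 200_000         # $200k per bet
n_bets = fund // check  # number of bets the fund can place

# If even 1 bet in n_bets returns $1B, the expected value per bet
# is far larger than the check size.
p_hit = 1 / n_bets
payoff = 1_000_000_000
ev_per_bet = p_hit * payoff
print(n_bets, ev_per_bet)
```

A 1-in-500 chance of a $1B outcome makes each $200k check worth $2M in expectation, which is why a single outsized winner can carry the whole portfolio.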
9. jeltz ◴[] No.42157914{3}[source]
Most? I can list tens of them easily. For example what advancements were required for Slack to be successful? Or Spotify (they got more successful due to smartphones and cheaper bandwidth but the business was solid before that)? Or Shopify?
replies(1): >>42158202 #
10. steveBK123 ◴[] No.42158067[source]
It's worth considering:

1) all the domains where there is no training data

Many professions are far less digital than software, protect IP more, and are much more akin to an apprenticeship system.

2) the adaptability of humans in learning vs any AI

Think about how many years we have been trying to train cars to drive, but humans do it with a 50-hour training course.

3) humans' ability to innovate vs AI's ability to replicate

A lot of creative work is adaptation, but humans do far more than that, synthesizing different ideas to create completely new works. Could an LLM produce the 37th Marvel movie? Yes, probably. Could an LLM create... Inception? Probably not.

11. talldayo ◴[] No.42158080{3}[source]
Every single software service that has ever provided an Android or iOS application, for starters.
12. rsynnott ◴[] No.42158193{3}[source]
Most successful startups were able to make the thing that they wanted to make, as a startup, with existing tech. It might have a limited market that was expected to become less limited (a web app in 1996, say), but it was possible to make the thing.

This idea of "we're a startup; we can't actually make anything useful now, but once the tech we use becomes magic, any day now, we might be able to make something!" is basically a new phenomenon.

13. brookst ◴[] No.42158202{4}[source]
Slack bet on ubiquitous, continuous internet access. Spotify bet on bandwidth costs falling to effectively zero. Shopify bet on D2C rising because of improved search engines and increased internet shopping (itself a result of several tech trends plus demographic changes).

For a counterexample I think I’d look to non-tech companies. OrangeTheory maybe?

14. namaria ◴[] No.42158286{3}[source]
> But if I had $100 million, and could bet $200 thousand that someone can make me billions on machine learning chip design or whatever, I'd probably entertain that bet. It's a numbers game.

The problem with this reasoning is twofold: start-ups will overfit to getting your money instead of creating real advances, and competition amongst them will drive up investment costs. Pretty much what has been happening.

15. namaria ◴[] No.42158301{3}[source]
"Knows the average relationship amongst all words in the training data" ftfy
replies(1): >>42159647 #
16. teamonkey ◴[] No.42159132{3}[source]
The notion of a startup gaining funding to develop a fantasy into reality is relatively new.

It used to be that startups would be created to do something different with existing tech, or to commercialise a newly discovered (but real) innovation.

17. herval ◴[] No.42159647{4}[source]
it seems that's sufficient to do a lot of things better than the average human - including coding, writing, creating poetry, summarizing and explaining things...
replies(1): >>42159924 #
18. namaria ◴[] No.42159924{5}[source]
A human specialized in any of those things vastly outperforms the average human, let alone an LLM.
replies(1): >>42165781 #
19. herval ◴[] No.42165781{6}[source]
You’re entirely missing the point