
317 points laserduck | 1 comments | | HN request time: 0s | source
EgoIncarnate ◴[] No.42157406[source]
The article seems to be based on the current limitations of LLMs. I don't think YC and other VCs are betting on what LLMs can do today; I think they are betting on what they might be able to do in the future.

As we've seen in the recent past, it's difficult to predict what the possibilities are for LLMs and which limitations will hold. Currently it seems pure scaling won't be enough, but I don't think we've reached the limits with synthetic data and reasoning.

replies(4): >>42157469 #>>42157563 #>>42157650 #>>42157754 #
kokanee ◴[] No.42157563[source]
Tomorrow, LLMs will be able to perform slightly below-average versions of whatever humans are capable of doing tomorrow, because they work by predicting what a human would produce based on training data.
replies(2): >>42157593 #>>42158067 #
herval ◴[] No.42157593[source]
This severely discounts the fact that you're comparing a model that _knows the average about everything_ to a single human's capability. Also, they can do it instantly, instead of having to coordinate many humans over long periods of time. You can't straight up compare one LLM to one human.
replies(1): >>42158301 #
namaria ◴[] No.42158301[source]
"Knows the average relationship amongst all words in the training data" ftfy
replies(1): >>42159647 #
herval ◴[] No.42159647[source]
it seems that's sufficient to do a lot of things better than the average human - including coding, writing, creating poetry, summarizing and explaining things...
replies(1): >>42159924 #
namaria ◴[] No.42159924{3}[source]
A human specialized in any of those things vastly outperforms the average human, let alone an LLM.
replies(1): >>42165781 #
herval ◴[] No.42165781{4}[source]
You’re entirely missing the point