
322 points | laserduck | 1 comment | source
EgoIncarnate ◴[] No.42157406[source]
The article seems to be be based on the current limitations of LLMs. I don't think YC and other VCs are betting on what LLMs can do today, I think they are betting on what they might be able to do in the future.

As we've seen in the recent past, it's difficult to predict what the possibilities are for LLMs and which limitations will hold. Currently it seems pure scaling won't be enough, but I don't think we've reached the limits of synthetic data and reasoning.

replies(4): >>42157469 #>>42157563 #>>42157650 #>>42157754 #
kokanee ◴[] No.42157563[source]
Tomorrow, LLMs will be able to perform slightly below-average versions of whatever humans are capable of doing tomorrow, because they work by predicting what a human would produce based on training data.
replies(2): >>42157593 #>>42158067 #
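The "predicting what a human would produce" point can be caricatured in a few lines. This is a toy sketch, not how any real LLM is built: a bigram counter stands in for the transformer, but it shows the same limitation the comment describes, namely that the model can only emit continuations present in its training data.

```python
from collections import Counter, defaultdict

# Toy stand-in for an LLM: predict the next word purely from counts
# over training text. Everything here (the corpus, the predict() helper)
# is invented for illustration.
training_text = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word seen in training, or None."""
    if word not in bigrams:
        return None  # no training data -> no prediction at all
    return bigrams[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- the most common continuation in the corpus
print(predict("dog"))  # None -- outside the training distribution
```

Real models generalize far better than a bigram table, but the underlying objective is the same: reproduce plausible human continuations, which is the basis for the "slightly below-average human" framing above.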
steveBK123 ◴[] No.42158067[source]
It's worth considering:

1) all the domains where there is no training data

Many professions are far less digital than software, protect IP more, and are much more akin to an apprenticeship system.

2) the adaptability of humans in learning vs any AI

Think about how many years we have spent trying to train cars to drive themselves, while humans manage it with a 50-hour training course.

3) humans' ability to innovate vs AI's ability to replicate

A lot of creative work is adaptation, but humans do far more than that, synthesizing different ideas to create completely new works. Could an LLM produce the 37th Marvel movie? Probably. Could an LLM create... Inception? Probably not.