
321 points laserduck | 2 comments
fsndz No.42157451
They want to throw LLMs at everything, even where it makes no sense. The same is true of the whole AI agent craze: https://medium.com/thoughts-on-machine-learning/langchains-s...
ReptileMan No.42157658
Isn't that the case with every new technology? There was a time when people tried to cook everything in a microwave.
ksynwa No.42157765
Microwave sellers did not become trillion-dollar companies off that hype.
ReptileMan No.42157784
Mostly because the marginal cost of microwaves was not close to zero.
ksynwa No.42157935
Mostly because they were not claiming that sentient microwaves, ones that would cook your food for you, were just around the corner, claims that the most respected media outlets then parroted uncritically.
1. rsynnott No.42158168
I mean, they were at one point making pretty extravagant claims about microwaves, but to a less credulous audience. The trouble with LLMs is that they look like magic if you don't look too hard, particularly to laypeople. It's far easier to buy into a narrative that they actually _are_ magic, or will become so.
2. lxgr No.42158439
I feel like what makes this a bit different from just regular old sufficiently advanced technology is the combination of two things:

- LLMs are extremely competent at surface-level pattern matching and manipulation of the kind we'd previously assumed only AGI would be capable of.

- A large fraction of tasks (and by extension jobs) that we used to, and largely still do, consider to be "knowledge work", i.e. requiring a high level of skill and intelligence, are in fact surface-level pattern matching and manipulation.

Reconciling these two facts has some uncomfortable implications, and calling LLMs "actually intelligent" lets us avoid confronting them.