
LLM Inevitabilism

(tomrenner.com)
1613 points | SwoopsFromAbove | 1 comment | source
Boristoledano ◴[] No.44569257[source]
Disclaimer: I am building an AI web retriever (Linkup.so), so I have a natural bias.

LLMs aren’t just a better Google, they’re a redefinition of search itself.

Traditional search is an app: you type, scroll through ads and 10 blue links, and dig for context. That model worked when the web was smaller, but now it’s overwhelming.

LLMs shift search from an app to infrastructure: a way to get contextualized, synthesized answers directly, tailored to your specific need. Yes, they can hallucinate, but so can the web. It's not about replacing Google; it's about replacing the experience of searching (in fact, there will probably be less and less of an 'experience' of searching at all).

replies(1): >>44569444 #
1. pickledoyster ◴[] No.44569444[source]
I believe there are some debatable assumptions baked into your comment, so I have to ask. Do you believe that all possible knowledge ("answers") is already online? If not, how is new knowledge supposed to appear online: what incentive is there to put it up on the web if the last open gateways to it are killed by this LLM "experience"? And, if new information must be added continuously, how is it supposed to be vetted?

That last one is important, since you state:

> That model worked when the web was smaller, but now it’s overwhelming.

Because it seems like the "experience" changes, but the underlying model of sucking up data off the web does not. If it was "overwhelming" in the past, how is it supposed to be easier now, with subsidized slop machines putting new information up at full tilt?