
317 points | laserduck | 3 comments
1. zachglabman (No.42165321)
LLMs are the wrong tool for most things, imo. They are great conversational assistants, but there is very little linguistic rigor to them, if any. They have almost no generalization ability, and anecdotally they fall for the same syntactic pitfalls they've fallen for since BERT. Models have gotten so good at predicting this n-dimensional "function" that sounds like human speech that we're getting distracted from their actual purpose and trying to apply them to all sorts of problems that rely on more than text-based training data.
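To be concrete about what I mean by "predicting a function that sounds like human speech": the core trick is mapping a token prefix to a distribution over the next token and sampling from it. A toy sketch of that idea, using a hypothetical bigram counter on a made-up corpus (nothing like a real transformer, just the shape of the prediction loop):

    import collections, random

    # Tiny made-up corpus; a real model trains on trillions of tokens.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Count observed next-tokens for each token (a crude stand-in for
    # the learned conditional distribution).
    counts = collections.defaultdict(collections.Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(prev):
        options = counts[prev]
        if not options:                 # dead end: no observed successor
            return None
        return random.choices(list(options), weights=list(options.values()))[0]

    # Autoregressive generation: feed each sampled token back in.
    tok, out = "the", ["the"]
    for _ in range(6):
        tok = next_token(tok)
        if tok is None:
            break
        out.append(tok)
    print(" ".join(out))  # locally fluent output, no understanding behind it

The output sounds plausible token by token, which is exactly why it's easy to mistake fluency for competence.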

Language is cool and immensely useful. LLMs, however, are fundamentally flawed in their basic assumptions about how language works. The distributional hypothesis is good for paraphrasing and summarization, but pretty atrocious for real reasoning. The notion of an idea living in a semantic "space" is not something a simple vector space can capture, and we are starting to see this matter in the minutiae now that scaling laws are coming into play. Chip design is a great example of a domain where we cannot rely on language alone to solve all our problems.
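To spell out the vector-space point: the distributional view reduces "meaning" to nearness in a space, which is great for relatedness and terrible for structure. A toy sketch with entirely made-up numbers (not from any real embedding model):

    import math

    # Hypothetical 3-d "embeddings", invented for illustration; real ones
    # have hundreds of dimensions learned from co-occurrence statistics.
    vec = {
        "boat":    [0.90, 0.10, 0.05],
        "ship":    [0.85, 0.15, 0.10],
        "justice": [0.05, 0.80, 0.40],
    }

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    print(cosine(vec["boat"], vec["ship"]))     # high: fine for paraphrase/summarization
    print(cosine(vec["boat"], vec["justice"]))  # low: relatedness is all the geometry gives you
    # There is no vector operation here that expresses negation, quantifiers,
    # or multi-step inference, which is where "reasoning" actually lives.

Similarity falls out of the geometry almost for free; logical structure doesn't.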

I hope to be proven wrong, but I'm still not sold on AGI being within reach. We'll probably need some pretty significant advances in large quantitative models, multi-modal models, and smaller, composable models of all kinds before we see AGI.

replies(1): >>42165617
2. nuancebydefault (No.42165617)
The first two paragraphs contradict my own results from working with LLMs. Some form of reasoning has definitely emerged. Some people will still find it not convincing enough to be called reasoning, but that's just a quantitative limitation at the moment.

With respect to AGI in its broadest sense: indeed, it is not within reach. I think that is for the better!

replies(1): >>42174265
3. zachglabman (No.42174265)
If a transformer had infinite data and parameters, I'm sure it could simulate human reasoning to a high degree. Humans don't work that way, though, so we may need a more general definition of artificial reasoning.