
raspasov ◴[] No.44485275[source]
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic, and very poor at spatial reasoning; as a result, they're poor at connecting concepts together.

Just ask any of the crown-jewel LLMs "What's the biggest unsolved problem in the [insert any] field?"

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
ninetyninenine ◴[] No.44485846[source]
Alright, let’s get this straight.

You’ve got people foaming at the mouth anytime someone mentions AGI, like it’s some kind of cult prophecy. “Oh it’s poorly defined, it’s not around the corner, everyone talking about it is selling snake oil.” Give me a break. You don’t need a perfect definition to recognize that something big is happening. You just need eyes, ears, and a functioning brain stem.

Who cares if AGI isn’t five minutes away. That’s not the point. The point is we’ve built the closest thing to a machine that actually gets what we’re saying. That alone is insane. You type in a paragraph about your childhood trauma and it gives you back something more coherent than your therapist. You ask it to summarize a court ruling and it doesn’t need to check Wikipedia first. It remembers context. It adjusts to tone. It knows when you’re being sarcastic. You think that’s just “autocomplete”? That’s not autocomplete, that’s comprehension.

And the logic complaints, yeah, it screws up sometimes. So do you. So does your GPS, your doctor, your brain when you’re tired. You want flawless logic? Go build a calculator and stay out of adult conversations. This thing is learning from trillions of words and still does better than half the blowhards on HN. It doesn’t need to be perfect. It needs to be useful, and it already is.

And don’t give me that “it sounds profound but it’s really just crap” line. That’s 90 percent of academia. That’s every self-help book, every political speech, every guy with a podcast and a ring light. If sounding smarter than you while being wrong disqualifies a thing, then we better shut down half the planet.

Look, you’re not mad because it’s dumb. You’re mad because it’s not that dumb. It’s close. Close enough to feel threatening. Close enough to replace people who’ve been coasting on sounding smart instead of actually being smart. That’s what this is really about. Ego. Fear. Control.

So yeah, maybe it’s not AGI yet. But it’s smarter than the guy next to you at work. And he’s got a pension.

replies(2): >>44486895 #>>44486925 #
1. raspasov ◴[] No.44486925[source]
There's a lot in here. I agree with a lot of it.

However, you've shifted the goalposts from AGI to being useful in specific scenarios. I have no problem with that statement. It can write decent unit tests and even find trivial but hard-to-spot mistakes in code. But again, why can it do that? Because a version of that same mistake is in the enormous data set. It's a fantastic search engine!

Yet, it is not AGI.

replies(1): >>44487268 #
2. ninetyninenine ◴[] No.44487268[source]
You say it's just a fancy search engine. Great. You know what else is a fancy search engine? Your brain. You think you're coming up with original thoughts every time you open your mouth? No. You're regurgitating every book, every conversation, every screw-up you've ever witnessed. The brain is pattern matching with hormones. That’s it.

Now you say I'm moving the goalposts. No, I’m knocking down the imaginary ones. Because this whole AGI debate has turned into a religion. “Oh it’s not AGI unless it can feel sadness, do backflips, and write a symphony from scratch.” Get over yourself. We don’t even agree on what intelligence is. Half the country thinks astrology is real and you’re here demanding philosophical purity from a machine that can debug code, explain calculus, and speak five languages at once? What are we doing?

You admit it’s useful. You admit it catches subtle bugs, writes code, gives explanations. But then you throw your hands up and go, “Yeah, but that’s just memorization.” You mean like literally how humans learn everything? You think Einstein invented relativity in a vacuum? No. He stood on Newton, who stood on Galileo, who probably stood on a guy who thought the stars were angry gods. It’s all remixing. Intelligence isn’t starting from zero. It’s doing something new with what you’ve seen.

So what if the model’s drawing from a giant dataset? That’s not a bug. That’s the point. It’s not pulling one answer like a Google search. It’s constructing patterns, responding in context, and holding a conversation that feels coherent. If a human did that, we’d say they’re smart. But if a model does it, suddenly it’s “just autocomplete.”

You know who moves the goalposts? The people who can’t stand that this thing is creeping into their lane. So yeah, maybe it's not AGI in your perfectly polished textbook sense. But it's the first thing that makes the question real. And if you don’t see that, maybe you’re not arguing from logic. Maybe you’re just pissed.

replies(1): >>44487751 #
3. raspasov ◴[] No.44487751[source]
Of course, I have no original thoughts. Something is not created out of nothing. That would be astrology, perhaps :).

But the difference between a human and an LLM is that humans can go out in the world and test their hypotheses. Literally every second is an interaction with a feedback loop, even typing this response to you right now. LLMs currently have to wait for the next six-month retraining cycle. I am not saying that AGI cannot be created. In theory it can be, but we are milking the crap out of a local maximum we've currently found, and that is definitely not the final answer.

PS: Also, when I said "it can spot mistakes," I probably gave it too much credit. In one case, it presented several potential issues, and I happened to notice that one of them was a real problem. In many cases, the LLM suggests issues that are either hypothetical or nonexistent.