
265 points by ctoth | 6 comments
plaidfuji (No.43748358)
Gemini 2.5 Pro is certainly a tipping point for me. Previous LLMs have been very impressive, especially on coding tasks (unsurprising, as the answers to these have a preponderance of publicly available data). But outside of a coding assistant, LLMs until now felt like an extra helpful and less garbage-filled Google search.

I just used 2.5 Pro to help write a large research proposal (with significant funding on the line). Without going into detail, it felt to me like the only reason it couldn’t write the entire thing itself is because I didn’t ask it to. And by “ask it”, I mean: enter into the laughably small chat box the entire grant solicitation + instructions, a paragraph of general direction for what I want to explore, and a bunch of unstructured artifacts from prior work, and turn it loose. I just wasn’t audacious enough to try that from the start.
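(As a concrete illustration of that "turn it loose" workflow, here is a minimal sketch assuming the google-generativeai Python SDK; the model id, file names, and directory layout are hypothetical placeholders, not details from the comment:)

    # Illustrative sketch only: one large-context request of the kind
    # described above. File paths and model id are hypothetical.
    from pathlib import Path
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-2.5-pro")  # placeholder model id

    # Everything goes in as a single prompt: the full solicitation,
    # a paragraph of direction, and unstructured artifacts from prior work.
    solicitation = Path("grant_solicitation.txt").read_text()
    direction = "Explore <research area>; draft aims, milestones, and a timeline."
    artifacts = [p.read_text() for p in Path("prior_work").glob("*.txt")]

    prompt = "\n\n---\n\n".join([solicitation, direction, *artifacts])
    response = model.generate_content(prompt)
    print(response.text)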

But as the deadline approached, I got more and more unconstrained in how far back I would step and let it take the reins, doing essentially what’s described above but on isolated sections. It would do ridiculously complex stuff, like generating project plans and timelines and cross-referencing them correctly with other sections of text. I can safely say it was a 10x force multiplier, and that’s being conservative.

For scientific questions (ones that should have publicly available data, not ones relying on internal data), I have started going to 2.5 Pro over senior experts on my own team. And I’m convinced at this point if I were to connect our entire research data corpus to Gemini, that balance would shift even further. Why? Because I can trust it to be objective - not inject its own political or career goals into its answers.

I’m at the point where I feel the main thing holding back “AGI” is people’s lack of audacity in pushing its limits, plus maybe context windows and compute availability. I say this as someone who’s been a major skeptic up until this point.

MoonGhost (No.43749224)
LLMs at this point are stateless calculators without personal experience, life goals, obligations, etc. Until recently, people expected AI to arrive as a character like the Terminator or HAL. Now we have intelligence separate from 'soul'. Can a calculator be AGI? It can be Artificial, General, and Intelligent. We may need another word for a 'creature' with some features of a living being.
dcow (No.43750519)
The term AI has always bothered me for this reason. If the thing is intelligent, then there’s nothing artificial about it… it’s almost an oxymoron.

There are two subtly different definitions in use: (1) “like intelligence in useful ways, but not actually”, and (2) “actually intelligent, but not of human wetware”. I take the A in AGI to be of type (2).

LLMs are doing (1), right now. They may have the “neurological structure” required for (2), but a being that is General and Intelligent needs to compress its context window and persist it to storage every night as it sleeps. It needs memory and agency. It needs to be able to learn in real time, self-adjusting its own weights. And if it’s doing all that, then who is to say it doesn't have a soul?
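(Purely as a sketch of the loop described above, in hypothetical Python; every class and method here is a stand-in invented for illustration, not a real API:)

    # Illustrative pseudocode for the "compress, persist, self-adjust"
    # nightly loop. Nothing here corresponds to a real library.
    from dataclasses import dataclass, field

    @dataclass
    class MemoryStore:
        summaries: list[str] = field(default_factory=list)

        def append(self, s: str) -> None:
            self.summaries.append(s)

    def nightly_consolidation(model, context_window: list[str],
                              store: MemoryStore) -> list[str]:
        # 1. Compress the day's raw context into a compact summary.
        summary = model.summarize("\n".join(context_window))  # hypothetical call
        # 2. Persist it so it survives the context-window reset ("sleep").
        store.append(summary)
        # 3. Self-adjust: fold consolidated memories back into the weights.
        model.fine_tune(store.summaries)                      # hypothetical call
        # 4. Wake with an empty window but updated weights.
        return []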

Jensson (No.43750620)
> If the thing is intelligent, then there’s nothing artificial about it… it’s almost an oxymoron.

Artificial means human-made; if we made a thing that is intelligent, then it is artificial intelligence.

It is like "artificial insemination": a human-designed system to inseminate rather than the natural way. It is still a proper insemination; artificial doesn't mean "fake", it just means unnatural/human-made.

europeanNyan (No.43750727)
> Artificial means human-made; if we made a thing that is intelligent, then it is artificial intelligence.

Aren't humans themselves essentially human-made?

Maybe a better definition would be non-human (or inorganic, if we want to include intelligence like that of dolphins)?

caconym_ (No.43754375)
> Aren't humans themselves essentially human-made?

No, not in the sense in which the word "made" is being used here.

> Maybe a better definition would be non-human (or inorganic, if we want to include intelligence like that of dolphins)?

Neither of these works. Calling intelligence in animals "artificial" is absurd, and "inorganic" arbitrarily excludes "head cheese" style approaches to building artificial intelligence.

"Artificial" strongly implies mimicry of something that occurs naturally, and is derived from the same root as "artifice", which can be defined as "to construct by means of skill or specialized art". This obviously excludes the natural biological act of reproduction that produces a newborn human brain (and support equipment) primed to learn and grow; reportedly, sometimes people don't even know they're pregnant until they go into labor (and figure out that's what's happening).

kridsdale3 (No.43756396)
If I asked my wife if she made our son, she would say yes. It is literally called "labour". Then there is "emotional labour" that lasts for 10 years to do the post-training.
caconym_ (No.43757366)
I drove my car to work today, and while I was at work I drove a meeting. Does this mean my car is a meeting? My meeting was a car?

It turns out that some (many, in fact) words mean different things in different contexts. My comment makes an explicit argument concerning the connotations and nuances of the word "made" used in this context, and you have not responded to that argument.

dcow (No.43764034)
Judging by this response, I’m guessing you don’t have children of your own. Otherwise you might understand the context.
caconym_ (No.43775638)
Your guess is wrong!

Maybe you should have written a substantive response to my comments instead of trying and failing to dunk on me. Maybe you don't understand as much as you think you do.

dcow (No.43778723)
I honestly don’t care enough to have even remotely thought of my reply as trying to dunk on anything. You’re awfully jacked up for a comment so far down an old thread that you and I are probably the only ones who will ever read it.
caconym_ (No.43778855)
Okay!