
AI 2027

(ai-2027.com)
949 points by Tenoke | 6 comments
Vegenoid ◴[] No.43585338[source]
I think we've actually had capable AIs for long enough now to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. Today's models are much better at the things they were already good at, and a few new capabilities are a big deal, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same ways, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen the signs of a runaway singularity as some thought was likely.

I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.

replies(9): >>43585429 #>>43585830 #>>43586381 #>>43586613 #>>43586998 #>>43587074 #>>43594397 #>>43619183 #>>43709628 #
boznz ◴[] No.43587074[source]
> we haven't seen the signs of a runaway singularity as some thought was likely.

The signs are not there, but while we may not be on an exponential curve (which would be difficult to see), we are definitely on a steep upward one, which may get steeper or may fizzle out if LLMs can only reach human-level 'intelligence' but not surpass it. The original article was a fun read though, and 360,000 words shorter than my very similar novel :-)

replies(2): >>43587434 #>>43597304 #
grey-area ◴[] No.43587434[source]
LLMs don't have any sort of intelligence at present; they have a large corpus of data and can produce modified copies of it.
replies(2): >>43588092 #>>43588156 #
1. EMIRELADERO ◴[] No.43588156[source]
While it's certainly not human-level intelligence, I don't see how you could say they don't have any sort of it. There's clearly generalization there. What would you say is the threshold?
replies(1): >>43588871 #
2. dangus ◴[] No.43588871[source]
Seems like you’d have to prove the inverse.

The threshold would be “produce anything that isn’t identical or a minor transfiguration of input training data.”

In my experience, my AI assistant in my code editor can't do a damn thing that isn't widely documented, and it sometimes botches tasks that are thoroughly documented (for example, hallucinating parameter names that don't exist). I see this whenever I reach the edge of common use cases, where going beyond the documentation requires following an implication.

For example, AI can't seem to help me in any way with Terraform dynamic credentials, because the documentation is very sparse and the feature shows up in hardly any blog posts or examples online. By definition the variable is populated dynamically, so real values aren't shown anywhere. I get a lot of irrelevant nonsense suggestions on how to fix it.

AI is great as an "amazing search engine", and it can string together combinations of logic that already exist in documentation and examples while changing some names here and there, but what looks like true understanding is really just token prediction.
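
To make "token prediction" concrete, here is a minimal sketch of greedy next-token generation. The bigram table, tokens, and probabilities are made up purely for illustration; a real LLM learns a distribution over a huge vocabulary with a neural network rather than using a lookup table like this.

    # Toy illustration of greedy next-token prediction.
    # The "model" is a hard-coded bigram table mapping a token to the
    # probabilities of possible next tokens. Real LLMs learn these
    # distributions from training data; nothing here reflects any
    # actual model's weights or vocabulary.
    BIGRAMS = {
        "terraform": {"plan": 0.5, "apply": 0.4, "destroy": 0.1},
        "plan":      {"-out": 0.6, "shows": 0.4},
        "apply":     {"-auto-approve": 0.7, "changes": 0.3},
    }

    def next_token(tokens):
        """Greedy decoding: return the most probable continuation of the last token."""
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:
            return None  # no known continuation: stop generating
        return max(candidates, key=candidates.get)

    def generate(prompt, max_tokens=5):
        tokens = prompt.split()
        for _ in range(max_tokens):
            tok = next_token(tokens)
            if tok is None:
                break
            tokens.append(tok)
        return " ".join(tokens)

    print(generate("terraform"))  # -> "terraform plan -out"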

IMO the massive amount of training data is making the man behind the curtain look way better than he is.

replies(1): >>43588926 #
3. EMIRELADERO ◴[] No.43588926[source]
That's creativity, not intelligence. LLMs can be intelligent while having very little creativity (or even none at all). I don't believe one necessarily requires the other.
replies(1): >>43588987 #
4. dangus ◴[] No.43588987{3}[source]
That’s a garbage cop-out. Intelligence without creativity is not what AI companies are promising to deliver.

Intelligence without creativity is like selling dictionaries.

replies(1): >>43589066 #
5. EMIRELADERO ◴[] No.43589066{4}[source]
That was an extreme example to illustrate the concept. My point is that reduced/little creativity (which is what the current models have) is not indicative of a total lack of intelligence.
replies(1): >>43592607 #
6. dangus ◴[] No.43592607{5}[source]
Boy have I got a dictionary to sell you!