
174 points Philpax | 1 comment | source
dicroce ◴[] No.43719918[source]
Doesn't even matter. The capabilities of the AI that's out NOW will take a decade or more to digest.
replies(3): >>43719953 #>>43722914 #>>43747545 #
EA-3167 ◴[] No.43719953[source]
I feel like it's already been pretty well digested and excreted for the most part; now we're into the re-ingestion phase until the bubble bursts.
replies(4): >>43719975 #>>43720000 #>>43720090 #>>43720159 #
dicroce ◴[] No.43720159[source]
Not even close. Software can now understand human language... this is going to mean computers can be in a lot more places than they ever could be. Furthermore, software can now understand the content of images... eventually this will have a wild impact on nearly everything.
replies(2): >>43720259 #>>43722780 #
burnte ◴[] No.43722780[source]
It doesn't understand anything; there is no understanding going on in these models. It takes input and generates output based on statistical math derived from its training set. It's Bayesian statistics and vector/matrix math. There is no cogitation or actual understanding.
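
For what it's worth, the "vector/matrix math" is easy to make concrete: at each step the model turns its internal state into a probability distribution over the vocabulary and samples from it. A minimal sketch in Python (toy sizes and a made-up vocabulary, not any real model's weights):

    import numpy as np

    # Toy next-token step: one matrix multiply plus a softmax.
    # All names and sizes here are illustrative, not a real model.
    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat"]

    hidden = rng.standard_normal(8)               # model's internal state
    W_out = rng.standard_normal((8, len(vocab)))  # learned output projection

    logits = hidden @ W_out                       # vector/matrix math
    probs = np.exp(logits - logits.max())         # softmax -> probabilities
    probs /= probs.sum()

    next_token = rng.choice(vocab, p=probs)       # sample the next word
    print(dict(zip(vocab, probs.round(3))), "->", next_token)

That's one generation step in miniature; whether the learned weights amount to "understanding" is exactly what's being argued here.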
replies(1): >>43723563 #
abletonlive ◴[] No.43723563[source]
This is insanely reductionist, a mindless regurgitation of what we already know about how the models work. Understanding is a spectrum, not a binary. We can measurably show that there is, in fact, some kind of understanding.

If you explain a concept to a child, you check for understanding by seeing if the output they produce checks out with your understanding of the concept. You don't peer into their brain to see if there are neurons and consciousness happening.

replies(1): >>43729248 #
burnte ◴[] No.43729248{3}[source]
The method of verification has no bearing on the validity of the conclusion. I don't open a child's head because there are side effects on the functioning of the child post brain-opening. However, I can look into the brain of an AI with no such side effects.

This is an example I saw two days ago without even searching. Here ChatGPT is telling someone that it independently ran a benchmark on its MacBook: https://pbs.twimg.com/media/Goq-D9macAApuHy?format=jpg

I'm reasonably sure ChatGPT doesn't have a MacBook and didn't really run the benchmarks. But it DID produce exactly what you would expect a human to say, which is what it is programmed to do. No understanding, just rote repetition.

I won't post more because there are a billion of them. LLMs are great, but they're not intelligent, they don't understand, and the output still needs to be validated before use. We have a long way to go, and that's ok.