
321 points by jhunter1016 | 2 comments
twoodfin No.41878632
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike No.41878982
Ask a typical "everyday Joe" and they'll probably tell you OpenAI already has, given how ChatGPT has been reported and hyped. I've spoken with and helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
ilrwbwrkhv No.41879058
It's crazy to me that anybody thinks these models will end up as AGI. AGI is such a different concept from what is happening right now, which is pure probabilistic sampling of words, that anybody with half a brain who doesn't drink the Kool-Aid can easily tell the two apart.
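(For concreteness, "probabilistic sampling of words" boils down to something like the toy sketch below; the four-word vocabulary and the scores are made up purely for illustration, not taken from any real model.)

    # Toy sketch of next-token sampling: softmax over scores, then a weighted draw.
    # The vocabulary and logits below are invented for illustration only.
    import numpy as np

    vocab = ["lamb", "dog", "cat", "house"]
    logits = np.array([3.2, 1.1, 0.9, -0.5])  # scores a model might assign to each candidate token

    def softmax(x):
        e = np.exp(x - x.max())  # subtract the max for numerical stability
        return e / e.sum()

    probs = softmax(logits)                    # highest-scoring token gets most of the probability mass
    next_token = np.random.choice(vocab, p=probs)  # weighted random draw
    print(dict(zip(vocab, probs.round(3))), "->", next_token)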

I remember all the hype OpenAI put out before the release of GPT-2 or so, when they were so afraid, oh so afraid, to release the model, and now it's a non-issue. It's all just marketing gimmicks.

replies(7): >>41879115 #>>41880616 #>>41880738 #>>41880753 #>>41880843 #>>41881009 #>>41881023 #
usaar333 No.41880616
Something that actually could predict the next token 100% correctly would be omniscient.

So I hardly see why this is inherently crazy. At most I think it might not be scalable.

replies(5): >>41880785 #>>41880817 #>>41880825 #>>41881319 #>>41884267 #
edude03 No.41880785
What does it mean to predict the next token correctly, though? Arguably, (non-instruction-tuned) models already regurgitate their training data, such that they'd complete "Mary had a" with "little lamb" 100% of the time.
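(A quick way to sanity-check that kind of claim, sketched with the Hugging Face transformers library and the public gpt2 checkpoint; greedy decoding removes the sampling randomness, so whatever continuation the model has memorized comes out identically on every run. The exact words it prints aren't guaranteed, so treat this as an illustration rather than a claim about gpt2 specifically.)

    # Sketch: greedy completion of "Mary had a" with a base (non-instruction-tuned) model.
    # Assumes the transformers library and the public "gpt2" checkpoint are available.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Mary had a", return_tensors="pt")
    # do_sample=False means greedy decoding: always pick the single most likely next token,
    # so the output is deterministic from run to run.
    out = model.generate(**inputs, max_new_tokens=4, do_sample=False,
                         pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0], skip_special_tokens=True))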

On the other hand, if you mean giving the correct answer to your question 100% of the time, then I agree, though what about things that exist only in your mind ("guess the number I'm thinking of" type problems)?

replies(3): >>41880909 #>>41880961 #>>41881642 #
card_zero No.41880909
This highlights something that's wrong about arguments for AI.

I say: it's not human-like intelligence, it's just predicting the next token probabilistically.

Some AI advocate says: humans are just predicting the next token probabilistically, fight me.

The problem here is that "predicting the next token probabilistically" is a way of framing any kind of cleverness, up to and including magical, impossible omniscience. That doesn't mean it's the way every kind of cleverness is actually done, or could realistically be done. And it has to be the correct next token, where all the details of what's actually required are buried in that word "correct": sometimes it literally means the same as "likely", and other times the most likely token merely produces a reasonable, excusable, intelligence-esque effort.

replies(2): >>41881075 #>>41881663 #
dylan604 No.41881075
> Some AI advocate says: humans are just predicting the next token probabilistically, fight me.

We've all had conversations with humans who keep jumping in to finish your sentences, assuming they know what you're about to say, and who don't quite guess correctly. So AI evangelists say it's no worse than humans, and offer that as their proof. I kind of like their logic. They never claimed to have built HAL /s

replies(1): >>41881314 #
card_zero No.41881314
No worse than a human on autopilot.