
321 points jhunter1016 | 1 comment
twoodfin No.41878632
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike No.41878982
Ask a typical "everyday Joe" and they'll probably tell you OpenAI already has, given how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
ilrwbwrkhv No.41879058
It's crazy to me that anybody thinks these models will end up as AGI. AGI is such a different concept from what is happening right now, which is pure probabilistic sampling of words, and anybody with half a brain who doesn't drink the Kool-Aid can easily see the difference.
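
To be concrete, "probabilistic sampling of words" means roughly the following. A minimal Python sketch, where the tiny vocabulary and the logits are made up and stand in for a real model's output:

    import numpy as np

    # At each step a model emits one score (logit) per vocabulary token;
    # softmax turns those scores into a probability distribution and the
    # next token is drawn from it. Vocab and logits are illustrative only.
    vocab = ["little", "big", "lamb", "the"]
    logits = np.array([3.2, 0.1, -1.0, 0.5])

    probs = np.exp(logits - logits.max())
    probs /= probs.sum()  # softmax over the vocabulary

    rng = np.random.default_rng(0)
    print(rng.choice(vocab, p=probs))  # usually "little", but not always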

I remember all the hype OpenAI generated before the release of GPT-2 or so, when they were so afraid, oh so afraid, to release this stuff, and now it's a non-issue. It's all just marketing gimmicks.

replies(7): >>41879115 #>>41880616 #>>41880738 #>>41880753 #>>41880843 #>>41881009 #>>41881023 #
usaar333 No.41880616
Something that actually could predict the next token 100% correctly would be omniscient.

So I hardly see why this is inherently crazy. At most I think it might not be scalable.

replies(5): >>41880785 #>>41880817 #>>41880825 #>>41881319 #>>41884267 #
edude03 No.41880785
What does it mean to predict the next token correctly, though? Arguably, (non-instruction-tuned) models already regurgitate their training data, such that they'd complete "Mary had a" with "little lamb" 100% of the time.
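
This is easy to test with an off-the-shelf model; a quick sketch, assuming the Hugging Face transformers library and its small gpt2 checkpoint (whether greedy GPT-2 actually says "little lamb" is the thing being checked, not a given):

    from transformers import pipeline

    # Greedy decoding (do_sample=False) always picks the single most
    # probable next token, so memorized phrases come back verbatim.
    generator = pipeline("text-generation", model="gpt2")
    out = generator("Mary had a", max_new_tokens=2, do_sample=False)
    print(out[0]["generated_text"])  # is it "Mary had a little lamb"?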

On the other hand, if you mean giving the correct answer to your question 100% of the time, then I agree, though then what about things that are only in your mind ("guess the number I'm thinking of" type problems)?

replies(3): >>41880909 #>>41880961 #>>41881642 #
usaar333 No.41881642
> What does it mean to predict the next token correctly, though? Arguably, (non-instruction-tuned) models already regurgitate their training data, such that they'd complete "Mary had a" with "little lamb" 100% of the time.

The unseen test data.

Obviously omniscience is physically impossible. The point, though, is that the better next-token prediction gets, the more intelligent the system must be.
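
In practice, "better next-token prediction" on unseen test data is measured as cross-entropy (equivalently, perplexity) on held-out text. A minimal sketch, again assuming the Hugging Face transformers library and the small gpt2 checkpoint:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    # Any text the model was not trained on; this sentence is a placeholder.
    held_out = "The unseen test sentence goes here."
    inputs = tok(held_out, return_tensors="pt")

    with torch.no_grad():
        # Passing labels makes the model return the mean next-token
        # cross-entropy over the sequence; lower means better prediction.
        loss = model(**inputs, labels=inputs["input_ids"]).loss

    print(f"cross-entropy {loss.item():.3f}, perplexity {loss.exp().item():.1f}")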