
321 points by jhunter1016 | 2 comments
twoodfin ◴[] No.41878632[source]
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike ◴[] No.41878982[source]
Ask a typical "everyday Joe" and they'll probably tell you OpenAI already has, given how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
ilrwbwrkhv ◴[] No.41879058[source]
It's crazy to me that anybody thinks these models will end up as AGI. AGI is such a different concept from what is happening right now, which is pure probabilistic sampling of words, something anybody with half a brain who doesn't drink the Kool-Aid can easily identify.
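
To spell out what "probabilistic sampling of words" means, here's a standalone toy sketch (no real model; the vocabulary and scores are made up): the model assigns a score (logit) to every candidate next word, softmax turns the scores into probabilities, and the next word is drawn from that distribution.

    import math, random

    # Made-up vocabulary and next-word logits standing in for a real model's output
    vocab = ["lamb", "dog", "cat", "idea"]
    logits = [3.2, 1.1, 0.9, -0.5]

    # Softmax: exponentiate and normalize so the scores become probabilities
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw the next word according to those probabilities
    next_word = random.choices(vocab, weights=probs, k=1)[0]
    print(next_word)  # "lamb" most of the time, but not always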

I remember all the hype OpenAI drummed up before the release of GPT-2 or something, where they were so afraid, ooh so afraid, to release this stuff, and now it's a non-issue. It's all just marketing gimmicks.

replies(7): >>41879115 #>>41880616 #>>41880738 #>>41880753 #>>41880843 #>>41881009 #>>41881023 #
usaar333 ◴[] No.41880616[source]
Something that actually could predict the next token 100% correctly would be omniscient.

So I hardly see why this is inherently crazy. At most I think it might not be scalable.

replies(5): >>41880785 #>>41880817 #>>41880825 #>>41881319 #>>41884267 #
edude03 ◴[] No.41880785[source]
What does it mean to predict the next token correctly, though? Arguably, (non-instruction-tuned) models already regurgitate their training data such that they'd complete "Mary had a" with "little lamb" 100% of the time.
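
A minimal sketch of that determinism, assuming the Hugging Face transformers package and the public gpt2 checkpoint (do_sample=False forces greedy decoding, so the base model produces the same completion every run):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("Mary had a", return_tensors="pt")
    # Greedy decoding: always take the argmax token, so the output is deterministic
    outputs = model.generate(**inputs, max_new_tokens=2, do_sample=False)
    print(tokenizer.decode(outputs[0]))  # likely "Mary had a little lamb"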

On the other hand, if you mean give the correct answer to your question 100% of the time, then I agree, though then what about things that are only in your mind ("guess the number I'm thinking" type problems)?

replies(3): >>41880909 #>>41880961 #>>41881642 #
1. cruffle_duffle ◴[] No.41880961[source]
But now you are entering into philosophy. What does a “correct answer” even mean for a question like “is it safe to lick your fingers after using a soldering iron with leaded solder?”. I would assert that there is no “correct answer” to a question like that.

Is it safe? Probably. But it depends, right? How did you handle the solder? How often are you using the solder? Were you wearing gloves? Did you wash your hands before licking your fingers? What is your age? Why are you asking the question? Did you already lick your fingers and need to know if you should see a doctor? Is it hypothetical?

There is no “correct answer” to that question. Some answers are better than others, yes, but you cannot have a “correct answer”.

And I did say we are entering into philosophy here: what it means to know something, and even what truth itself means.

replies(1): >>41881141 #
2. _blk ◴[] No.41881141[source]
Great breakdown. Yes, the older you are, the safer it is.

Speaking of the Microsoft partnership: I can totally see a whole series of Windows 95 style popup dialogs asking you all those questions one by one in the next product iteration.