
321 points jhunter1016 | 1 comment | source
twoodfin ◴[] No.41878632[source]
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike ◴[] No.41878982[source]
Ask a typical "everyday joe" and they'll probably tell you they already did due to how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
ilrwbwrkhv ◴[] No.41879058[source]
It's crazy to me that anybody thinks these models will end up as AGI. AGI is a very different concept from what is happening right now, which is pure probabilistic sampling of words, something anybody with half a brain who doesn't drink the Kool-Aid can easily identify.

I remember all the hype OpenAI drummed up before the release of GPT-2 or so, where they were so afraid, oh so afraid, to release this stuff, and now it's a non-issue. It's all just marketing gimmicks.
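(For anyone unfamiliar with what "probabilistic sampling of words" means here: each step, a language model outputs a score per vocabulary token and draws the next token from the resulting distribution. A minimal sketch with a toy vocabulary and made-up logits, not any real model's API:)

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Draw one token index from a temperature-scaled softmax over logits."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Inverse-CDF sampling: walk the cumulative distribution.
    r = random.random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

# Toy example: fake logits over a 3-word vocabulary; a real model
# would produce these scores from the context at every step.
vocab = ["the", "cat", "sat"]
idx = sample_next_token([2.0, 1.0, 0.1], temperature=0.7)
print(vocab[idx])
```

Lower temperatures concentrate probability on the highest-scoring token; that's the whole sampling loop, repeated once per generated word.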

replies(7): >>41879115 #>>41880616 #>>41880738 #>>41880753 #>>41880843 #>>41881009 #>>41881023 #
digging ◴[] No.41881009[source]
> pure probabilistic sampling of words that anybody with a half a brain who doesn't drink the Kool-Aid can easily identify.

Your confidence is inspiring!

I'm just a moron, a true dimwit. I can't understand how strictly non-intelligent functions like word prediction can appear to develop a world model, a la the Othello Paper[0]. Obviously, it's not possible that intelligence emerges from non-intelligent processes. Our brains, as we all know, are formed around a kernel of true intelligence.

Could you possibly spare the time to explain this phenomenon to me?

[0] https://thegradient.pub/othello/

replies(3): >>41881076 #>>41881531 #>>41884745 #
Jerrrrrrry ◴[] No.41881076{3}[source]
I would suggest you stop interacting with the "head-in-sand" crowd.

Liken them to climate deniers or whatever your flavor of "anti-Kool-Aid" is.

replies(1): >>41881124 #
digging ◴[] No.41881124{4}[source]
Actually, that's quite a good analogy. It's just weird how prolific the view is in my circles compared to climate-change denial. I suppose I'm really writing for lurkers, though, not for the people I'm responding to.
replies(1): >>41881331 #
Jerrrrrrry ◴[] No.41881331{5}[source]

> I'm really writing for lurkers though, not for the people I'm responding to.
We all did. Now our writing will be scraped, analysed, correlated, and weaponized against our intentions.

Assume you are arguing against a bot and that it is using you to further re-train its talking points for adversarial purposes.

It's not like an AGI would do _exactly_ that before it decided to let us know what's up, anyway, right?

(It may as well be amongst us now, as it will read this eventually.)