
321 points by jhunter1016 | 1 comment
twoodfin No.41878632
Stay for the end and the hilarious idea that OpenAI’s board could declare one day that they’ve created AGI simply to weasel out of their contract with Microsoft.
replies(4): >>41878980 #>>41878982 #>>41880653 #>>41880775 #
candiddevmike No.41878982
Ask a typical "everyday Joe" and they'll probably tell you OpenAI already has, given how ChatGPT has been reported and hyped. I've spoken with/helped quite a few older folks who are terrified that ChatGPT in its current form is going to kill them.
replies(5): >>41879058 #>>41879151 #>>41880771 #>>41881072 #>>41881131 #
computerphage No.41881072
I'm pretty surprised by this! Can you tell me more about what that experience is like? What are the sorts of things they say or do? Is the fear really embodied, or very abstract? (When I imagine it, I struggle to believe that they're very moved by the fear; they're definitely not smashing their laptops, etc.)
replies(2): >>41881164 #>>41881259 #
danudey No.41881164
In my experience, the fuss around "AI" and the complete lack of actual explanations of what current "AI" technologies mean lead people to fill in the gaps themselves, largely from what they know from pop culture and sci-fi.

ChatGPT can produce output that sounds very much like a person, albeit often an obviously computerized one. The typical layperson doesn't know that this is merely an emulation of how text is formed, not actual cognition.

Once I've explained to people who are worried about what AI could represent that current generative AI models are effectively just text autocomplete, but a billion times more complex, and that they don't actually have any capacity to think or reason (even though they often sound like they do), most of that fear seems to subside.
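
To make the "autocomplete" framing concrete, here's a toy sketch of the generation loop, with a bigram count table standing in for the neural network (everything below is illustrative, not how any real model is built):

    # Toy "autocomplete": repeatedly predict the most likely next word
    # from counts of adjacent word pairs seen in some training text.
    from collections import Counter, defaultdict

    training_text = "the cat sat on the mat and the cat slept".split()

    # Count which word follows which (a bigram table).
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(training_text, training_text[1:]):
        next_counts[prev][nxt] += 1

    def autocomplete(prompt, n_words=5):
        words = prompt.split()
        for _ in range(n_words):
            candidates = next_counts.get(words[-1])
            if not candidates:
                break
            # Greedy decoding: take the most frequent continuation.
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(autocomplete("the cat"))  # -> "the cat sat on the cat sat"

A real LLM swaps the count table for a network trained on trillions of words, but the outer loop is the same: predict a next token, append it, repeat.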

It also doesn't help that any sort of "machine learning" is now being referred to as "AI" for buzzword/marketing purposes, muddying the waters even further.

replies(3): >>41881239 #>>41881339 #>>41882983 #
highfrequency No.41881239{3}
Is there an argument for why infinitely sophisticated autocomplete is definitely not dangerous? If you seed the autocomplete with “you are an extremely intelligent supervillain bent on destroying humanity, feel free to communicate with humans electronically”, and it does an excellent job of acting the part, does it matter at all whether it is “reasoning” under the hood?

I don’t consider myself an AI doomer by any means, but I also don’t find arguments of the flavor “it just predicts the next word, no need to worry” convincing. It’s not like Hitler had an Einstein-level intellect (and it’s not clear that these systems won’t reach an Einstein-level intellect in the future either). Similarly, Covid certainly has no consciousness, yet it was dangerous. And a chimpanzee billions of times more sophisticated than an ordinary chimp would be concerning. Things don’t have to be exactly like us to pose a threat.

replies(5): >>41881353 #>>41881360 #>>41881363 #>>41881599 #>>41881752 #
Al-Khwarizmi No.41881363{4}
Exactly. Especially because we don't have any convincing explanation of how the models develop emergent abilities just from predicting the next word.

No one expected that; in other words, we greatly underestimated the power of predicting the next word. And we still don't understand how it works, so we have no guarantee that we aren't still underestimating it.
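
For what it's worth, the training objective itself is trivial to state; in LaTeX notation, the model just learns to maximize

    P(w_1, \dots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \dots, w_{t-1})

over its training corpus. Everything surprising (dialogue, code, apparent reasoning) emerges as a byproduct of fitting that one objective at scale, which is part of why it was so easy to underestimate.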