196 points yuedongze | 2 comments | | HN request time: 0.002s | source
cons0le ◴[] No.46195737[source]
I directly asked Gemini how to get world peace. It said the world should prioritize addressing climate change, inequality, and discrimination. Yeah - we're not gonna do any of that shit. So I don't know what the point of "superintelligent" AI is if we aren't even going to listen to it on the basic big-picture stuff. Any sort of "utopia" that people imagine AI bringing is doomed to fail, because we already can't cooperate without AI.
replies(7): >>46195753 #>>46195849 #>>46195909 #>>46195941 #>>46196273 #>>46196325 #>>46199450 #
1. potsandpans ◴[] No.46195849[source]
I don't believe that this is going to happen, but the primary arguments around a "superintelligent" AI revolve around removing the need for us to listen to it.

A superintelligent AI would have agency, and when incentives are not aligned it would be adversarial.

In the caricature scenario, we'd ask, "Super AI, how do we achieve world peace?" It would answer the same way, but then solve it in a non-human-centric way: reducing humanity's autonomy over the world.

Fixed: anthropogenic climate change resolved, inequality and discrimination reduced (by cutting the population by 90% and putting the rest in virtual reality).

replies(1): >>46196006 #
2. ASalazarMX ◴[] No.46196006[source]
If our AIs achieve something like this, but we manage to give them the same values the Minds in Iain M. Banks's Culture series had, I think humanity would be golden.