
46 points petethomas | 3 comments
drellybochelly No.44397874
Not a big fan of deferring morality to ChatGPT or any AI.
replies(2): >>44397958 >>44398098
1. bevr1337 No.44397958
> deferring

Great choice of words. There must be an agenda to portray AI as prematurely sentient and uncontrollable, and I worry about what that means for accountability in the future.

replies(1): >>44398639
2. hinterlands No.44398639
It's being used in ways where biases matter. Further, the companies that make it encourage these uses by styling it as a friendly buddy you can talk to when you want to solve problems or just chat about what's ailing you.

It's no different from coming across a cluster of Wikipedia articles that promotes some vile flavor of revisionist history. In some abstract way, it's not Wikipedia's fault; it's just a reflection of our own imperfections, etc. But more practically, it's something we want fixed if kids are using it for self-study.

replies(1): >>44399019
3. bevr1337 No.44399019
> It's no different

There are similarities, I agree, but there are huge differences too, and both are worth analyzing. For example, Wikipedia keeps humans in the loop, has accountability processes, has been rigorously tested by a vast audience over many years, and has a public, vetted agenda. I think it's much harder for Wikipedia to introduce bias than it is for a pre-digital encyclopedia or a non-deterministic LLM, precisely because Wikipedia has that culture and tooling.