
614 points by nickthegreek | 1 comment
trinsic2 ◴[] No.39122579[source]
Based on everything I am hearing about all the harmful uses this tech could have on society, I'm wondering if this situation is alarming enough to warrant an inquiry of some kind to determine what's going on behind the scenes.

It seems like this situation is serious enough that we cannot let this kind of work be privatized.

Not interested in entertaining all the "this is the norm" arguments; that's just an attempt to get people to normalize this behavior.

Does anyone know if the Center for AI Safety is acting for the public good, and is this on their radar?

replies(3): >>39122627 #>>39123186 #>>39123521 #
JumpCrisscross ◴[] No.39122627[source]
> wondering if this situation is alarming enough to warrant an inquiry of some kind to determine whats going on behind the scenes

OpenAI is making people rich and America look good, all while not doing anything obviously harmful to the public interest. They’re not a juicy target for anyone in the public sphere. If any one of those changes, OpenAI and possibly its leadership are in extremely hot water with the authorities.

replies(2): >>39122752 #>>39123545 #
trinsic2 ◴[] No.39122752[source]
> all while not doing anything obviously harmful to the public interest.

Yeah, gonna have to challenge that:

1. We don't really know if what they are doing is harming the public interest, because we don't have access to much information about what's happening behind the scenes.

2. And there is enough information about this tech to suggest it could cause systemic damage to society if it's not correctly controlled.

replies(3): >>39122763 #>>39122807 #>>39124666 #
JumpCrisscross ◴[] No.39122763[source]
> We don't really know if what they are doing is harming the public interest

That’s potential harm, not demonstrated harm.

> there is enough information about this tech to suggest it could cause systemic damage

Far from established. Hypothetically harmful. Obvious harm would need to be present and provable. (Otherwise, it’s a political question.)

replies(1): >>39124364 #
cj ◴[] No.39124364[source]
You could say the same thing about a hyper-targeted ad platform optimized for political use cases (Cambridge Analytica) before they were outed.

I think the point is that it would be good to investigate future hypothetical harm before it becomes present and provable, at which point it's too late.

replies(1): >>39126067 #
JumpCrisscross ◴[] No.39126067[source]
> it would be good to investigate future hypothetical harm

Sure. That’s why we have the fourth estate. We don’t have anything close to what it would take to launch inquiries.