
192 points beedeebeedee | 3 comments
userbinator ◴[] No.41900552[source]
I hope said intern finds a new job working for anti-AI causes.
replies(4): >>41900835 #>>41901097 #>>41901626 #>>41910243 #
0xDEAFBEAD ◴[] No.41900835[source]
Are there a lot of anti-AI organizations at this point? PauseAI is the main one I'm familiar with:

https://pauseai.info/

One thing I suspect investors in e.g. OpenAI are failing to price in is the political and regulatory headwinds OpenAI will face if their fantastical revenue projections actually materialize. A world where OpenAI is making $100B in annual revenue will likely be a world where technological unemployment looms quite clearly. Polls already show strong support for regulating AI.

replies(4): >>41901104 #>>41901132 #>>41901479 #>>41902010 #
sadeshmukh ◴[] No.41901479[source]
Regulation supports the big players. See SB 1047 in California and read the first few lines: > comply with various requirements, including implementing the capability to promptly enact a full shutdown, as defined, and implement a written and separate safety and security protocol, as specified

That absolutely kills open source, and it's disguised as a "safety" bill where safety means absolutely nothing (how are you "shutting down" an LLM?). There's a reason Anthropic was championing it even though the bill regulates them too.

replies(1): >>41901722 #
1. 0xDEAFBEAD ◴[] No.41901722[source]
>That absolutely kills open source

Zvi says this claim is false: https://thezvi.substack.com/p/guide-to-sb-1047?open=false#%C...

>how are you "shutting down" an LLM?

Pull the plug on the server? Seems like it's just about having a protocol in place to make that easy in case of an emergency. Doesn't seem that onerous.

replies(2): >>41904084 #>>41911494 #
2. Tostino ◴[] No.41904084[source]
Which server? The one you have no idea about because you released your weights and anyone can download/use them at that point?
3. sadeshmukh ◴[] No.41911494[source]
To be fair, I don't really agree with the concept of "safety" in AI in the whole Terminator-esque thing that is propagated by seemingly a lot of people. Safety is always in usage, and the cat's already out of the bag. I just don't know what harm they're trying to prevent anyways at all.