
192 points beedeebeedee | 13 comments
userbinator ◴[] No.41900552[source]
I hope said intern finds a new job working for anti-AI causes.
replies(4): >>41900835 #>>41901097 #>>41901626 #>>41910243 #
1. 0xDEAFBEAD ◴[] No.41900835[source]
Are there a lot of anti-AI organizations at this point? PauseAI is the main one I'm familiar with:

https://pauseai.info/

One thing I suspect investors in e.g. OpenAI are failing to price in is the political and regulatory headwinds OpenAI will face if their fantastical revenue projections actually materialize. A world where OpenAI is making $100B in annual revenue will likely be a world where technological unemployment looms quite clearly. Polls already show strong support for regulating AI.

replies(4): >>41901104 #>>41901132 #>>41901479 #>>41902010 #
2. bawolff ◴[] No.41901104[source]
Regulation is not necessarily bad for the market leader.
3. jazzyjackson ◴[] No.41901132[source]
The Amish?

I'm trying to think of whether it'd be worth starting some kind of semi-Luddite community where we can use digital technology, photos, radios, spreadsheets and all, but the line is around 2014, when computers still did the same thing every time. That's my biggest gripe with AI, the nondeterminism, the non-repeatability making it all undebuggable, impossible to interrogate and reason about. A computer in 2014 is complex but not incomprehensible. The mass matrix multiplication of 2024 computation is totally opaque and frankly I think there's room for a society without such black box oracles.

replies(2): >>41901272 #>>41903165 #
4. fragmede ◴[] No.41901272[source]
Why 2014? Why not 2022, when ChatGPT was released? Or 2019, for GPT-2? Why not 2005, when the first dual-core Pentium was released? After that, the two cores meant you could no longer be sure what order your program would run things in. Or why not 2012, when Intel added the RdRand instruction to x86? Or 2022, when Linux 5.17 was released with random number generation improvements? Or 1985, when IEEE 754 floating point was standardized? Before that it was all integer math, but after that, 0.1 + 0.2 = 0.30000000000000004. Not that I have any objection to 2014, I'm just wondering why you chose then.
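The IEEE 754 point above is easy to check for yourself. A minimal Python sketch (Python floats are IEEE 754 doubles) showing that 0.1 and 0.2 have no exact binary representation, so their sum isn't 0.3 — and that the result, while surprising, is still deterministic, though sensitive to operation order:

```python
# 0.1 and 0.2 cannot be represented exactly in binary floating point,
# so their sum is not exactly 0.3.
a = 0.1 + 0.2
print(a)            # 0.30000000000000004
print(a == 0.3)     # False

# The result is still deterministic: the same operands in the same
# order produce the same bits every run. But floating-point addition
# is not associative, so reordering changes the answer:
print(0.1 + 0.2 + 0.3 == 0.3 + 0.2 + 0.1)  # False
```

This is the sense in which 1985 is a candidate cutoff: arithmetic stopped matching grade-school expectations, but it stayed repeatable.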
replies(1): >>41901966 #
5. sadeshmukh ◴[] No.41901479[source]
Regulation supports the big players. See SB 1047 in California and read the first few lines: > comply with various requirements, including implementing the capability to promptly enact a full shutdown, as defined, and implement a written and separate safety and security protocol, as specified

That absolutely kills open source, and it's disguised as a "safety" bill where safety means absolutely nothing (how are you "shutting down" an LLM?). There's a reason Anthropic was championing it even though it evidently regulates AI.

replies(1): >>41901722 #
6. 0xDEAFBEAD ◴[] No.41901722[source]
>That absolutely kills open source

Zvi says this claim is false: https://thezvi.substack.com/p/guide-to-sb-1047?open=false#%C...

>how are you "shutting down" an LLM?

Pull the plug on the server? Seems like it's just about having a protocol in place to make that easy in case of an emergency. Doesn't seem that onerous.

replies(2): >>41904084 #>>41911494 #
7. jazzyjackson ◴[] No.41901966{3}[source]
If I was really picky I would stop the clock in the 8bit era or at least well before speculative execution / branch prediction, but I do want to leave some room for pragmatism.

2014 is when I became aware of gradient descent and how entropy was used to search more effectively, leading to different runs of the same program arriving at different results. Deep Dream came soon after, and it's been downhill from there.

If I were to write some regulations for what was allowed in my computing community, I would make an exception for using PRNGs for scientific simulation and cryptographic purposes, but I would definitely draw the line at using heuristics to find optimal solutions. Slide rules got us to the moon, and that's good enough for me.
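The PRNG exception is internally consistent: a pseudorandom generator is fully deterministic given its seed, so a simulation that uses one remains repeatable and debuggable. A small Python sketch (the `simulate` function is a toy stand-in for a real Monte Carlo simulation):

```python
import random

def simulate(seed):
    """Toy Monte Carlo run with an explicitly seeded generator.
    The same seed yields bit-identical results on every run."""
    rng = random.Random(seed)  # independent generator, not global state
    return [rng.random() for _ in range(3)]

# Repeatable: two runs with the same seed agree exactly.
assert simulate(42) == simulate(42)

# Different seeds explore different samples.
assert simulate(42) != simulate(43)
```

This is the line being drawn: seeded pseudorandomness you can replay and interrogate, versus opaque learned heuristics you cannot.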

8. pjc50 ◴[] No.41902010[source]
SAG-AFTRA are currently on strike over the issue of unauthorized voice cloning.

The AI advocates actively advertised AI as a tool for replacing creatives, including plagiarizing their work, and copying the appearance and voices of individuals. It's not really surprising that everyone in the creative industries is going to use what little power they have to avoid this doomsday scenario.

9. 542458 ◴[] No.41903165[source]
Fwiw, the Amish aren't Luddites; they're not anti-technology in all facets of life. You'll see Amish folks using power tools, cellphones, computers, etc. in their professional lives or outside the context of their homes (exact standards vary by community). There are even multiple companies that manufacture computers specifically for the Amish. So there's no reason an Amish business couldn't use AI.
replies(1): >>41905760 #
10. Tostino ◴[] No.41904084{3}[source]
Which server? The one you have no idea about because you released your weights and anyone can download/use them at that point?
11. 0xDEAFBEAD ◴[] No.41905760{3}[source]
Don't they have a process for determining whether new technology should be integrated into their lives?
replies(1): >>41909398 #
12. 542458 ◴[] No.41909398{4}[source]
Yes, the exact process varies by community but it generally involves church elders meeting to discuss whether a new technology is likely to benefit or harm family, community and spiritual life.
13. sadeshmukh ◴[] No.41911494{3}[source]
To be fair, I don't really agree with the Terminator-esque concept of AI "safety" that a lot of people seem to propagate. Safety is always in usage, and the cat's already out of the bag. I just don't know what harm they're trying to prevent, anyway.