AI as Normal Technology

(knightcolumbia.org)
237 points by randomwalker | 1 comment
bux93 No.43715147
"We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions"

If you read the EU AI Act, you'll see it's not really about AI at all, but about quality assurance for business processes operating at scale. (Look at pharma, where GMP rules about QA apply equally to people pipetting single-patient doses and to the mass production of ibuprofen; those rules are eerily similar to the quality system prescribed by the AI Act.)

Will a think piece like this be used to argue that regulation is bad, no matter how beneficial to the citizenry, because the regulation has 'AI' in the name, because the policy impedes someone shouting 'AI' as a buzzword, or simply because it was introduced in a present in which AI exists? Yes.

replies(1): >>43715224
randomwalker No.43715224
I appreciate the concern, but we have a whole section on policy where we are very concrete about our recommendations, and we explicitly disavow any broadly anti-regulatory argument or agenda.

The "drastic" policy interventions that that sentence refers to are ideas like banning open-source or open-weight AI — those explicitly motivated by perceived superintelligence risks.

replies(1): >>43715323
evrythingisfine No.43715323
Assuming a status quo or equilibrium with a technology that is already advancing faster than we can keep up with seems irrational to me.

Or, put another way:

https://youtu.be/0oBx7Jg4m-o

replies(3): >>43715689 >>43715890 >>43718125
randomwalker No.43715890
We do not assume a status quo or equilibrium, which will hopefully be clear upon reading the paper. That's not what normal technology means.

Part II of the paper describes one vision of what a world with advanced AI might look like, and it is quite different from the current world.

We also say in the introduction:

"The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next."

replies(2): >>43723330 >>43723599
getnormality No.43723330
This is very important. A normal process of adaptation will work for AI. We don't need catastrophism.

I was saying things along these lines in 2023-2024 on Twitter. I'm glad that someone with more influence is doing it now.