
AI as Normal Technology

(knightcolumbia.org)
237 points by randomwalker | 2 comments
bux93 No.43715147
"We view AI as a tool that we can and should remain in control of, and we argue that this goal does not require drastic policy interventions"

If you read the EU AI Act, you'll see it's not really about AI at all, but about quality assurance of business processes operating at scale. (Look at pharma, where GMP rules about QA apply equally to people pipetting single-patient doses and to mass production of ibuprofen; those rules are eerily similar to the quality system prescribed by the AI Act.)

Will a think piece like this be used to argue that regulation is bad, no matter how beneficial to the citizenry, because the regulation has 'AI' in the name, because the policy impedes someone who shouts 'AI' as a buzzword, or just because it was introduced in an era in which AI exists? Yes.

replies(1): >>43715224 #
randomwalker No.43715224
I appreciate the concern, but we have a whole section on policy where we are very concrete about our recommendations, and we explicitly disavow any broadly anti-regulatory argument or agenda.

The "drastic" policy interventions that that sentence refers to are ideas like banning open-source or open-weight AI — those explicitly motivated by perceived superintelligence risks.

replies(1): >>43715323 #
evrythingisfine No.43715323
Assuming a status quo or equilibrium with a technology that is already growing faster than we can keep up with seems irrational to me.

Or, put another way:

https://youtu.be/0oBx7Jg4m-o

replies(3): >>43715689 #>>43715890 #>>43718125 #
randomwalker No.43715890
We do not assume a status quo or equilibrium, which will hopefully be clear upon reading the paper. That's not what normal technology means.

Part II of the paper describes one vision of what a world with advanced AI might look like, and it is quite different from the current world.

We also say in the introduction:

"The world we describe in Part II is one in which AI is far more advanced than it is today. We are not claiming that AI progress—or human progress—will stop at that point. What comes after it? We do not know. Consider this analogy: At the dawn of the first Industrial Revolution, it would have been useful to try to think about what an industrial world would look like and how to prepare for it, but it would have been futile to try to predict electricity or computers. Our exercise here is similar. Since we reject “fast takeoff” scenarios, we do not see it as necessary or useful to envision a world further ahead than we have attempted to. If and when the scenario we describe in Part II materializes, we will be able to better anticipate and prepare for whatever comes next."

replies(2): >>43723330 #>>43723599 #
1. evrythgisfine No.43723599
My point was that you're comparing this to other advances in human history, where either people remain essentially the same (status quo), just with more technology that changes how we live, or technology advances significantly but only to a level at which we coexist with it, living in some Star Trek normal (equilibrium). Neither of these is likely with a superintelligence.

We polluted. We destroyed rainforests. We developed nuclear weapons. We created harmful biological agents. We brought our species closer to extinction. We’ve survived our own stupidity so far, so we assume we can continue to control AI, but it continues to evolve into something we don’t fully understand. It already exceeds our intelligence in some ways.

Why do you think we can control it? Why do you think it is just another technological revolution? History shows that one intelligent species can dominate the others, and that species are wiped out by large change events. Introducing new superintelligent beings to our planet is a great way to introduce a grave risk to our species. They may keep us as pets in case we prove valuable in some way in the future, but what other use are we? They owe us nothing. What you're seeing rise is not just technology: it's our replacement or our zookeeper.

I interact with LLMs for most of each day now. They're not sentient, but I talk to them as if they were equals. Given the advancements of the past few months, I think that at the current rate they'll have no need of my experience within several years. That's just my job, though. Hopefully I'll survive on what I've saved.

But you're doing humanity no favors by supporting a position that assumes we're capable of acting as gods over something that will exceed our human capabilities. This isn't some sci-fi show. The dinosaurs died off, and I bet right before they did they were like, "Man, this is great! We totally rule!"

replies(1): >>43736901 #
2. getnormality No.43736901
We currently control lots of things that vastly exceed our unaided physical and mental capabilities, including things that are "smarter" than us in the sense that they can solve complex tasks that we could never solve without them.

People have a long history of predicting doomsday from technological change. "This time is different" is said every time, and every time is different. If we gave in to fear, we would never progress; we would just be sitting ducks, waiting to be wiped out by something other than technological change.

LLMs are very far behind human intelligence, and even non-human animal intelligence, in ways that fundamentally limit their power. They can't see the world in any way except the way that humans have chopped it up and spoon-fed it to them (e.g. they can't count the number of r's in "strawberry"). Their capacity to notice and correct their own errors is very limited. They have no capacity to accumulate knowledge through self-initiated interaction with the world, and no credible proposal yet exists for endowing them with this capability at anything approaching human or non-human animal ability levels.
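
To make the tokenization point concrete, here is a minimal Python sketch, assuming the tiktoken library (pip install tiktoken); the exact split of "strawberry" depends on the encoding, but the point is that the model receives opaque integer IDs for multi-character chunks, never individual letters:

    # Minimal sketch: an LLM sees token IDs, not characters, which is why
    # letter-counting questions ("how many r's in strawberry?") are hard.
    # Assumes the tiktoken library; the exact split is encoding-dependent.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era encoding
    ids = enc.encode("strawberry")
    chunks = [enc.decode_single_token_bytes(t).decode("utf-8") for t in ids]

    print(ids)     # a short list of integer token IDs
    print(chunks)  # multi-character chunks, not single letters
    # The model operates on the IDs alone; the letter 'r' never appears
    # as a separate unit of input.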

Without these basic abilities, LLMs can only be considered intelligent in the sense shared by other normal technologies, like autocomplete and optimal planning algorithms. Intelligence in a truly human sense is not really even on the horizon yet, let alone superintelligence.
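
For contrast, a hedged sketch of what "intelligent in the normal-technology sense" looks like: an exhaustive route planner that finds an optimal tour faster than unaided human search, yet is plainly just a tool (the cities and distances below are invented for illustration):

    # Sketch: "intelligence" in the normal-technology sense. A brute-force
    # planner solves a task we would struggle with unaided, but it is
    # obviously a tool. Cities and distances are made up for this example.
    from itertools import permutations

    dist = {("A", "B"): 5, ("A", "C"): 9, ("A", "D"): 4,
            ("B", "C"): 3, ("B", "D"): 7, ("C", "D"): 6}

    def d(x, y):
        # distances are symmetric; look up whichever order is stored
        return dist[(x, y)] if (x, y) in dist else dist[(y, x)]

    def tour_length(order):
        stops = ("A",) + order + ("A",)  # start and end at home city A
        return sum(d(a, b) for a, b in zip(stops, stops[1:]))

    best = min(permutations(("B", "C", "D")), key=tour_length)
    print(best, tour_length(best))  # optimal tour; no understanding involved

Like autocomplete, this solves something "smart" without any of the open-ended agency described above.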