
AI as Normal Technology

(knightcolumbia.org)
237 points by randomwalker | 1 comment
xpe No.43717165
> The statement “AI is normal technology” is three things: a description of current AI, a prediction about the foreseeable future of AI, and a prescription about how we should treat it.

A question for the author(s), at least one of whom is participating in the discussion (thanks!): Why try to lump together description, prediction, and prescription under the "normal" adjective?

Discussing AI is fraught. My claim: conflating those three under the "normal" label seems likely to backfire and lead to unnecessary confusion. Why not instead keep these separate?

My main objection is this: it locks in a narrative that tries to neatly fuse description, prediction, and prescription. I recoil at this; it feels like an unnecessary coupling. Better to stay fluid rather than commit to a single narrative. The field is changing so fast that description alone is very challenging, and predictions should update on new information, including how we frame the problem and how our values evolve.

A little bit about my POV in case it gives useful context: I've found the authors (Narayanan and Kapoor) to be quite level-headed and sane w.r.t. AI discussions, unlike many others. Gary Marcus is one counterexample; I find it hard to pin him down on the actual form of his arguments or on concrete predictions. His pieces often read as rants without a clear logical backbone (at least over the year or so I've been reading his work).

replies(2): >>43718651 >>43720435
1. randomwalker No.43720435
Thanks for the comment! I agree that it's important to remain fluid. We've taken steps to make sure that, predictively speaking, the normal technology worldview is empirically testable. Some of those empirical claims are in this paper, and others are coming in follow-ups. We are committed to revising our thinking if it turns out that our framework doesn't generate good predictions and effective prescriptions.

We do try to admit it when we get things wrong. One example is our past view (that we have since repudiated) that worrying about superintelligence distracts from more immediate harms.