
Playing in the Creek

(www.hgreer.com)
346 points by c1ccccc1 | 6 comments
1. MrBuddyCasino ◴[] No.43651572[source]
That was a well-written essay with a non-sequitur AI Safety thing tacked onto the end. His real-world examples were concrete, and the reasons to stop escalating were easy to understand ("don't flood the neighbourhood by building a real dam").

The AI angle is not even hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".

There is no argument there that is sounder than "the high velocities of steam locomotives might kill you" that people made 200 years ago.

replies(2): >>43651756 #>>43652038 #
2. luc4sdreyer ◴[] No.43651756[source]
> the high velocities of steam locomotives might kill you

This obviously seems silly in hindsight. Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk. Just people being good Bayesian agents, trying to ride the middle of the exploration vs. exploitation curve.

Maybe it makes sense to spend some percentage of AI development resources on trying to understand how they work, and how they can fail.

replies(2): >>43652249 #>>43652661 #
3. iNic ◴[] No.43652038[source]
The progress-care trade-off is a difficult one to navigate, and is clearly more important with AI. I've seen people draw analogies to companies, which have often caused harm in pursuit of greater profits, both purposefully and simply as byproducts: oil spills, overmedication, pollution, ecological damage, bad labor conditions, hazardous materials, mass lead poisoning. Of course, the profit-seeking company is one of the best inventions humans have ever made, but that doesn't mean we shouldn't take "corp safety" seriously. We pass various laws on how corps can operate and what they can and cannot do to limit harms and _align_ them with the goals of society.

So it is with AI. Except corps are made of people who work at human speeds, have vague morals, and are tied to society in ways AI might not be. AI might also be able to operate faster and with less error. So extra care is required.

4. ripe ◴[] No.43652249[source]
> This [steam locomotives might kill you] obviously seems silly in hindsight.

To be fair, many people did die on level crossings and by wandering onto the tracks.

We learned over time to put in place safety fences and tunnels.

replies(1): >>43653372 #
5. MrBuddyCasino ◴[] No.43652661[source]
> Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk.

In the case of asbestos, this is incorrect. Many people knew it was deadly, but the corporations selling it hid it for decades, killing thousands of people. There are quite a few other examples besides asbestos, like leaded fuel or cigarettes.

6. Gracana ◴[] No.43653372{3}[source]
People thought that the speed itself was dangerous, that the wind and vibration and landscape screaming by at 25mph would cause physical and mental harm.