
728 points | squircle | 1 comment | source
herculity275 ◴[] No.41224826[source]
The author has also written a short horror story about simulated intelligence which I highly recommend: https://qntm.org/mmacevedo
replies(9): >>41224958 #>>41225143 #>>41225885 #>>41225929 #>>41226053 #>>41226153 #>>41226412 #>>41226845 #>>41227116 #
htk ◴[] No.41226153[source]
Reading mmacevedo was the only time that I actually felt dread related to AI. Excellent short story. Scarier in my opinion than the Roko's Basilisk theory that melted Yudkowsky's brain.
replies(1): >>41226777 #
digging ◴[] No.41226777[source]
> Scarier in my opinion than the Roko's Basilisk theory that melted Yudkowsky's brain.

Is that correct? I thought the Roko's Basilisk post was just seen as really stupid. Agreed that "Lena" is a great, chilling story though.

replies(2): >>41227181 #>>41228532 #
endtime ◴[] No.41227181[source]
It's not correct. IIRC, Eliezer was mad that someone who thought they'd discovered a memetic hazard would be foolish enough to share it, and then his response to this unintentionally invoked the Streisand Effect. He didn't think it was a serious hazard. (Something something precommit to not cooperating with acausal blackmail)
replies(4): >>41227683 #>>41228118 #>>41229694 #>>41230289 #
throwanem ◴[] No.41230289[source]
> precommit to not cooperating with acausal blackmail

He knows that can't possibly work, right? Implicitly it assumes perfect invulnerability to any method of coercion, exploitation, subversion, or suffering that can be invented by an intelligence sufficiently superhuman to have escaped its natal light cone.

There may exist forms of life in this universe for which such an assumption is safe. Humanity circa 2024 seems most unlikely to be among them.

replies(2): >>41230802 #>>41233063 #
endtime ◴[] No.41230802[source]
Eliezer once told me that he thinks people aren't vegetarian because they don't think animals are sapient. And I tried to explain to him that actually most people aren't vegetarian because they don't think about it very much, and don't try to be rigorously ethical in any case, and that by far the most common response to ethical arguments is not "cows aren't sapient" but "you might be right but meat is delicious so I am going to keep eating it". I think EY is so surrounded by bright nerds that he has a hard time modeling average people.

Though in this case, in his defense, average people will never hear about Roko's Basilisk.

replies(5): >>41230902 #>>41231294 #>>41232652 #>>41236655 #>>41237034 #
throwanem ◴[] No.41237034[source]
> I think EY is so surrounded by bright nerds that he has a hard time modeling average people.

On reflection, I could've inferred that from his crowd's need for a concept of "typical mind fallacy." I suppose I hadn't thought it all the way through.

I'm in a weird spot on this, I think. I can follow most of the reasoning behind LW/EA/generally "Yudkowskyish" analysis and conclusions, but rarely find anything in them which I feel requires taking very seriously, due both to weak postulates too strongly favored, and to how those folks can't go to the corner store without building a moon rocket first.

I recognize the evident delight in complexity for its own sake, and I do share it. But I also recognize it as something I grew far enough out of to notice when it's inapplicable and (mostly!) avoid indulging it then.

The thought can feel somewhat strange, because how I see those folks now palpably has much in common with how I myself was often seen in childhood, as the bright nerd I then was. (Both words were often used, not always with unequivocal approbation.) Given a different upbringing I might be solidly in the same cohort, if about as mediocre there as here. But from what I've seen of the results, there seems no substantive reason to regret the difference in outcome.