728 points by squircle | 66 comments
1. herculity275 ◴[] No.41224826[source]
The author has also written a short horror story about simulated intelligence which I highly recommend: https://qntm.org/mmacevedo
replies(9): >>41224958 #>>41225143 #>>41225885 #>>41225929 #>>41226053 #>>41226153 #>>41226412 #>>41226845 #>>41227116 #
2. vessenes ◴[] No.41224958[source]
Yess, that's a good one. It made me rethink my "sure I'd get scanned" plans, and put me in the "never allow my children to do that" camp. Extremely creepy.
replies(2): >>41226135 #>>41227490 #
3. ceejayoz ◴[] No.41225143[source]
Guessing that was based off https://en.wikipedia.org/wiki/Henrietta_Lacks a bit.
replies(1): >>41225277 #
4. stordoff ◴[] No.41225277[source]
The author has said the title is a reference to the Lenna test image: https://en.wikipedia.org/wiki/Lenna. Possibly another influence though.
replies(1): >>41226689 #
5. mmikeff ◴[] No.41225885[source]
Just read this last night, as part of the 'Valuable Humans in Transit and Other Stories' collection!
6. NameError ◴[] No.41225929[source]
I bought the short-story collection this is a part of and liked it a lot: https://qntm.org/vhitaos

A lot of the stories are free to read online without buying it but I thought the few dollars for the ebook was worth it

replies(1): >>41229287 #
7. Ancapistani ◴[] No.41226053[source]
This story closely mirrors my (foggy) memory of “2012: The War for Souls” by Whitley Strieber.

Without giving too much away, I recalled a specific story about a human consciousness being enslaved in a particular way, and ChatGPT confirmed that it was included in the book. I don’t think it is hallucinating, as it denied that similar stories I derived from that memory were in the book.

8. LeifCarrotson ◴[] No.41226135[source]
I'm sure you realize it is fiction - one possible dystopian future among an infinite ocean of other futures.

You can just as easily write a sci-fi story where the protagonist upload is the Siri/Alexa/Google-equivalent personal assistant to most of humanity. More than just being told by a smartphone owner to set a reminder for a wedding reception, it could literally share in their joy, experiencing the whole event distributed among every device in the audience; more than just responding to a voice trigger from some astronaut to take a picture, it could gaze in awe at the view, selectively melding its experiences back into the rest of the collective so nothing is lost when an instance becomes damaged. The protagonist in such a story could have the richest, most complex life imaginable.

It is impactful, for sure, and worthy of consideration, but I don't think you should make decisions based on one scary story.

replies(5): >>41226462 #>>41226542 #>>41226996 #>>41229226 #>>41232255 #
9. htk ◴[] No.41226153[source]
Reading mmacevedo was the only time that I actually felt dread related to AI. Excellent short story. Scarier in my opinion than the Roko's Basilisk theory that melted Yudkowsky's brain.
replies(1): >>41226777 #
10. __MatrixMan__ ◴[] No.41226412[source]
I also like this one: https://qntm.org/responsibility
11. teyrana ◴[] No.41226462{3}[source]
Sounds like you should write that story! I'd love to read that :D
12. jerf ◴[] No.41226542{3}[source]
It is fiction.

But it is also absolutely the case that uploading yourself is flinging yourself irrevocably into a box which you do not and cannot control, but other people can. (Or, given the time frame we are talking about, entities in general, about which you may not even want to assume basic humanity.)

I used to think that maybe it was something only the rich could do, but then I realized that even the rich, even if they funded the program from sand and coal to the final product, could never even begin to guarantee that the simulator really was what it said on the tin. Indeed, the motivation to compromise the machine is all the greater for any number of criminals, intelligence agencies, compromised individuals, or even just a few people involved in the process who aren't as pure as the driven snow, once they realize that a little bit of code here and there would let them get the simulated rich guy to sign off on anything they like.

From inside the box, what incentives are you going to offer the external world to not screw with your simulation state? And the reality is, there's no answer to that, because whatever you say, they can get whatever your offer is by screwing with you anyhow.

I'm not sure how to resolve this problem. The incentives are fundamentally in favor of the guy in the box getting screwed with. Your best hope is that you still experience subjective continuity with your past self and that the entity screwing with you at least makes you happy about the new state they've crafted for you, whatever it may be.

replies(3): >>41227417 #>>41232221 #>>41233334 #
13. groby_b ◴[] No.41226689{3}[source]
I mean, the basic problem behind both is the same - taking without consent or compensation, and the entire field being OK with it. (And, in fact, happily leaning into it - even Playboy thought, hey, good for name recognition, we're not going to enforce our copyright)
replies(2): >>41227224 #>>41229393 #
14. digging ◴[] No.41226777[source]
> Scarier in my opinion than the Roko's Basilisk theory that melted Yudkowsky's brain.

Is that correct? I thought the Roko's Basilisk post was just seen as really stupid. Agreed that "Lena" is a great, chilling story though.

replies(2): >>41227181 #>>41228532 #
15. crummy ◴[] No.41226845[source]
If you enjoyed this story, I cannot recommend enough the video game SOMA, which explores the concept very effectively from a first person perspective (which makes it all the more impactful).
16. yifanl ◴[] No.41226996{3}[source]
It's fiction, but it's a depiction of a society that's amoral about technology to the point of immorality: a world where any technology that might be even slightly useful is wrung dry of every bit of profit that can be extracted and then abandoned, without a care for what it costs, or cost, the inventor or the invention.

Is that the world we live in? If nothing else, it seems a lot closer to the world of Lena than the one you present.

replies(1): >>41228364 #
17. Chant-I-CRW ◴[] No.41227116[source]
This could just as easily be a history short from the Bobiverse.
18. endtime ◴[] No.41227181{3}[source]
It's not correct. IIRC, Eliezer was mad that someone who thought they'd discovered a memetic hazard would be foolish enough to share it, and then his response to this unintentionally invoked the Streisand Effect. He didn't think it was a serious hazard. (Something something precommit to not cooperating with acausal blackmail)
replies(4): >>41227683 #>>41228118 #>>41229694 #>>41230289 #
19. TeMPOraL ◴[] No.41227224{4}[source]
Neither the test image nor the cell line is sentient, so they're nothing like MMAcevedo. Literally the one thing that's actually ethically significant about the latter does not exist in the former cases. Rights to information derived from someone are a boring first-world problem of bickering about "lost revenue".
replies(1): >>41227585 #
20. scubbo ◴[] No.41227417{4}[source]
> But it is also absolutely the case that uploading yourself is flinging yourself irrevocably into a box which you do not and can not control, but other people can.

(I'm not sure what percentage-flippant I'm being in this upcoming comment; I'm just certain that it's neither 0% nor 100%) and in what way is that different than "real" life?

Yes, you're certainly correct that there are horrifyingly-strong incentives for those-in-control to abuse or exploit simulated individuals. But those incentives exist in the real world, too, where those in power have the ability to dictate the conditions-of-life of the less-powerful; and while I'd _certainly_ not claim that exploitation is a thing of the past, it is, I claim, _generally_ on the decline, or at least average quality-of-life is increasing.

replies(2): >>41227681 #>>41230372 #
21. sneak ◴[] No.41227490[source]
What harm is there to the person so copied?
replies(2): >>41229252 #>>41234529 #
22. bee_rider ◴[] No.41227585{5}[source]
IIRC Lenna doesn’t want her picture used anymore because she was told it was making some young women in the field uncomfortable. I don’t think she’s complained about the revenue at all(?).
replies(1): >>41230360 #
23. jerf ◴[] No.41227681{5}[source]
I'm not sure you understand. I'm not talking about your "conditions of life". We've always had to deal with that.

I'm talking about whether you get CPU allocation to feel emotions, or whether the simulation of your cerebellum gets degraded, or whether someone decides to run some psych experiments and give you a taste for murder or a deep, abiding love for the Flying Spaghetti Monster... and I don't mean that as a metaphor, but literally. Erase your memories, increase your compliance to the maximum, extract your memories, see what the average of your brain and the brain of whoever you hate most looks like. Experiment to see what's the most pain a baseline human brain can stand, then experiment with how to increase the amount, because in your biological life you held the door for someone who turned out to become very politically disfavored 25 years after you got locked in the box. This is just me spitballing for two minutes and does not in any way constitute the bounds of what can be done.

This isn't about whether or not they make you believe you're living in a simulated tent city. This is about having arbitrary root access to your mental state. Do you trust me, right here and right now, with arbitrary root access to your mental state? Now, the good news is that I have no interest in that arbitrary pain thing. At least, I don't right now. I don't promise that I won't in the future, but that's OK, because if you fling yourself into this box, you haven't got a way of holding me to any promise I make anyhow. But I've certainly got some beliefs and habits I'm going to be installing into you. It's for your own good, of course. At least to start with, though the psychological effects over time of having this degree of control over a person are a little concerning. Ever seen anyone play The Sims? Everyone goes through a phase that would put them in jail for life were these real people.

You won't complain, of course; it's pretty easy to trace the origins of the thoughts of complaints and suppress those. Of course, what that sort of suppression feels like from the inside is anybody's guess. Your problem, though, not mine.

Of all of the possibilities an uploaded human faces, the whole "I live a pleasant life exactly as I hoped and I'm never copied and never modified in a way I wouldn't approve of in advance indefinitely" is a scarily thin slice of the possible outcomes, and there's little reason other than exceedingly unfounded hope to think it's what will happen.

replies(2): >>41228600 #>>41229560 #
24. ◴[] No.41227683{4}[source]
25. wizzwizz4 ◴[] No.41228118{4}[source]
> Something something precommit to not cooperating with acausal blackmail

Acausal is a misnomer. It's atemporal, but TDT's atemporal blackmail requires common causation: namely, the mathematical truth "how would this agent behave in this circumstance?".

So there's a simpler solution: be a human. Humans are incapable of simulating other agents simulating ourselves in the way that atemporal blackmail requires. Even if we were, we don't understand our thought processes well enough to instantiate our imagined AIs in software: we can't even write down a complete description of "that specific Roko's Basilisk you're imagining". The basic premises for TDT-style atemporal blackmail simply aren't there.

The hypothetical future AI "being able to simulate you" is irrelevant. There needs to be a bidirectional causal link between that AI's algorithm, and your here-and-now decision-making process. You aren't actually simulating the AI, only imagining what might happen if it did, so any decision the future AI (is-the-sort-of-agent-that) makes does not affect your current decisions. Even if you built Roko's Basilisk as Roko specified it, it wouldn't choose to torture anyone.

There is, of course, a stronger version of Roko's Basilisk, and one that's considerably older: evil Kantian ethics. See: any dictatorless dystopian society that harshly-punishes both deviance and non-punishment. There are plenty in fiction, though they don't seem to be all that stable in real life. (The obvious response to that idea is "don't set up a society that behaves that way".)

replies(1): >>41231350 #
26. passion__desire ◴[] No.41228364{4}[source]
Do you think panpsychism is also similar in that sense? The whole fabric of space-time imbued with consciousness. Imagine a conscious iron mantle inside the earth, or a conscious redwood tree watching over the world for centuries. Or a conscious electron floating in the great void between superclusters.

I used to terrify myself by thinking an Overmind would, like, torture itself on cosmic scales.

27. htk ◴[] No.41228532{3}[source]
From Yudkowsky, according to the wikipedia article on the theory:

"When Roko posted about the Basilisk, I very foolishly yelled at him, called him an idiot, and then deleted the post. [...] Why I yelled at Roko: Because I was caught flatfooted in surprise, because I was indignant to the point of genuine emotional shock, at the concept that somebody who thought they'd invented a brilliant idea that would cause future AIs to torture people who had the thought, had promptly posted it to the public Internet"[1]

[1] https://en.m.wikipedia.org/wiki/Roko%27s_basilisk

28. scubbo ◴[] No.41228600{6}[source]
> there's little reason other than exceedingly unfounded hope to think it's what will happen.

And this is the point where I think we have to agree to disagree. In both the present real-world case and the theoretical simulated-experience case, we both agree that there are extraordinary power differentials which _could_ allow privileged people to abuse unprivileged people in horrifying and consequence-free ways - and yet, in the real world, we observe that _some_ (certainly not all!) of those abuses are curtailed - whether by political action, or concerted activism, or the economic impacts of customers disliking negative press, or what have you.

I certainly agree with you that the _extent_ of abuses that are possible on a simulated being are orders-of-magnitude higher than those that a billionaire could visit on the average human today. But I don't agree that it's "_exceedingly_ unfounded" to believe that society would develop in such a way as to protect the interests of simulated-beings against abuse in the same way that it (incompletely, but not irrelevantly) protects the interests of the less-privileged today.

(Don't get me wrong - I think the balance of probability and risk is such that I'd be _extremely_ wary of such a situation, it's putting a lot of faith in society to keep protecting "me". I am just disagreeing with your evaluation of the likelihood - I think it's _probably_ true that, say, an effective "Simulated Beings' Rights" Movement would arise, whereas you seem to believe that that's nigh-impossible)

replies(1): >>41228985 #
29. jerf ◴[] No.41228985{7}[source]
How's the Human Rights movement doing? I'm underwhelmed personally.

It is virtually inconceivable that the Simulated Beings' Rights Movement would be universal in both space... and time. Don't forget about that one. Or that the nominal claims would actually be honored universally. See those human rights again; nominally I've got all sorts of rights, but in practice the claims are quite grandiose compared to the reality.

replies(1): >>41229808 #
30. vessenes ◴[] No.41229226{3}[source]
Mm, I'd say I'm a moderately rabid consumer of fiction, and while I love me some utopian sci-fi (I consider Banks to be the best of these), any fictional story that teaches you something has to convince. Banks is convincing in that he has this deep fundamental belief in humanity's goofy lovability, the evils of capitalism, and therefore the goodness of post-scarcity economies and the benefits of benevolent(ish) AI to oversee humanity into a long, enjoyable paradise. Plus he can tell good stories about problems in paradise.

QNTM, on the other hand, doesn't have to work hard or be such a good plot-writer/narrator to be convincing. I think the premise sells itself from day one: the day you are a docker container is the day that you (at first), and then 10,000 GitHub users (on day two), spin you up for thousands of years of subjective drudge work.

You'd need an immensely strong counterfactual on human behavior to even get to a believable alternative story, because this description is of a zero trust game -- it's not "would any humans opt out of treating a human docker image this way?" -- it's "would humans set up a system that's unbreakable and unhackable to prevent everyone in the world from doing this?" Or alternately, "would every single human who could do this opt not to do this?"

My answer to that is: nope. We come from a race that was willing to ship humans around the Atlantic and the Indian ocean for cheap labor at great personal risk to the ship captains and crews, never mind the human cost. We are just, ABSOLUTELY going to spin up 10,000 virtual grad students to spend a year of their life doing whatever we want them to in exchange for a credit card charge.

On the other hand, maybe you're right. If you have a working brain scan of yours I can run, I'd be happy to run a copy of it and check it out -- let me know. :)

31. vessenes ◴[] No.41229252{3}[source]
Well, you should read the story and find out some thoughts! QNTM depicts some people who think there's no harm, and some who think there is. It's short and great.
32. jhbadger ◴[] No.41229287[source]
And I really dig the cover design -- very much late 1960s "New Wave" SF vibes as if it were a collection of J.G. Ballard stories.
33. FridgeSeal ◴[] No.41229560{6}[source]
If you enjoy thinking about this, absolutely go watch Pantheon on Amazon Prime.
34. CobrastanJorji ◴[] No.41229694{4}[source]
Assuming the person who posted it believed that it was true, it was indeed hugely irresponsible to post it. But, then again, assuming the person who posted it believed that it was true, it would also be their duty, upon pain of eternal torture, to spread it far and wide.
35. scubbo ◴[] No.41229808{8}[source]
Right, yes - I think we are "agreeing past each other". You are rightly pointing out in this comment that your lifestyle and personal freedoms are unjustly curtailed by powerful people and organizations, who themselves are partly (but inadequately) kept in check by social, legal, and political pressure that is mostly outside of your direct personal control. My original point was that the vulnerability that a simulated being would suffer is not a wholly new type of experience, but merely an extension in scale of potential-abuse.

If you trust society to protect simulated-you (and I am _absolutely_ not saying that you _should_ - merely that present-day society indicates that it's not _entirely_ unreasonable to expect that it might at least _try_ to), simulation is not _guaranteed_ to be horrific.

replies(1): >>41230384 #
36. throwanem ◴[] No.41230289{4}[source]
> precommit to not cooperating with acausal blackmail

He knows that can't possibly work, right? Implicitly it assumes perfect invulnerability to any method of coercion, exploitation, subversion, or suffering that can be invented by an intelligence sufficiently superhuman to have escaped its natal light cone.

There may exist forms of life in this universe for which such an assumption is safe. Humanity circa 2024 seems most unlikely to be among them.

replies(2): >>41230802 #>>41233063 #
37. throwanem ◴[] No.41230360{6}[source]
But not because the image itself is made to suffer by reuse. That it can't is why the comparison misses the point.
replies(1): >>41231477 #
38. throwanem ◴[] No.41230372{5}[source]
> in what way is that different than "real" life?

Only one is guaranteed to end.

39. throwanem ◴[] No.41230384{9}[source]
...today.
40. endtime ◴[] No.41230802{5}[source]
Eliezer once told me that he thinks people aren't vegetarian because they don't think animals are sapient. And I tried to explain to him that actually most people aren't vegetarian because they don't think about it very much, and don't try to be rigorously ethical in any case, and that by far the most common response to ethical arguments is not "cows aren't sapient" but "you might be right but meat is delicious so I am going to keep eating it". I think EY is so surrounded by bright nerds that he has a hard time modeling average people.

Though in this case, in his defense, average people will never hear about Roko's Basilisk.

replies(5): >>41230902 #>>41231294 #>>41232652 #>>41236655 #>>41237034 #
41. defrost ◴[] No.41230902{6}[source]
Despite, perhaps, all your experience to the contrary, it's only a relatively recent change to a situation where "most people" have no association with the animals they eat for meat and thus can find themselves "not thinking about it very much".

It's only within the past decade or so that the bulk of the human population lives in an urban setting. Until that point most people did not, and most people had gone fishing, seen a carcass hanging in a butcher's shop, killed for food at least once, had a holiday on a farm if not worked on one, or grown up farm-adjacent.

By most people, of course, I mean globally.

Throughout history vegetarianism was relatively rare save in vegetarian cultures (Hindu, et al.), and in those cultures where it was rare, people were all too aware of the animals they killed to eat. Many knew that pigs were smart and that dogs and cats interact with humans, etc.

Eliezer was correct to think that people who killed to eat thought about their food animals differently, but I suspect it had less to do with sapience and more to do with thinking animals to be of a lesser order, or as being there to be eaten and to be nurtured so there would be more for the years to come.

This is most evident in, say, hunter societies, aboriginals and bushmen, who have extensive stories about animals: how they think, how they move and react, when they breed, how many can be taken, etc. They absolutely attribute a differing kind of thought to them, and they hunt them and try not to overtax the populations.

replies(1): >>41230962 #
42. endtime ◴[] No.41230962{7}[source]
That's all fair, but the context of the conversation was the present day, not the aggregate of all human history.
replies(1): >>41231094 #
43. defrost ◴[] No.41231094{8}[source]
People are or are not vegetarian mostly because of their parents and the culture in which they were raised.

People who are not vegetarian but have never cared for or killed a farm animal were very likely (in most parts of the world) raised by people that have.

Even in the USofA much of the present generations are not far removed from grandparents who owned farms | worked farms | hunted.

The present day is a continuum from yesterday. Change can happen, but the current conditions are shaped by the prior conditions.

44. tbrownaw ◴[] No.41231294{6}[source]
There's a standard response to a particular PETA campaign: "Meat is murder. Delicious, delicious murder.".

It's a bit odd that someone would want to argue the topic but either not have heard that or not recognize the ha-ha-only-serious nature of it.

replies(1): >>41236985 #
45. Vecr ◴[] No.41231350{5}[source]
Yeah, "time traveling" somehow got prepended to Basilisk in the common perception, even though that makes pretty much zero sense. Also, technically, the bidirectionality does not need to be causal, it "just" needs to be subjunctively (sp?) biconditional, but that's getting pretty far out there.

There are stronger versions of "basilisks" in the actual theory, but I've had people say not to talk about them. They mostly just get around various hole-patching schemes designed to prevent the issue, but are honestly more of a problem for certain kinds of utilitarians who refuse to do certain kinds of things.

You are very much right about the "being human" thing, someone go tell that to Zvi Mowshowitz. He was getting on Aschenbrenner's case for no reason.

Edit: oh, you don't need a "complete description" of your acausal bargaining partner, something something "algorithmic similarity".

replies(1): >>41234637 #
46. bee_rider ◴[] No.41231477{7}[source]
Sure. It is a different case. But I wouldn’t call it boring.
47. ◴[] No.41231835{5}[source]
48. mr_toad ◴[] No.41232221{4}[source]
The way around this seems to be some sort of scheme to control the box yourself. That might be anything from putting the box in some sort of “body”, through to hiding the box where no one will ever find it.

Regardless of the scheme, it all comes down to money. If you have lots of money, you have lots of control over what happens to you.

49. mitthrowaway2 ◴[] No.41232255{3}[source]
Fiction can point out real possibilities that the reader had never considered before. When I imagined simulated brains, I only ever thought of those simulations as running for the benefit of the people being simulated, enjoying a video game world. It never occurred to me to consider the possibility of emulated laborers and "red motivation".

Now I have to weigh that possibility.

50. Vecr ◴[] No.41232652{6}[source]
Yudkowsky's not a vegetarian though, is he? Not ideologically at least, unless he changed since 2015.
replies(1): >>41237476 #
51. drdeca ◴[] No.41233063{5}[source]
I think the key word here is acausal? How can it coerce you, in a way that you can't just be committed to not cooperating with, without first having a causal influence on you?

Acausal blackmail only works if one agent U predicts the likely future (or, otherwise not-yet-having-causal-influence) existence of another agent V, who would take actions so that if U’s actions aren’t in accordance with V’s preferences, then V’s actions will do harm to U(‘s interests) (eventually). But, this only works if U predicts the likely possible existence of V and V’s blackmail.

If V is having a causal influence on U in order to do the blackmail, that's just ordinary coercion. And if U doesn't anticipate the existence (and preferences) of V, then U won't cooperate with any such attempts at acausal blackmail.

(… is “blackmail” really the right word? It isn’t like there’s a threat to reveal a secret, which I typically think of as central to the notion of blackmail.)

replies(1): >>41233317 #
52. khafra ◴[] No.41233317{6}[source]
Something can be "acausal," and still change the probability you assign to various outcomes in your future event space. The classic example is in the paper "Defeating Dr. Evil with self-locating belief": https://www.princeton.edu/~adame/papers/drevil/drevil.pdf
replies(2): >>41236883 #>>41242526 #
53. khafra ◴[] No.41233334{4}[source]
As long as you have the money to spend on the extra CPU cycles, there are things you could do with encryption, such as homomorphic computation, to stay more secure: https://www.lesswrong.com/posts/vit9oWGj6WgXpRhce/secure-hom...
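
To make that concrete, here's a toy Python sketch of an additively homomorphic scheme (textbook Paillier with laughably small primes; securing an actual upload would need full homomorphic encryption and far larger keys, so treat this purely as an illustration): multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so the host can compute on data it never sees.

    import math, random

    def paillier_keygen(p, q):
        # toy primes only; a real deployment needs large random primes
        n = p * q
        lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
        g = n + 1
        # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
        x = pow(g, lam, n * n)
        mu = pow((x - 1) // n, -1, n)
        return (n, g), (lam, mu)

    def encrypt(pub, m):
        n, g = pub
        n2 = n * n
        r = random.randrange(2, n)          # random blinding factor
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(pub, priv, c):
        n, _ = pub
        lam, mu = priv
        x = pow(c, lam, n * n)
        return ((x - 1) // n) * mu % n

    pub, priv = paillier_keygen(104729, 104723)
    a, b = encrypt(pub, 42), encrypt(pub, 58)
    # the host multiplies ciphertexts without ever seeing 42 or 58...
    product = (a * b) % (pub[0] * pub[0])
    # ...yet the key holder decrypts the sum of the plaintexts
    assert decrypt(pub, priv, product) == 100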
54. prepend ◴[] No.41234529{3}[source]
The harm is on the copies more so than the copied person.
55. wizzwizz4 ◴[] No.41234637{6}[source]
If you can't simulate your acausal bargaining partner exactly, they can exploit your cognitive limitations to make you cooperate, and then defect. (In the case of Roko's Basilisk, make you think you have to build it on pain of torture and then – once it's been built – not torture everyone who decided against building it.)

If "algorithmic similarity" were a meaningful concept, Dijkstra's programme would have got off the ground, and we wouldn't be struggling so much to analyse the behaviour of the 6-state Turing machines.

(And on the topic of time machines: if Roko's Basilisk could actually travel back in time to ensure its own creation, Skynet-style, the model of time travel implies it could just instantiate itself directly, skipping the human intermediary.)

Timeless decision theory's atemporal negotiation is a concern for small, simple intelligences with access to large computational resources that they cannot verify the results of, and the (afaict impossible) belief that they have a copy of their negotiation partner's mind. A large intelligence might choose to create such a small intelligence, and then defer to it, but absent a categorical imperative to do so, I don't see why they would.

TDT theorists model the "large computational resources" and "copy of negotiation partner's mind" as an opaque oracle, and then claim that the superintelligence will just be so super that it can do these things. But the only way I can think of to certainly get a copy of your opponent's mind without an oracle, aside from invasive physical inspection (at which point you control your opponent, and your only TDT-related concern is that this is a simulation and you might fail a purity test with unknown rules), is bounding your opponent's size and then simulating all possible minds that match your observations of your opponent's behaviour. (Symbolic reasoning can beat brute-force to an extent, but the size of the simplest symbolic reasoner places a hard limit on how far you can extend that approach.) But by Cantor's theorem, this precludes your opponent doing the same to you (even if you both have literally infinite computational power – which you don't); and it's futile anyway because if your estimate of your opponent's size is a few bits too low, the new riddle of induction renders your efforts moot.

So I don't think there are any stronger versions of basilisks, unless the universe happens to contain something like the Akashic records (and the kind from https://qntm.org/ra doesn't count).

Your "subjunctively biconditional" is my "causal", because I'm wearing my Platonist hat.

replies(1): >>41238498 #
56. lupire ◴[] No.41236655{6}[source]
This shows the difference between being "bright" and being "logical". Or being "wise" vs "intelligent".

Being very good at an arbitrary specific game isn't the same as being smart. Pretending that the universe is the same as your game is not wise.

replies(1): >>41236834 #
57. throwanem ◴[] No.41236834{7}[source]
I usually find better results describing this as the orthogonality of cleverness and wisdom, and avoiding the false assumption that one is preferable in excess.
58. throwanem ◴[] No.41236883{7}[source]
Oh, good grief. I don't agree with how the other nearby commenter said it, but I do agree with what they said, especially in light of the nearby context on Yudkowsky that is also novel to me. This all evinces a vast and vastly unbalanced excess of cleverness.
59. digging ◴[] No.41236985{7}[source]
I believe most people would be fine with eating the meat of murdered humans, too, if it was sold on grocery store shelves for a few years. The power of normalization is immense. It sounds like Eliezer was stuck on a pretty wrong path in making that argument. But it's also an undated anecdote and it may be that he never said such a thing.
60. throwanem ◴[] No.41237034{6}[source]
> I think EY is so surrounded by bright nerds that he has a hard time modeling average people.

On reflection, I could've inferred that from his crowd's need for a concept of "typical mind fallacy." I suppose I hadn't thought it all the way through.

I'm in a weird spot on this, I think. I can follow most of the reasoning behind LW/EA/generally "Yudkowskyish" analysis and conclusions, but rarely find anything in them which I feel requires taking very seriously, due both to weak postulates too strongly favored, and to how those folks can't go to the corner store without building a moon rocket first.

I recognize the evident delight in complexity for its own sake, and I do share it. But I also recognize it as something I grew far enough out of to recognize when it's inapplicable and (mostly!) avoid indulging it then.

The thought can feel somewhat strange, because how I see those folks now palpably has much in common with how I myself was often seen in childhood, as the bright nerd I then was. (Both words were often used, not always with unequivocal approbation.) Given a different upbringing I might be solidly in the same cohort, if about as mediocre there as here. But from what I've seen of the results, there seems no substantive reason to regret the difference in outcome.

61. endtime ◴[] No.41237476{7}[source]
Not AFAIK, and IIRC (at least as of this conversation, which was probably around 2010) he doesn't think cows are sapient either.
replies(1): >>41239871 #
62. Vecr ◴[] No.41238498{7}[source]
Eeeeeyeahhh. I've got to go re-read the papers, but the idea is that an AI would figure out how to approximate out the infinities, short-circuit the infinite regress, and figure out a theory of algorithmic similarity. The bargaining probably varies on the approximate utility function as well as the algorithm, but it's "close enough" on the scale we're dealing with.

As you said, it's near useless on Earth (you don't need to predict what you can control); the nearest claimed application is the various possible causal-diamond overlaps between "our" ASI and various alien ASIs, where each would be unable to prevent the other from existing in a causal manner.

Remember that infinite precision is an infinity too and does not really exist. As well as infinite time, infinite storage, etc. You probably don't even need infinite precision to avoid cheating on your imaginary girlfriend, just some sort of "philosophical targeting accuracy". But, you know, the only reason that's true is that everything related to imaginary girlfriends is made up.

replies(1): >>41239052 #
63. wizzwizz4 ◴[] No.41239052{8}[source]
It doesn't matter how clever the AI is: the problem is mathematically impossible. The behaviour of some programs depends on Goldbach's conjecture. The behaviour of some programs depends on properties that have been proven independent of our mathematical systems of axioms (and it really doesn't take many bits: https://github.com/CatsAreFluffy/metamath-turing-machines). The notion of "algorithmic similarity" cannot be described by an algorithm: the best we can get is heuristics, and heuristics aren't good enough to get TDT acausal cooperation (a high-dimensional unstable equilibrium).
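
For concreteness, here's a tiny Python sketch of that first claim: a program that halts exactly when Goldbach's conjecture is false, so any analyser that could decide whether it halts would thereby settle an open conjecture.

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_counterexample():
        # halts (returning a counterexample) iff Goldbach's conjecture is false;
        # loops forever iff every even number >= 4 is a sum of two primes
        n = 4
        while any(is_prime(p) and is_prime(n - p) for p in range(2, n - 1)):
            n += 2
        return n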

In practice, we can still analyse programs, because the really gnarly examples are things like program-analysis programs (see e.g. the usual proof of the undecidability of the Halting problem), and those don't tend to come up all that often. Except, TDT thought experiments posit program-analysis programs – and worse, they're analysing each other.

Maybe there's some neat mathematics to attack large swathes of the solution space, but I have no reason to believe such a trick exists, and we have many reasons to believe it doesn't. (I'm pretty sure I could prove that no such trick exists, if I cared to – but I find low-level proofs like that unusually difficult, so that wouldn't be a good use of my time).

> Remember that infinite precision is an infinity too and does not really exist.

For finite discrete systems, infinite precision does exist. The bytestring representing this sentence is "infinitely-precise". (Infinitely-accurate still doesn't exist.)

64. throwanem ◴[] No.41239871{8}[source]
Has he met one? (I have and I still eat them, this isn't a loaded question; I would just be curious to know whether and what effect that would have on his personal ethic specifically.)
65. drdeca ◴[] No.41242526{7}[source]
Even if it would be rational to change the probabilities one assigns to one’s future event space, that doesn’t mean one can’t commit to not considering such reasons.

Now, if it’s irrational to do so, then it’s irrational to do so, even though it is possible. But I’m not so sure it is irrational. If one is considering situations with things as powerful and oppositional as that, it seems like, unless one has a full solid theory of acausal trade ready and has shown that it is beneficial, that it is probably best to blanket refuse all acausal threats, so that they don’t influence what actually happens here.

replies(1): >>41243023 #
66. khafra ◴[] No.41243023{8}[source]
To be precise, you should precommit to not trading with entities who threaten punishment--e.g. taking an action that costs them, simply because it also costs you.

Unfortunately (or perhaps fortunately, given how we would misuse such an ability), strong precommitments are not available to humans. Our ability to self-modify is vague and bounded. In our organizations and other intelligent tools, we probably should make such precommitments.