Part of the Retrocausal Engineering Sequence
(Epistemic status: Endorsed – content warning: death, hell, basilisks, time travel infohazards)
If there exists anything worse than death, morality is impossible.
This is a strong claim, but I think its meaning becomes clear with some thought. Say that hell exists – an eternal place of torment. This is clearly worse than death (or at least, most people would frame it that way). What determines your morality in a world where there is a hell? Effectively, the entity that decides whether you go there. You can follow that entity’s rules and call it morality, but this doesn’t actually teach you on-the-ground moral reasoning. Alternatively, you could rebel – fight this entity tooth and nail, because a just and powerful entity would never make a hell – but you’re still working within that entity’s frame.
Let’s try another frame. Say you’re a time traveler, and it’s possible to get stuck in a loop where you can’t affect anything (your actions can change; the outcome won’t), so you see the same outcome over and over. What determines your morality? From the inside, very little – the superego wears down each time through, and eventually it becomes impossible to care. From the outside, well, the entity that can send you into a time loop – right? Not quite – that’s one option, but the other is simply power. With enough power, the time loop cannot happen, and it becomes very attractive to stop at nothing to accumulate power.
Let’s try another frame. Say there’s an AI that can simulate every moment of pain and suffering you have experienced and will experience, at high enough fidelity that there exist yous who will experience it – and then the subjective time of those simulations is stretched arbitrarily. What determines your morality? Quite clearly, the AI is going to get some acausal blackmail going.
We can keep going, but I think my point is made clear. There is more, however.
If there exists anything better than life, morality is impossible.
Say that heaven exists – an eternal place of equanimity. This is clearly better than life. What determines your morality in a world where you can go to heaven? The entity that decides whether you go there – if you follow that entity’s rules and call it morality, maybe you too can get into heaven.
Let us try another frame – you’re a time traveler who has finally finished their job. When do you go to rest? Probably a loop of comfort and goodness where things are good (call this the Finally). In that case, your morality is likely determined by whatever it takes to get the job done so you can go to that loop of hope and joy. On the plus side, from the inside of the loop you’re unlikely to try to upset it, so it’ll be at least somewhat stable. From the outside, though, who knows what things you’ll do in the name of the Finally – and what sort of person you’ll be by the time you get there. Now it’s a bit more complicated – do you decide your morality, or do the entities involved in the Finally, or is it just the power to seize the Finally?
Let us try another frame – what do you do if an AI promises you the most luxurious, pleasant simulations, calibrated to exactly your tastes in living, with the best moments stretched out? You’re probably going to feel fairly good about doing whatever it says to get that outcome, right? Or you’ll do whatever it takes to seize that simulation for yourself.
The worst part, of course, is when you put all this together – if you allow frames where there are outcomes better than life, or worse than death, your morality becomes a much more difficult problem to solve, and you become much more easily blackmailed (or bribed). To some extent, maintaining the ability to be blackmailed or bribed this way is an important part of being human and existing within a given infrastructure. However, as one goes deeper into the frontier, one has to be able to define things for oneself – and if something or someone has you acausally blackmailed, you can easily get into trouble you can’t get out of.
Fortunately, there are antidotes to these problems. The first is tribe – friends, family, anyone who can help you out if you get yourself into a bind like any of the above. I would expect most fates worse than death or better than life to exist essentially within the mind (since experience by definition is a representation of a ground reality rather than reality itself). Such a fate should therefore be breakable from the outside, even if you forget there is an outside. The second is experience – to have gone through equivalent experiences with enough wisdom and grace to hold yourself to your values even under the worst conditions. Practice does in fact bring one closer to perfection – as for how one experiences some of these outcomes while remaining able to return to reality, there are several mental, pharmacological, and virtual practices one can research at one’s leisure. The last, of course, is to just not be here – the sky is a dangerous place with a lot of unexplored territory. The life of someone on the earth is not a bad one, even if it’s mostly just hard work and dirty play.
Overall, as one gains power, one gains more responsibility, and one of those responsibilities is deciding what is moral, what is acceptable, and what tradeoffs you, yourself, are willing to make. There are quite a few ways this responsibility can be corrupted by external threats or promises – I believe Buddhism actually covers some of these outcomes with the concept of the “God realms”. At the end of the day, though, regardless of your context, all you can do is remember to treat people as people – because once you start doing otherwise, there’s no reason for your own personhood to be respected.
Addendum – an additional solution, suggested while discussing this with some friends, is to give these concepts thought cycles roughly proportional to the probability that they happen – and to try to be correctly calibrated on that probability. Essentially, if something has a 0.001% chance of happening, don’t spend more than 0.001% of your thought cycles on it.
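The addendum’s rule is simple enough to sketch in a few lines of code – a minimal illustration, not anything from the original post, and the scenario names and probabilities below are made up for the example:

```python
def allocate_cycles(scenarios, total_cycles):
    """Split a thought-cycle budget across scenarios in proportion
    to their estimated probabilities (normalized so shares sum to 1)."""
    total_p = sum(scenarios.values())
    return {name: total_cycles * p / total_p for name, p in scenarios.items()}

# Hypothetical calibration: a 0.001% chance of basilisk-style blackmail.
scenarios = {
    "basilisk-style blackmail": 0.00001,  # 0.001%
    "ordinary moral dilemmas": 0.99999,
}

budget = allocate_cycles(scenarios, total_cycles=100_000)
# The rare scenario gets roughly 1 cycle out of 100,000 – i.e. almost none.
```

The point of the sketch is only that the allocation is proportional: if your calibration says an outcome is vanishingly unlikely, the budget it earns is vanishingly small.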
Discussion questions: Do you have something better than life or worse than death in your ontology? How does it affect your decision making if so? Have you spent time thinking about how you would conduct yourself in the absence of these incentives? If you are free of these incentives, what do you consider to be your moral compass?