On Karma

(Epistemic status:  I might just be misusing or misunderstanding karma, and this entire concept may be trivial – still, endorsed)

We’ve all heard the phrase “What goes around comes around.”  In popular culture, this has been compressed into a loose appropriation of the concept of karma.  When we think about karmic justice, we think about the ways in which people get their just deserts.  At least in Western thought, it’s frequently rounded off to the Just World Fallacy and derided as such.  The idea of karma, however, is much more complex than that – if you think about it probabilistically, it makes perfect sense as behavioral guidance.

Actions we take often have some probability of consuming a resource, either locally or globally.  They also often have some probability of creating a resource – either the same one (transferred between the local and global scales, in either direction) or a different one, on the same scale or another.  Some of these resources are qualitative rather than quantitative.  Regardless of the object-level resource involved, it is rarely clear in advance how much will be consumed, how much will be produced, and at what scale.  Karma ties into this because the concept is fundamentally trying to incentivize actions with a higher probability of creating resources.  Positive-karma actions are those likely to increase resources.  When you help someone out in some way, you give up some of your time and energy to take on some of their burden, which has downstream effects and increases the amount of resources in “circulation”.  Sometimes, though, trying to help someone actively makes things harder for them – because you don’t fully understand the situation, or because of other factors.  It is better karma to minimize that probability, but if the probability of producing the resource was sufficiently high, the action is still good karma even when the actual outcome was negative.
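One way to make this concrete is to score an action by its *expected* resource change across possible outcomes, rather than by the realized outcome of any single attempt – which is why a well-intentioned action that backfires can still be good karma.  A minimal sketch, where the action names, probabilities, and resource values are purely illustrative assumptions:

```python
def expected_resource_delta(outcomes):
    """Expected change in resources for an action.

    outcomes: list of (probability, resource_change) pairs
    covering all possible results of the action.
    """
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * delta for p, delta in outcomes)

# Helping someone: usually lightens their burden, occasionally backfires.
helping = [(0.8, +3.0), (0.2, -1.0)]
# Taking from the commons: usually a net loss to the shared pool.
extracting = [(0.9, -2.0), (0.1, +1.0)]

print(expected_resource_delta(helping))     # positive in expectation (~2.2)
print(expected_resource_delta(extracting))  # negative in expectation (~-1.7)
```

Under this framing, the 20% of worlds where helping made things worse don’t flip its karmic sign, because the sign is a property of the distribution, not of one sample.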

On the flip side, negative-karma actions are those with a high probability of decreasing resources.  This is the kind of thing where you take from the commons to enrich yourself in some way – environmental destruction, erosion of social trust, etc.  The actual consequence of what you’ve done might produce a lot of resources while carrying a ton of externalities – this is still negative karma, because in most worlds those externalities do not get resolved, and it’s not an action that should be taken.

Now, thus far, this interpretation of karma mostly sounds like deontology in an exotic wrapper – I think where it gets interesting is how it applies not to instances of behavior but to patterns of behavior.  I’ve used individual instances and actions as examples to make the concept easier to see, but the real point of karma is not the probability at the single-action scale; it is the probability that an algorithm will enrich the world around it.  Essentially, reincarnation is the idea of putting algorithms in different bodies to see what they do – and experiences and meditation are ways of retraining that algorithm.  I don’t believe in strict reincarnation (though in a way, each moment we live that contains an instance of ourselves is a reincarnation – that instance runs an algorithm quite similar to other instances, but one that has likely undergone some changes even at the moment-to-moment level).  However, if we accept a karmic frame for these algorithm tests, it essentially asserts that an algorithm with good karma has a higher chance of being rewarded – of being run in contexts that make the algorithm “happier”.  Sometimes an elevated algorithm then starts failing its karmic tests and falls again, which is why meditation and breaking the cycle matter: they effectively optimize for algorithms with really good metaprogramming skills.  While deontology largely optimizes for right actions that work well even if everyone takes them, (this conception of) karma is a little more individualized while still optimizing for collectivism.

In practice, I think karma is essentially decision theory.  You are an algorithm that is likely to repeat actions that fulfill a reward function; given an action space, some actions will probabilistically consume more resources than others, and the iteration of those choices is what determines whether people should cooperate with you.  If you do a lot of positive-karma things, you’re probably safe to cooperate with; if you do a lot of negative-karma things, well, maybe defecting makes more sense when making decisions concerning you.
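The iterated-game reading above can be sketched directly: treat an agent’s karma as a running tally of observed resource-creating versus resource-consuming actions, and cooperate only when that pattern of behavior is net positive.  The scoring scheme and threshold here are illustrative assumptions, not any canonical decision rule:

```python
def karma_score(history):
    """history: list of +1 (resource-creating) / -1 (resource-consuming) acts."""
    return sum(history)

def should_cooperate(history, threshold=0):
    # Cooperate with agents whose observed pattern of behavior
    # suggests they will enrich rather than drain the commons.
    return karma_score(history) > threshold

print(should_cooperate([+1, +1, -1, +1]))  # True: mostly positive karma
print(should_cooperate([-1, -1, +1, -1]))  # False: maybe defect instead
```

Note that the decision keys off the whole history, not the most recent act – which matches the point that karma attaches to patterns of behavior rather than single instances.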

Overall, I find it easier to consider the “goodness” and “badness” of my choices with this frame.  Rather than trying to figure out which rule a choice follows, or trying to calculate the actual downstream personal utility every time I do something, the middle ground – considering the probability that an action will bite me or other people in the ass later – seems quicker and more likely to lead to better outcomes over a long period of time.

Discussion questions:  What are ways you have thought about karma in the past?  Does this conception seem more useful in any way?  What does it look like for you to consider the probability of resources consumed versus the probability of resources created?