On Karma

(Epistemic status:  I might just be misusing/misunderstanding karma and this entire concept is trivial – still, the epistemic status is endorsed)

We’ve all heard the phrase “What goes around comes around.”  In popular culture, this has been shortened further into an appropriation of the concept of karma.  When we think about karmic justice, we think about the ways in which people get their just deserts.  At least in Western thought, karma is frequently rounded off to the Just World Fallacy and derided as such.  The idea, however, is much more complex than that – if you think about it probabilistically, it makes perfect sense as behavioral guidance.

Actions we take often have a probability of costing some resource, either locally or globally.  They also often have a probability of creating some resource – either the same one (transferring from local to global or vice versa) or a different one, on the same scale or another.  Some of these resources are qualitative rather than quantitative.  Regardless of the object-level resource involved, it is not always clear how much will be consumed, how much will be provided, and at what scale.  This ties to karma because the concept is fundamentally trying to incentivize actions with a higher probability of creating resources.  Positive karma actions are those with a probability of increasing resources.  When you help someone out in some way, you are giving up some of your time/energy to take on some of their burden, which has downstream effects and increases the amount of resources in “circulation”.  However, sometimes when you try to help someone out, you actively make things harder for them because you don’t fully understand the situation or because of other factors – it is better karma to minimize this probability, but if the probability of producing the resource was sufficiently high, the action is still good karma even if the actual outcome was negative.

On the flip side, negative karma actions are those with a high probability of decreasing resources.  This is the kind of thing where you take from the commons to enrich yourself in some way, be it through environmental destruction, erosion of social trust, etc.  It might be that the actual consequence of what you’ve done produces a lot of resources but carries a ton of externalities – this is still negative karma, because in most worlds those externalities did not get resolved, and it’s not an action that should be taken.
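To make the probabilistic framing above concrete, here is a minimal sketch in Python.  All of the function names, outcome probabilities, and resource deltas are illustrative assumptions rather than part of any existing framework: an action is modeled as a distribution over resource deltas, and its karma is judged by the expected delta rather than by whichever outcome actually occurs.

```python
import random

# Toy model of the framing above: an action is a set of possible outcomes
# (resource deltas), each with a probability.  Karma is judged by the
# expected delta, not by the outcome that happens to be realized.
# All numbers here are made up for illustration.

def expected_delta(outcomes):
    """outcomes: list of (probability, resource_delta) pairs."""
    return sum(p * delta for p, delta in outcomes)

def karma_sign(outcomes):
    return "positive" if expected_delta(outcomes) > 0 else "negative"

def realized_delta(outcomes):
    """Sample one concrete outcome, the way the world actually resolves."""
    probs, deltas = zip(*outcomes)
    return random.choices(deltas, weights=probs)[0]

# Helping someone: usually frees up resources, occasionally backfires.
helping = [(0.8, +3), (0.2, -1)]
# Taking from the commons: pays off for you, but usually leaves externalities.
exploiting = [(0.3, +5), (0.7, -4)]

print(karma_sign(helping), expected_delta(helping))        # positive, ~2.2
print(karma_sign(exploiting), expected_delta(exploiting))  # negative, ~-1.3
print(realized_delta(helping))  # may come out -1: bad outcome, still good karma
```

The point of the toy model is just the asymmetry: “helping” can realize a negative delta in some particular world and still be positive karma, while “exploiting” can happen to pay off and still be negative karma.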

Now, thus far, this interpretation of karma mostly sounds like deontology in an exotic wrapper – I think where it gets interesting is how it applies not to instances of behavior but to patterns of behavior.  I’ve used instances and actions as examples to make the concept easier to see, but the real point of karma is not the probability at the single-action scale, it is the probability that an algorithm will enrich the world around it.  Essentially, reincarnation is the idea of putting algorithms in different bodies to see what they do – and experiences and meditation are ways to retrain that algorithm.  I don’t believe in strict reincarnation (though in a way, each moment we live that contains an instance of ourselves is a reincarnation – that instance runs an algorithm that is quite similar to other instances but has likely undergone some changes even on the moment-to-moment level).  However, if we accept a karmic frame for these algorithm tests, it essentially asserts that an algorithm which reliably enriches its surroundings has a higher chance of being rewarded – being used in contexts that make the algorithm “happier”.  Sometimes a rewarded algorithm starts failing its karmic tests and falls again, which is why meditation and breaking the cycle are important: they effectively optimize for algorithms with really good metaprogramming skills.  While deontology largely optimizes for right actions that work well even if everyone does them, (this conception of) karma is a little more individualized while still optimizing for collectivism.

In practice, I think karma is essentially decision theory.  You are an algorithm that is likely to repeat actions that fulfill a reward function; you face an action space, and probabilistically some actions will use more resources than others.  The iteration is what determines whether people should cooperate with you – if you do a lot of positive karma things, you’re probably safe to cooperate with; if you do a lot of negative karma things, well, maybe defecting makes more sense when making decisions concerning you.
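As a rough illustration of this decision-theory reading – with the agents, the karma threshold, and the scoring all being made-up assumptions for the sketch, not any established model – here is how cooperating or defecting could hinge on an observed track record of positive- and negative-karma actions:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    history: list = field(default_factory=list)  # +1 or -1 per observed action

    def record(self, karma_sign: int) -> None:
        self.history.append(karma_sign)

    @property
    def karma(self) -> float:
        # Average observed karma; 0.0 if we have seen nothing yet.
        return sum(self.history) / len(self.history) if self.history else 0.0

def should_cooperate(other: Agent, threshold: float = 0.0) -> bool:
    # Cooperate iff the other agent's observed karma clears the threshold.
    return other.karma > threshold

alice = Agent("alice")
for sign in (+1, +1, +1, -1):   # mostly positive-karma actions
    alice.record(sign)

bob = Agent("bob")
for sign in (-1, -1, +1, -1):   # mostly negative-karma actions
    bob.record(sign)

print(should_cooperate(alice))  # True  -> probably safe to cooperate with
print(should_cooperate(bob))    # False -> maybe defecting makes more sense
```

Under this toy rule, it is the accumulated record rather than any single action’s outcome that determines whether others should cooperate with you.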

Overall, I find that it is easier to consider the “goodness” and “badness” of my choices with this frame – rather than trying to figure out which rule a choice follows, or trying to calculate the actual downstream personal utility every time I do something, the middle ground of considering the probability that an action will bite me or other people in the ass later seems quicker and more likely to lead to better outcomes over a long period of time.

Discussion questions:  What are ways you have thought about karma in the past?  Does this conception seem more useful in any way?  What does it look like for you to consider the probability of resources consumed versus the probability of resources created?

On Social Harmony, Truth, and Building a Culture

(Epistemic Status:  Trying on a new approach – likely framework agnostic; literally thought of this tonight)

Have you ever tried just telling someone the truth without filters?  Saying what you think, why you think it, critically considering what they’re saying, and not trying to be acceptable?  Sharp culture has aspects of this; most other cultures I’ve experienced do not – there’s a drive to keep things smooth and harmonious.  “Brutally honest” is often decried as uncompassionate – it’s perceived most frequently (and often correctly) as a bid for dominance over someone under the cover of helping them.  Social harmony is armor in low trust environments – it keeps the peace and allows you to act slowly on things without having to deal with attacks on all sides in status games.

So what happens in cultures where you don’t have to put that armor on?

Societal norms around politeness are not optimized for growth and change.  They are optimized for perpetuation of the existing structure and building on that structure slowly.  Sayings like “if you don’t have anything nice to say, don’t say anything at all”, misinterpretations of “treat others as you would like to be treated”, “the truth is often in the middle”, these are all optimized for continuing the status quo and sparing the feelings of others, at the expense of their cognition.

Unfortunately, it turns out that telling lies and buying into narratives damages your ability to think – at best, you can sandbox your narratives when interacting with low trust environments, but if you spend most of your time with people optimizing for harmony, it is inevitable your thought patterns will become more and more corrupt in favor of the status quo and validation – your heavy armor slows you down, and you take a lot more hits than you really need to.  It gets dings and scratches, and you get scars.

If you’re with people you can trust to hear what you’re saying when you aren’t filtering for niceness, for harmony, however…

You become a lot faster.  You take off your armor.  You take out a fencing epee – you aren’t trying to hurt everyone around you for keeps, but you train.  You spar and build stronger models.  Your group, your society mutually agrees to rules of engagement that are meant to improve you and those around you.  Rather than having your epistemics atrophy, they become more rigorous.  This is not even a new idea – Ben Franklin had the concept of the junto, a society of mutual improvement.  As quoted from his autobiography:  “Our debates were to be under the direction of a president, and to be conducted in the sincere spirit of inquiry after truth, without fondness for dispute or desire of victory; and to prevent warmth, all expressions of positiveness in opinions, or direct contradiction, were after some time made contraband, and prohibited under small pecuniary penalties.”

 

The maxims of this society are “move fast, break things”, “treat others as you would like to be treated”, “you are the sum of the five people closest to you”.  The ability to think is always the terminal value.

This does not mean that you never don the heavy armor again – we live in a world where most societies are low trust, where most interactions require not letting the politeness tools atrophy either.  You need to be able to wear this armor even within your high trust society – but rather than being the default assumption, it is better for it to be a deliberate action that is respected by the rest of the group, one that still relies on truth in introspection.

There are critical ingredients to being the kind of person who can exist in a truth-based culture, and you will fail, and you will be told that you have failed, and this will be ok.  The responsibility to accept criticism is the foundation.  This does NOT mean wordlessly submitting to someone else’s evaluation of you; it is an act of engaging in good faith with other models of you, generated from the outside.  You also take on the responsibility to criticize.  These aspects alone create a loop of honest feedback rather than noise.  They form the core of a truth-based culture.  The mechanics of social capital here are not those of criticism as a status-scoring activity for the criticizer, but of increased status for both the criticized and the criticizer.

The next layer of this culture involves strong norms against performance.  This is hard – to some degree, everything will be performative.  Critical sessions will be hosted just for selfish gain and wheel-spinning, and change won’t occur.  People will make cutting remarks and default to “Well, I was just being honest.”  However, the people who take the norms in good faith will start becoming noticeably stronger – the performers will be weeded out purely by that differential.  Related to this is a norm where apologies are only made for lies or actual harms caused.  Apologies for “hurting feelings”, “not being good enough”, and things in that vein are often performative – they aren’t a legitimately good faith intent to be better.  Lying is by far the deepest sin when this culture is working well.  On this level as well is a removal of social status for mutual validation.  Validation is social candy – validation-seeking behavior is a mode antithetical to accomplishing things – it’s a substitute for action.  There are things underneath validation-seeking that should be introspected on, noted, and expressed.  However, validation seeking and phatic validation giving are anti-truth and anti-action.

On the next layer, things get self-critical.  The above guidelines are not meant to be enforced on others.  All the time you spend hating and punishing is time not spent being better yourself.  Guilt spirals and censure spirals are both poison to this truth-based culture.  Optimize for having real conversations, with disagreements, criticisms, and truth, but don’t waste time noticing all the times this standard isn’t reached, unless the conversation is meant to be about the standard.  The meta escape valve is also phatic and anti-truth.  Notice when you are telling stories or trying to socially maneuver yourself out of a situation – and then express your feelings directly.  Not as a social move, but as information about the world.  Go back to the cardinal rules – say true things, take the responsibility of being criticized, take the responsibility to criticize others.  Then rebuild your layers.

On the next layer, action spaces are less constrained.  This is where putting the heavy armor of harmony back on is allowed.  When your foundation of truth is strong, playing in social reality and narrative becomes safer – it’s still hazardous, but this truth-based culture is meant to allow a gentle detox from it.  Sometimes this means wearing armor in this space.  Sometimes this means status jockeying and playing social games.  Sometimes this means trying things out that are only useful in the wider world.  When your social context has passed the lower layers, these can be done.

Overall, I think the above culture is possible – the old Less Wrong community had this in spades – a lot more was understood about what it meant to have a project and be status blind with it.  I am not yet a member who would qualify for this culture, but I strive for it.  I wish to build this, ideally following the model of the junto and other social technologies.  If you also wish to build this, or be a part of it, and experiment with social technology, I want to get to know you better.  I want to shed the heavy armor and move fast, and strike with an epee.  I want to be around people who do this as well.

Discussion Questions:  How much of the world around you requires you to harmonize?  How do you maintain your epistemic purity in environments optimized for making it hard to think?  What is your vision for ideal societal norms?  How would you build a high trust environment?  How would you improve the model described above?