I’ve heard from a number of secular-ish sources (Carse, Girard, Arendt) that the essential contribution of Christianity to human thought is the concept of forgiveness. (Ribbonfarm also has a recent post on the topic of forgiveness.)

I have never been a Christian and haven’t even read all of the New Testament, so I’ll leave it to commenters to recommend Christian sources on the topic.

What I want to explore is the notion of kindness without a smooth incentive gradient.

Most human kindness is incentivized. We do things for others, and get things in return. Contracts and favors alike are reciprocal actions. And this makes a lot of sense, because trade is sustainable. Systems of game-theoretic agents that do some variant of tit-for-tat exchange tend to thrive, compared to agents that are freeloaders or altruists. Freeloaders can only exploit for so long before they destroy the system they’re exploiting, or suffer the retribution of tit-for-tat players; pure altruists burn themselves out quickly.
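To make that concrete, here’s a toy sketch (mine, and nothing rigorous): a replicator-dynamics tournament in which tit-for-tat, an always-defect “freeloader”, and an always-cooperate “altruist” play an iterated prisoner’s dilemma, and strategies that score above the population average grow in share. The payoff matrix, match length, and starting shares are arbitrary illustrative choices.

```python
# Toy evolutionary tournament: tit-for-tat vs. "freeloader" (always defect)
# vs. "altruist" (always cooperate). All numbers here are illustrative.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else "C"   # copy opponent's last move

def freeloader(opp_history):
    return "D"                                        # always defect

def altruist(opp_history):
    return "C"                                        # always cooperate

STRATEGIES = {"tit-for-tat": tit_for_tat, "freeloader": freeloader, "altruist": altruist}

def avg_payoff(strat_a, strat_b, rounds=200):
    """Average per-round payoff to strat_a in a match against strat_b."""
    hist_a, hist_b, score = [], [], 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score += PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score / rounds

names = list(STRATEGIES)
M = {a: {b: avg_payoff(STRATEGIES[a], STRATEGIES[b]) for b in names} for a in names}

# Replicator dynamics: strategies that beat the population average grow.
shares = {"tit-for-tat": 0.2, "freeloader": 0.2, "altruist": 0.6}
for gen in range(61):
    if gen % 15 == 0:
        print(gen, {a: round(shares[a], 2) for a in names})
    fitness = {a: sum(shares[b] * M[a][b] for b in names) for a in names}
    mean = sum(shares[a] * fitness[a] for a in names)
    shares = {a: shares[a] * fitness[a] / mean for a in names}
# Freeloaders boom while there are altruists left to exploit, then collapse
# into mutual defection; tit-for-tat ends up with most of the population.
```

Replicator dynamics is just the simplest way to model “strategies that do well become more common”; any population model with that property tells the same story.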

Sometimes kindness is reciprocated at the genetic rather than the personal level (see kin selection).

Sometimes it’s reciprocated by long-term or indirect means — you can sometimes get social credit for being kind, even if the person you help can’t directly reciprocate. A reputation for generosity _to allies and innocents_ makes you look strong and worth allying with, so you come out ahead in the long run.

And one of the ways we implement the incentives towards kindness in practice is through sympathy. When we see another’s suffering, we feel an urge to be kind to them, and a warm fuzzy reward if we help them. That way, kindness is feasible along local emotional incentive gradients.

But, of course, sympathy itself is carefully optimized to make sure we only sympathize with those whom we’d come out ahead by helping. Sympathy is not merely a function of suffering. It is easier to sympathize with children than with adults, with the grateful than the ungrateful, with those who have experienced culturally acceptable “grounds for sympathy” (such as divorce, loss of a loved one, illness, job loss, crime victimization, car trouble, or fatigue, according to this sociological study). We sympathize more with those whose suffering is perceived as unjust — though this may be something of a circular notion.

This leaves out certain forms of suffering.

  • The stranger, who is not part of your group, will receive less sympathy. So will the outsider or social deviant.
  • The person with a permanent problem that can’t be easily fixed will eventually receive less sympathy, because he cannot be restored to happiness and put back in a position to show gratitude or return favors.
  • The overly self-reliant person will receive less sympathy; if sympathy is like a “credit account”, the person who has never opened one will be offered less credit than one who maintains a modest balance. We require vulnerability and a show of weakness before our sympathy will turn on.
  • The angry or assertive person who does not show gratitude or deference will receive less sympathy. Appeasement displays evoke sympathy and reconciliation.
  • The person whose suffering takes an illegible form will receive less sympathy.

To be a recipient of sympathy one must be both weak and strong; weak, to show one really has received a misfortune; strong, to show one can be a useful ally someday. Children are the perfect example, because they are small and vulnerable today, but can grow to be strong as adults. The victims of temporary and easily reversible bad luck are in a similar position: vulnerable today, but soon to be restored to strength. Permanently disadvantaged adults, people who may be poor/disabled/nonwhite/etc and have developed the self-reliance or resentment associated with coping with long-term deprivation that isn’t going away, are less easy to sympathize with.

Some of this has been shown experimentally; subjects in an experiment who viewed other subjects appearing to receive electric shocks were more unsympathetic when they were told the shocks would continue _in a subsequent session_, versus when they were told the shocks had ended, or when they were told that their choices could stop the shocks. _Permanent_ suffering is less sympathetic than _temporary or fixable_ suffering.

Sympathy provides an immediate emotional incentive to respond to suffering with kindness, and it’s pretty well calibrated to be “good game theory” — but it’s not perfect by any means.

Cooperation Without Sympathy

Imagine a space alien — a grotesque creature, one whose appearance makes you want to vomit — offers you a deal. Let’s say this alien is, like the creatures in Octavia Butler’s _Xenogenesis_ trilogy, a “gene trader”, one who can splice DNA with its bodily organs, and has a drive towards genetic engineering analogous to what Earth animals experience as a sex drive. If you have “sex” with the alien and produce part-alien babies, it will give you and your children access to the vastly advanced powers in its alien genes, in exchange for gratifying its biological urge and allowing it to benefit from your genes.

From an intuitive standpoint, this is grotesque. The alien is not sexy. You cannot feel compassion for its desire to trade genes with you. It feels violating, disgusting, unacceptable. You never evolved to want to breed with aliens.

And yet the game theory is sound. Superpowers are a grand thing to have. Even sexiness exists as a way to incentivize you to have strong children — and your alien children will undoubtedly be strong.

It’s a game-theoretic win-win but not a sympathetic win-win. Other humans will not find your alien babies sympathetic, or your choice to cooperate with the aliens a pro-social one.

It’s a sort of betrayal against your fellow humans, in that you are breaking the local game of “sex is between humans” and unilaterally gaining superpowered alien babies; but it’s a choice that any human could make as easily as you, so you aren’t leaving others _permanently_ worse off, or depleting a valuable commons. Since all humans would be better off with alien genes, it’s not really a “defection” if you take the lead in doing something that would be beneficial if done by everyone.

Butler is really good at expressing how a “peaceful win-win” — on paper, an obviously correct choice — can feel disgusting. Sympathy incentives can’t get you to win-win cooperation, if the thing that the other person wants is not something that _you_ can imagine wanting.

This is an example of incentives for cooperation being present _but not smooth_. It is in your interest to “gene trade”, but you only know that intellectually; you cannot be guided to it naturally through sympathy.

In the same way, helping someone “unsympathetic” but valuable is a “good investment” but doesn’t feel like it. You often hear about this in disability contexts. “All you have to do is give me a relatively cheap accommodation and suddenly I become way more productive! How is this not a good deal for you?” Well, it may be a good _deal_ but not a _sympathetic_ deal, because people’s mental accounting doesn’t match reality. If they think that the person “ought to be able to” get along without the accommodation, sympathy doesn’t provoke them to help; and if they don’t have a strong intuitive sense of people being _plastic_, so that they function differently in different environments, they don’t _really intuitively believe_ that a blind person can be an expert programmer if given a screen reader, for instance. Abstractly it’s a good deal, but concretely it’s _not_ being guided smoothly by emotional gradients; it requires an act of detached cognition.

In practice, you can guide a situation back to sympathy, and that’s usually the best way to get the trade done. Try to play up the sympathetic qualities of the trade partner, try to analogize the requested action to things that are considered moral duties in one’s social context. Try to set up emotional guardrails, engineer the social environment so the deal can be done without abstract thought.

But this isn’t really feasible for a single individual to do. If you’re _alone_ and nobody wants to help you, even if you reciprocate, because you’re not a “sympathetic character”, you can’t reshape social pressures to make yourself sympathetic all by yourself. If we aren’t going to brutally destroy the lives of valuable people who don’t already have a posse, somebody is going to have to _think_, to go beyond gradient-following.

I think that to get the best results, thought is actually necessary. By “thought” I mean the God’s-eye view, the long view, the ability to ask “where do I want to go?” and potentially have an answer that isn’t “whichever way I’m currently going.” But what emotional _or_ psychological _or_ behavioral scaffolding promotes thought? We are, after all, made of meat. Since sometimes humans do think, there must be a way to _build thought out of meat_. I’m still trying to understand how that is done.

Forgiveness and the Very Long Term

Forgiveness, on a structural level, is choosing not to call in a debt. I’m _entitled_ to compensation, according to the rules of whatever game I’m playing, but I don’t demand it.

Forgiveness is a local loss to the forgiver. If everyone forgave everything all the time, it wouldn’t be remotely sustainable.

But a little bit of forgiveness is useful, in exactly the same way that bankruptcy is useful. Bankruptcy means that there’s a floor to how deep in debt you can sink, which allows loss-averse humans to be willing to take on debt at all, which means that more high-expected-value investments get made.

Tit-for-tat with forgiveness outperforms plain tit-for-tat.
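The usual way to see this is to add noise, so that moves occasionally get flipped by accident. Here’s a minimal sketch (the noise rate, forgiveness probability, and payoffs are arbitrary choices of mine): two plain tit-for-tat players get trapped echoing each accidental defection back and forth, while a “generous” variant that sometimes lets a defection slide recovers.

```python
# Noisy iterated prisoner's dilemma: plain tit-for-tat vs. tit-for-tat with
# forgiveness. Noise rate, forgiveness probability, and payoffs are
# illustrative assumptions.
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
NOISE = 0.05            # chance that an intended move gets flipped by accident

def tit_for_tat(opp_history):
    return opp_history[-1] if opp_history else "C"

def forgiving_tit_for_tat(opp_history, forgive=0.2):
    if opp_history and opp_history[-1] == "D" and random.random() > forgive:
        return "D"      # usually retaliate...
    return "C"          # ...but sometimes let a defection slide

def noisy(move):
    return ("D" if move == "C" else "C") if random.random() < NOISE else move

def avg_payoff(strat, rounds=100_000):
    """Average per-round payoff when two copies of `strat` play each other."""
    hist_a, hist_b, total = [], [], 0
    for _ in range(rounds):
        move_a, move_b = noisy(strat(hist_b)), noisy(strat(hist_a))
        total += PAYOFF[(move_a, move_b)] + PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return total / (2 * rounds)

random.seed(0)
print("plain tit-for-tat:           ", round(avg_payoff(tit_for_tat), 2))
print("tit-for-tat with forgiveness:", round(avg_payoff(forgiving_tit_for_tat), 2))
# Plain tit-for-tat averages around 2.25, bouncing among all four outcomes as
# each accidental defection echoes; the forgiving variant stays near mutual
# cooperation, around 2.8 out of a possible 3.
```

That’s the sense in which forgiveness wins: not by being nicer in a frictionless world, but by not letting one mistake poison the whole relationship.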

You can also think of forgiveness as a function of time. If you expect that someone will be net positive to you in the long run, you can accept them costing you in the short run, and not demand payment now. In other words, you extend them cheap credit. As your time horizon goes to infinity (or your discount rate goes to zero), it can become possible to not demand payment at all, to forgive the loan entirely. If it doesn’t matter whether they pay you back tomorrow, or in a hundred years, or in a thousand, but you expect them to be able to pay someday, then you don’t really need the repayment at _any_ time, and you can drop it.
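In present-value terms (a back-of-the-envelope sketch, with arbitrary numbers): a repayment of 100 that arrives t years from now is worth 100 / (1 + r)^t today, and as the discount rate r goes to zero, it stops mattering when the repayment arrives.

```python
# Present value of a repayment of 100, arriving t years from now, at discount
# rate r. The rates and horizons are arbitrary illustrative numbers.
for r in (0.05, 0.01, 0.001, 0.0):
    values = [round(100 / (1 + r) ** t, 1) for t in (1, 10, 100, 1000)]
    print(f"r = {r:>5}: worth {values} after 1, 10, 100, 1000 years")
# At r = 5%, a repayment a thousand years out is worth essentially nothing; at
# r = 0 it is worth exactly as much as being repaid tomorrow, which is why a
# long enough horizon makes it cheap to forgive the debt outright.
```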

This is sort of similar to the heuristic of “be tolerant and kind to all persons, you never know when they might be valuable.” The fairy tales and myths about being kind to strangers and old ladies, in case they’re gods in disguise. You don’t want to burn bridges with anybody, you don’t want to kick anybody wholly out of the game, if you expect that eventually (and eventually may be very long indeed, and perhaps not within your lifetime), this will pay off.

Tit-for-tat or reinforcement-learning or behaviorism — reward what you want to see, punish what you don’t — makes a lot of sense, _except_ when you factor in time and death. If you punish someone so hard that they die before they have a chance to turn around and improve, you’ve lost them.

And, on a more abstract level: it can make sense to disincentivize the slightly worse thing in general (that’s how evolution works), but that leads to things like rare languages dying out. Yes, it’s perfectly rational to speak Spanish rather than Zapotec, and Zapotec-speakers need to make a living too, but my inner _Finite and Infinite Games_ says “wouldn’t you like to preserve Zapotec from dying out altogether? Couldn’t it come in handy someday?” Language preservation is an example of preserving a “loser” because, if the world went on forever, nothing would be permanently guaranteed to lose.

It’s like having a slightly noisy update mechanism. Mostly, you reinforce what works and penalize what doesn’t. But sometimes, or to a small degree, you forgive, you rescue someone or something that would ordinarily be penalized, and save it, in case you need it later. In gradient descent, a little stochasticity keeps you from getting stuck in poor local minima. In economics, a little bankruptcy or the occasional jubilee keeps you from getting stuck in stagnant, monopolistic conditions. You don’t ruthlessly weed out the “bad” all the time.
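Here’s the gradient-descent version of that, as a toy sketch (the function, step size, and noise schedule are all made up for illustration): a one-dimensional objective with a shallow basin and a deeper one, where plain gradient descent started in the shallow basin stays there, and the same descent with a little annealed noise usually hops out.

```python
# Gradient descent on a 1-D function with two basins: a shallow minimum near
# x = +1 and a deeper one near x = -1. Starting in the shallow basin, plain
# gradient descent stays put; adding a little annealed noise to each step
# usually lets the iterate hop the barrier. Function, step size, and noise
# schedule are illustrative assumptions.
import random

def f(x):
    return (x * x - 1) ** 2 + 0.3 * x          # global minimum near x = -1

def grad(x):
    return 4 * x * (x * x - 1) + 0.3

def descend(x, steps=5000, lr=0.02, noise=0.0):
    for t in range(steps):
        scale = noise * (1 - t / steps)         # anneal the noise toward zero
        x -= lr * grad(x) + scale * random.gauss(0, 1)
    return x

def basin(x):
    return "deeper basin (x ~ -1)" if x < 0 else "shallow basin (x ~ +1)"

random.seed(0)
print("plain descent ends in the", basin(descend(1.2)))
escaped = sum(descend(1.2, noise=0.3) < 0 for _ in range(50))
print(f"noisy descent ends in the deeper basin in {escaped}/50 runs")
# The deterministic run always settles back into the shallow minimum it
# started near; the noisy runs usually cross over and find the deeper one.
```

The noise is doing the same job as the occasional undeserved rescue: most of the time it’s wasted, and once in a while it gets you somewhere strictly better.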

Sometimes you throw some resources at someone who “doesn’t deserve them” just in case you’re wrong, or to get out of the nasty feedback loops where someone behaves badly in response to being treated badly. If you unilaterally gave them some help, you might allow them to escape into a cooperative, reciprocal-benefit situation, which you’d actually like better! Even if this didn’t work one particular time, doing it in general, at some frequency, might in expectation work out in your favor.

A sense of the very long term may also make sympathy easier, because in the very long term nothing is permanent and everything is eventually mutable. If permanent suffering is what makes people unsympathetic, then a sense of the very long term makes it possible to realize that under different circumstances that person might become fine, and thus their suffering is ultimately the “temporary kind” that can elicit sympathy. “The stone that the builders rejected/ has become the cornerstone” — well, if you wait long enough, that might actually happen. Things could change; the “loser”’s or “villain”’s status on the bottom is not eternal; so with a long-enough-term mindset it’s not actually appropriate to treat him as definitively a “loser” or a “villain.”

Forgiveness can be a lot easier to implement than “cooperation without sympathy”, which requires you to actually ascertain where win-wins are, with your mind. You can _mindlessly_ add a little forgiveness to a system. Machine-learning algorithms can do it. Which may make it a useful tool in the process of “trying to build thought out of meat.”