Player vs. Character
Epistemic Status: Confident
This idea is actually due to my husband, Andrew Rettek, but since he doesn’t blog, and I want to be able to refer to it later, I thought I’d write it up here.
In many games, such as Magic: The Gathering, Hearthstone, or Dungeons and Dragons, there’s a two-phase process. First, the player constructs a deck or character from a very large sample space of possibilities. This is a particular combination of strengths and weaknesses and capabilities for action, which the player thinks can be successful against other decks/characters or at winning in the game universe.
The choice of character often determines the strategies that character can use in the second phase, which is actual gameplay. In gameplay, the character can only use the affordances that it’s been previously set up with.
This means that there are two separate places where a player needs to get things right: first, in designing a strong character/deck, and second, in executing the optimal strategies for that character/deck during gameplay.
(This is in contrast to games like chess or go, which are single-level; the capacities of black and white are set by the rules of the game, and the only problem is how to execute the optimal strategy. Obviously, even single-level games can already be complex!)
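If it helps to see the structure in code, here is a minimal sketch of the two levels (everything here, from the function names to the affordance set, is invented purely for illustration):

```python
# Toy sketch of a two-level game. All names and affordances are
# hypothetical, chosen only to make the two phases concrete.

def build_character():
    # Phase 1: pick a fixed set of capabilities from a huge space.
    return {"affordances": {"attack", "heal"}}

def play(character, action):
    # Phase 2: gameplay can only draw on what phase 1 set up.
    if action not in character["affordances"]:
        raise ValueError(f"{action!r} is not on this character sheet")
    return f"executing {action}"

hero = build_character()
print(play(hero, "attack"))    # fine: chosen back in phase 1
try:
    play(hero, "stealth")      # fails: never built into the character
except ValueError as err:
    print(err)
```

The point of the sketch is just the dependency: play() has no access to anything build_character() didn’t provide.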
The idea is that human behavior works very much like a two-level game.
The “player” is the whole mind, choosing subconscious strategies. The “elephant”, not the “rider.” The player is very influenced by evolutionary pressure; it is built to direct behavior in ways that increase inclusive fitness. The player directs what we perceive, do, think, and feel.
The player creates what we experience as “personality”, fairly early in life; it notices what strategies and skills work for us and invests in those at the expense of others. It builds our “character sheet”, so to speak.
Note that even things that seem like “innate” talents, like the savant skills or hyperacute senses sometimes observed in autistic people, can be observed to be tightly linked to feedback loops in early childhood. In other words, savants practice the thing they like and are good at, and gain “superhuman” skill at it. They “practice” along a faster and more hyperspecialized path than what we think of as a neurotypical “practicing hard,” but it’s still a learning process. Savant skills are more _rigidly fixed_ and seemingly “automatic” than non-savant skills, but they still change over time — e.g. Stephen Wiltshire, a savant artist who manifested an ability to draw hyper-accurate perspective drawings in early childhood, has changed and adapted his art style as he grew up, and even acquired _new_ savant talents in music. If even savant talents are subject to learning and incentives/rewards, certainly ordinary strengths, weaknesses, and personality types are likely to be “strategic” or “evolved” in this sense.
The player determines what we find rewarding or unrewarding. The player determines what we notice and what we overlook; things come to our attention if it suits the player’s strategy, and not otherwise. The player gives us emotions when it’s strategic to do so. The player sets up our subconscious evaluations of what is good for us and bad for us, which we experience as “liking” or “disliking.”
The character is what executing the player’s strategies feels like from the inside. If the player has decided that a task is unimportant, the character will experience “forgetting” to do it. If the player has decided that alliance with someone will be in our interests, the character will experience “liking” that person. Sometimes the player will notice and seize opportunities in a very strategic way that feels to the character like “being lucky” or “being in the right place at the right time.”
This is where confusion often sets in. People will often protest “but I _did_ care about that thing, I just forgot” or “but I’m _not_ that Machiavellian, I’m just doing what comes naturally.” This is true, because when we talk about ourselves and our experiences, we’re speaking “in character”, as our character. The strategy is not going on at a conscious level. In fact, I don’t believe we (characters) have direct access to the player; we can only infer what it’s doing, based on what patterns of behavior (or thought or emotion or perception) we observe in ourselves and others.
Evolutionary psychology refers to the player’s strategy, not the character’s. (It’s unclear which animals even have characters in the way we do; some animals’ behavior may all be “subconscious”.) So when someone speaking in an evolutionary-psychology mode says that babies manipulate their parents into not having more children, for instance, that obviously doesn’t mean that my baby is a cynically manipulative evil genius. To him, it probably just feels like “I want to nurse at night. I miss Mama.” It’s perfectly innocent. But of course, this has the effect that I can’t have more children until I wean him, and that’s in his interest (or, at least, it was in the ancestral environment, when food was more scarce).
Szaszian or evolutionary analysis of mental illness is absurd if you think of it as applying to the character — of course nobody wakes up in the morning and decides to have a mental illness. It’s not “strategic” in that sense. (If it were, we wouldn’t call it mental illness, we’d call it feigning.) But at the player level, it can be fruitful to ask “what strategy could this behavior be serving the person?” or “what experiences could have made this behavior adaptive at one point in time?” or “what incentives are shaping this behavior?” (And, of course, externally visible “behavior” isn’t the only thing the player produces: thoughts, feelings, and perceptions are also produced by the brain.)
It may make more sense to frame it as “what strategy is your brain _executing_?” rather than “what strategy are _you_ executing?” since people generally identify as their characters, not their players.
Now, let’s talk morality.
Our intuitions about praise and blame are driven by moral sentiments. We have emotional responses of sympathy and antipathy, towards behavior of which we approve and disapprove. These are driven by the player, which creates incentives and strategic behavior patterns for our characters to play out in everyday life. The character engages in coalition-building with other characters, forms and breaks alliances with other characters, honors and shames characters according to their behavior, signals to other characters, etc.
When we, speaking as our characters, say “that person is good” or “that person is bad”, we are making one move in an overall strategy that our players have created. That strategy is the determination of when, in general, we will call things or people “good” or “bad.”
This is precisely what Nietzsche meant by “beyond good and evil.” Our notions of “good” and “evil” are character-level notions, encoded by our players.
Imagine that somewhere in our brains, the player has drawn two cartoons, marked “hero” and “villain”, that we consult whenever we want to check whether to call another person “good” or “evil.” (That’s an oversimplification, of course; it’s just for illustrative purposes.) Now, is the choice of cartoons itself good or evil? Well, the character checks… “Ok, is it more like the hero cartoon or the villain cartoon?” The answer is “ummmm… type error.”
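For those who want the “type error” spelled out literally, here is a toy sketch (the types and the resembles_hero field are all invented for illustration): moral judgment is a function defined on characters, and asking it about a player isn’t answered “yes” or “no”; the question fails to type-check.

```python
# Toy illustration of the type error. All types and fields invented.
from dataclasses import dataclass

@dataclass
class Character:
    resembles_hero: bool   # how close to the "hero" cartoon?

@dataclass
class Player:
    strategy: str          # pure strategy: no feelings, virtues, or vices

def is_good(c: Character) -> bool:
    # Moral judgment is only defined over characters.
    return c.resembles_hero

print(is_good(Character(resembles_hero=True)))   # True
# is_good(Player(strategy="tit for tat"))
#   ^ a static checker such as mypy rejects this line: a Player is
#     not a Character, so "is the player good?" doesn't type-check.
```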
The player is not like a hero or a villain. It is not like a person at all, in the usual (character-level) sense. Characters have feelings! Players don’t have feelings; they are beings of pure strategy that create feelings. Characters can have virtues or vices! Players don’t; they _create_ virtues or vices, strategically, when they build the “character sheet” of a character’s skills and motivations. Characters can be _evaluated_ according to moral standards; players set those moral standards. Players, compared to us characters, are hyperintelligent Lovecraftian creatures that we cannot relate to socially. They are beyond good and evil.
However! There is another, very different sense in which players _can_ be evaluated as “moral agents”, even though our moral sentiments don’t apply to them.
We can observe what various game-theoretic strategies do and how they perform. Some, like “tit for tat”, perform well on the whole. Tit-for-tat-playing agents cooperate with each other. They can survive pretty well even if there are different kinds of agents in the population; and a population composed entirely of tit-for-tat-ers is stable and well-off.
While we can’t call cellular automata performing game strategies “good guys” or “bad guys” in a sentimental or socially-judgmental way (they’re not people), we can totally make objective claims about which strategies dominate others, or how strategies interact with one another. This is an empirical and theoretical field of science.
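As a minimal sketch of the kind of objective claim this field makes, here is an iterated prisoner’s dilemma in Python (the payoff numbers are the standard textbook values; the rest is invented for illustration). Head-to-head, tit-for-tat gives up a few points to a pure defector, but a population of tit-for-tat-ers ends up far better off than a population of defectors:

```python
# Iterated prisoner's dilemma, sketched minimally. Payoffs are the
# textbook values: mutual cooperation 3/3, mutual defection 1/1,
# sucker 0 / temptation 5.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    # Cooperate first; then copy the opponent's last move.
    return "C" if not history else history[-1]

def always_defect(history):
    return "D"

def match(a, b, rounds=100):
    hist_a, hist_b = [], []   # what each side has seen the other do
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_b)  # a observes b's move, and vice versa
        hist_b.append(move_a)
    return score_a, score_b

print(match(tit_for_tat, tit_for_tat))      # (300, 300): cooperation
print(match(tit_for_tat, always_defect))    # (99, 104): exploited once
print(match(always_defect, always_defect))  # (100, 100): mutual defection
```

(300, 300) versus (100, 100) is an empirical output of the dynamics, not a sentiment about who the “good guys” are.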
And there is a kind of “morality”, which I almost hesitate to call morality because it isn’t very much like social-sentiment-morality at all, but which is very important: it simply distinguishes the performance of different strategies. Not “like the hero cartoon” or “like the villain cartoon”, but “win” and “lose.”
At this level you can say “look, objectively, people who set up their tables of values _in this way_, calling X good and Y evil, _are gonna die_.” Or “this strategy is conducting a campaign of unsustainable exploitation, which will work well in the short run, but will flame out when it runs out of resources, and so it’s gonna die.” Or “this strategy is going to lose to that strategy.” Or “this strategy is fine in the best-case scenario, but it’s not robust to noise, and if there are any negative shocks to the system, it’s going to result in everybody dying.”
“But what if a losing strategy is good?” Well, if you are in that value system, of course you’ll say it’s good. Also, you will lose.
Mother Teresa is a saint, in the literal sense: she was canonized by the Roman Catholic Church. Also, she provided poor medical care for the sick and destitute — unsterilized needles, no pain relief, conditions in which tuberculosis could and did spread. Was she a good person? It depends on your value system, and, obviously, according to some value systems she was. But it seems that a population that takes Mother Teresa as its ideal (relative to, say, Florence Nightingale) will be a population with more deaths from illness, not fewer, and more pain, not less. A strategy that says “showing care for the dying is better than promoting health” will lose to one that can actually reward actions that promote health. (To be fair, for most of human history we didn’t have ways to heal the sick that were clearly better than Mother Teresa’s, and even today we don’t have credit-allocation systems that reliably reward the things that keep people alive and healthy; it would be wrong to dump on Catholicism too much here.) That’s the “player-level” analysis of the situation.
Some game-theoretic strategies (what Nietzsche would call “tables of values”) are more survival-promoting than others. That’s the sense in which you can get from “is” to “ought.” The Golden Rule (Hillel’s, Jesus’s, Confucius’s, etc.) is a “law” of game theory, in the sense that it is a universal, abstract fact, one that even a Lovecraftian alien intelligence would recognize, that it’s an effective strategy; that is why it keeps being rediscovered around the world.
But you can’t adjudicate between character strategies just by being a character playing your strategy. For instance, a Democrat usually can’t convert a Republican just by being a Democrat at him. To change a player’s strategy is more like “getting the bodymind to change its fundamental assessments of what is in its best interests.” Which can happen, and can happen deliberately and with the guidance of the intellect! But not without some… what you might call wiggling things around.
The way I think the intellect plays into “metaprogramming” the player is indirect; you can infer _what_ the player is doing, do some formal analysis about how that will play out, comprehend (again at the “merely” intellectual level) whether there’s an error or something that’s no longer relevant/adaptive, plug that new understanding into _some_ change that the intellect can affect (maybe “let’s try this experiment”), and maybe somewhere down the chain of causality the “player”’s strategy changes. (Exposure therapy is a simple example, probably much simpler than most: add some experiences of the thing not being dangerous, and the player determines it really isn’t dangerous and stops generating fear emotions.)
You don’t get changes in player strategies just by executing social praise/blame algorithms, though; those algorithms are for interacting with other characters. Metaprogramming is… I want to say “cold” or “nonjudgmental” or “asocial”, but none of those words is quite right, because they describe character traits or personalities or mental states, and it’s not a character-level thing at all. It’s a thing Lovecraftian intelligences can do to themselves, in their peculiar tentacled way.