Asking Permission
Compliance Costs
Governance is about setting policies: rules for what people can do, who has authority to make decisions, what procedures must be used for decisionmaking. Governance encompasses both what governments do and what non-governmental bodies do — businesses, voluntary or charitable organizations, online discussion spaces. Anything that has written policies needs to think about governance and how to do it well, and a lot of groups do governance badly because they don’t know they’re doing it.
In this post I want to talk about rules, permissions, and what kinds of considerations should be kept in mind when requiring people to ask for permission before doing things.
Rules, of course, are often necessary, but always impose some difficulty or inconvenience on those who have to follow them.
The burden of a rule can be separated into (at least) two components.
First, there’s the direct opportunity cost of not being allowed to do the things the rule forbids. (We can include here the penalties for violating the rule.)
Second, there’s the “cost of compliance”, the effort spent on finding out what is permitted vs. forbidden and demonstrating that you are only doing the permitted things.
Separating these is useful. You can, at least in principle, aim to reduce the compliance costs of a rule without making it less stringent.
For instance, you could aim to simplify the documentation requirements for environmental impact assessments, without relaxing standards for pollution or safety. “Streamlining” or “simplifying” regulations aims to reduce compliance costs, without necessarily lowering standards or softening penalties.
If your goal in making a rule is to avoid or reduce some unwanted behavior — for instance, to reduce the amount of toxic pollution people are exposed to — then shifting up or down your pollution standards is a zero-sum tradeoff between your environmental goals and the convenience of manufacturers who produce pollution. Looser rules are worse for environmental safety but better for polluters; tighter rules are better for environmental safety but worse for polluters. It’s a direct tug-of-war between opposed interests.
Reducing the costs of compliance, on the other hand, is positive-sum: it saves money for manufacturers, without increasing pollution levels. Everybody wins. Where possible, you’d intuitively think rulemakers would always want to do this.
There may be fundamental limits on how much you can streamline the process of demonstrating compliance, but there are totally rules in our world that could be made easier to comply with. For instance, why don’t you report your income to the government and have them automatically calculate your income taxes? This is technically feasible; they do it in European countries; and it certainly doesn’t need to reduce the amount of tax revenue the government collects. Everyone would save time “doing their taxes” — why isn’t this a win-win?
Of course, this assumes an idealized world where the only goal of a rule is to get as many people as possible to comply as fully as possible.
You might want compliance costs to be high if you’re using the rule, not to reduce incidence of the forbidden behavior, but to produce distinctions between people — i.e. to separate the extremely committed from the casual, so you can reward the committed relative to everyone else. Costly signals are good if you’re playing a competitive zero-sum game; they induce variance because not everyone is able or willing to pay the cost.
For instance, some theories of sexual selection (such as the handicap principle) argue that we evolved traits which are not beneficial in themselves but are _sensitive indicators_ of whether or not we have other fitness-enhancing traits. E.g. a peacock’s tail is so heavy and showy that only the strongest and healthiest and best-fed birds can afford to maintain it. The tail magnifies variance, making it easier for peahens to distinguish otherwise small variations in the health of potential mates.
Such “magnifying glasses for small flaws” are useful in situations where you need to pick “winners” and can inherently only choose a few. Sexual selection is an example of such a situation, as females have biological limits on how many children they can bear per lifetime; there is a fixed number of males they can reproduce with. So it’s a zero-sum situation, as males are competing for a fixed number of breeding slots. Other competitions for fixed prizes are similar in structure, and likewise tend to evolve expensive signals of commitment or quality. A test that’s so easy anyone can pass it is useless for identifying the top 1%.
On a regulatory-capture or spoils-based account of politics, where politics (including regulation) is seen as a negotiation to divide up a fixed pool of resources, and loyalty/trust is important in repeated negotiations, high compliance costs are easy to explain. They prevent diluting the spoils among too many people, and create variance in people’s ability to comply, which allows you to be selective along whatever dimension you care about.
Competitive (selective, zero-sum) processes work better when there’s wide variance among people. A rule (or boundary, or incentive) that’s meant to optimize collective behavior is, by contrast, looking at aggregate outcomes, and will tend to want to reduce variance between people.
If you can make it easier for people to do the desired behavior and refrain from the undesired, you’ll get better aggregate behavior, all else being equal. These aggregate goals are, in a sense, “democratic” or “anti-elitist”; if you care about encouraging good behavior in everyone, then you want good behavior to be broadly accessible.
**Requiring Permission Raises Compliance Costs**
A straightforward way of avoiding undesired behavior is to require people to ask an authority’s permission before acting.
This has advantages: sometimes “undesired behavior” is a complex, situational thing that’s hard to codify into a rule, so the discretional judgment of a human can do better than a rigid rule.
One disadvantage that I think people underestimate, however, is the chilling effect it has on desired behavior.
For instance:
- If you have to ask the boss’s permission individually for each purchase, no matter how cheap, not only will you waste a lot of your employees’ time, but you’ll disincentivize them from asking for even _cost-effective_ purchases, which can be more costly in the long run.
- If you require a doctor’s appointment every time pain medication is prescribed, to guard against drug abuse, you’re going to see a lot of people who really do have chronic pain doing without medication because they don’t want the anxiety of going to a doctor and being suspected of “drug-seeking”.
- If you have to get permission before cleaning or contributing supplies for a shared space, then that space will be chronically under-cleaned and under-supplied.
- If you have to get permission from a superior in order to stop the production line to fix a problem, then safety risks and defective products will get overlooked. (This is why Toyota mandated that any worker can _unilaterally_ stop the production line.)
The inhibition against asking for permission is going to be strongest for shy people who “don’t want to be a bother” — i.e. those who are most _conscious of the effects of their actions on others_, and perhaps those who you’d most want to _encourage_ to act. Those who don’t care about bothering you are going to be undaunted, and will flood you with unreasonable requests. A system where you have to ask a human’s permission before doing anything is an asshole filter, in Siderea’s terminology; it empowers assholes and disadvantages everyone else.
The direct costs of a rule fall only on those who violate it (or wish they could); the compliance costs fall on everyone. A system of enforcement that preferentially inhibits desired behavior (while not being that reliable in restricting undesired behavior) is even worse from an efficiency perspective than a high compliance cost on everyone.
Impersonal Boundaries
An alternative is to instantiate your boundaries in _an inanimate object_ — something that can’t intimidate shy people or cave to pressure from entitled jerks. For instance:
- a lock on a door is an inanimate boundary on space
- a set of password-protected permissions on a filesystem is an inanimate boundary on information access
- a departmental budget and a credit card with a fixed spending limit is an inanimate boundary on spending
- an electricity source that shuts off automatically when you don’t pay your bill is an inanimate boundary against theft
The key element here isn’t information-theoretic simplicity, as in the debate over simple rules vs. discretion. Inanimate boundaries can be complex and opaque. They can be a black box to the user.
The key elements are that, unlike humans, inanimate boundaries do not punish requests that are refused (even socially, by wearing a disappointed facial expression), and they do not give in to repeated or more forceful requests.
An inanimate boundary is, rather, like the ideal version of a human maintaining a boundary in an “assertive” fashion; it enforces the boundary reliably and patiently and without emotion.
This way, it produces less inhibition in shy or empathetic people (who hate to make requests that could make someone unhappy) and is less vulnerable to pushy people (who browbeat others into compromising on boundaries.)
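To make this concrete, here is a minimal sketch (in Python, with invented names and figures) of a spending limit acting as an inanimate boundary: the decision is a pure function of the budget, so a refusal carries no social penalty, and asking again, or asking more forcefully, gets exactly the same answer.

```python
# A sketch of an "inanimate boundary" on spending: a fixed limit that
# refuses over-budget requests the same way every time, no matter who
# asks or how often. (All names and figures here are illustrative.)

class SpendingBoundary:
    def __init__(self, limit):
        self.limit = limit   # total budget the boundary will allow
        self.spent = 0.0     # running total of approved purchases

    def request(self, amount):
        """Approve a purchase only if it fits within the remaining budget.

        The answer depends only on the numbers: no memory of who asked,
        no penalty for a refused request, and no way to get a different
        answer by repeating the request or pressing harder.
        """
        if self.spent + amount <= self.limit:
            self.spent += amount
            return True    # approved: within the limit
        return False       # refused: same answer for every over-limit ask


if __name__ == "__main__":
    budget = SpendingBoundary(limit=500.00)
    print(budget.request(120.00))  # True  -- fits within the limit
    print(budget.request(450.00))  # False -- would exceed the limit
    print(budget.request(450.00))  # False -- repeating the ask changes nothing
    print(budget.request(300.00))  # True  -- still fits in what remains
```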
In fact, you can get some of the benefits of an inanimate boundary without actually taking a human out of the loop, but just by reducing the bandwidth for social signals. By using email instead of in-person communication, for instance, or by using formalized scripts and impersonal terminology. Distancing tactics make it easier to refuse requests and easier to make requests; if these effects are roughly the same in magnitude, you get a system that selects more effectively for enabling desired behavior and preventing undesired behavior. (Of course, when you have one permission-granter and many permission-seekers, the effects are not the same in aggregate magnitude; the permission-granter can get spammed by tons of unreasonable requests.)
Of course, if you’re trying to select for _transgressiveness_ — if you _want_ to reward people who are too savvy to follow the official rules and too stubborn to take no for an answer — you’d want to do the opposite: have an automated, impersonal filter to block or intimidate the dutiful, and an extremely personal, intimate, psychologically grueling test for the exceptional. But in this case, what you’ve set up is a competitive _test_ to differentiate between people, not a rule or boundary which you’d like followed as widely as possible.
Consensus and Do-Ocracy
So far, the systems we’ve talked about are centralized, and described from the perspective of an authority figure. Given that you, the authority, want to achieve some goal, how should you most effectively enforce or incentivize desired activity?
But, of course, that’s not the only perspective one might take. You could instead take the perspective that _everybody_ has goals, with no a priori reason to prefer one person’s goals to anyone else’s (without knowing what the goals are), and model the situation as a _group_ deliberating on how to make decisions.
Consensus represents the egalitarian-group version of permission-asking. Before an action is taken, the group must discuss it, and must agree (by majority vote, or unanimous consent, or some other aggregation mechanism) that it’s sufficiently widely accepted.
This has all of the typical flaws of asking permission from an authority figure, with the added problem that groups can take longer to come to consensus than a single authority takes to make a go/no-go decision. Consensus decision processes inhibit action.
(Of course, sometimes that’s exactly what you want. We have jury trials to prevent giving criminal penalties lightly or without deliberation.)
An alternative, equally egalitarian structure is what some hackerspaces call do-ocracy.
In a do-ocracy, everyone has authority to act, unilaterally. If you think something should be done, like rearranging the tables in a shared space, you do it. No need to ask for permission.
There might be _disputes_ when someone objects to your actions, which have to be resolved in some way. But this is basically the only situation where governance enters into a do-ocracy. Consensus decisionmaking is an informal, anarchic version of a legislative or executive body; do-ocracy is an informal, anarchic version of a _judicial_ system. Instead of needing governance every time someone acts, in a judicial-only system you only need governance every time someone acts (or states an intention to act) AND someone else _objects_.
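As a rough illustration of that structural difference, here is a short Python sketch (the member names, actions, and helper functions are all invented): under consensus, every action pays a deliberation cost up front; under do-ocracy, governance only runs for the actions someone actually objects to.

```python
# A toy comparison of the two structures. Actions and members are just
# strings; `approves` and `objects_to` stand in for whatever real
# deliberation or dispute resolution a group would use.

def consensus_run(actions, members, approves):
    """Permission first: every action is deliberated before it happens."""
    done = []
    for action in actions:
        if all(approves(m, action) for m in members):
            done.append(action)      # only pre-approved actions happen
    return done

def doocracy_run(actions, members, objects_to):
    """Act first: governance is invoked only when someone objects."""
    done, disputes = [], []
    for action in actions:
        done.append(action)          # no permission step, no friction
        objectors = [m for m in members if objects_to(m, action)]
        if objectors:
            disputes.append((action, objectors))  # resolve just these
    return done, disputes

if __name__ == "__main__":
    members = ["alice", "bob"]
    actions = ["rearrange the tables", "repaint the walls"]
    approves = lambda m, a: not (m == "bob" and a == "repaint the walls")
    objects_to = lambda m, a: m == "bob" and a == "repaint the walls"

    print(consensus_run(actions, members, approves))
    # ['rearrange the tables'] -- the contested action never happens
    print(doocracy_run(actions, members, objects_to))
    # both actions happen; one dispute ('repaint the walls', ['bob']) to settle
```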
The primary advantage of do-ocracy is that it doesn’t slow down actions in the majority of cases where nobody minds. There’s no friction, no barrier to taking initiative. You don’t have tasks lying undone because nobody knows “whose job” they are. The compliance cost for unobjectionable actions is zero.
Additionally, it grants the most power to the most active participants, which intuitively has a kind of fairness to it, especially in voluntary clubs that have a lot of passive members who barely engage at all.
The disadvantages of do-ocracy are exactly the same as its advantages. First of all, any action which is potentially harmful and hard to reverse (including, of course, dangerous accidents and violence) can be unilaterally initiated, and do-ocracy cannot prevent it, only remediate it after the fact (or penalize the agent.) Do-ocracies don’t deal well with very severe, irreversible risks. When they have to, they evolve permission-based or rules-based functions; for instance, the safety policies that firms or insurance companies institute to prevent risky activities that could lead to lawsuits.
Secondly, do-ocracies grant the most power to the most active participants, which often means those who have the most time on their hands, or who are closest to the action, at the expense of absent stakeholders. This means, for instance, do-ocracy favors a firm’s executives (who engage in day-to-day activity) over investors or donors or the general public; in volunteer and political organizations it favors those who have more free time to participate (retirees, students, the unemployed, the independently wealthy) over those who have less (working adults, parents). If you’ve seen dysfunctional homeowners’ associations or PTAs, this is why: they’re run for the personal benefit of those who have nothing better to do than get super involved in the association’s politics, at the expense of the people who are too busy to show up.
The general phenomenon here is principal-agent problems — theft, self-dealing, negligence, all cases where the people who are _physically there and acting_ take unfair advantage of the people who are _absent and not in the loop_, but depend on things remaining okay.
A judicial system doesn’t help those who don’t _know_ they’ve been wronged.
Consensus systems, in fact, are designed to _force_ governance to include or represent all the stakeholders — even those who would, by default, not take the initiative to participate.
Consumer-product companies mostly have do-ocratic power over their users. It’s possible to quit Facebook, with the touch of a button. Facebook changes its algorithms, often in ways users don’t like — but, in most cases, people don’t hate the changes enough to quit. Facebook makes use of personal data — after putting up a dialog box requesting permission to use it. Yet, some people are dissatisfied and feel like Facebook is too powerful, like it’s hacking into their baser instincts, like this wasn’t what they’d wanted. But Facebook hasn’t harmed them in any way they didn’t, in a sense, consent to. The issue is that Facebook was doing things they didn’t reflectively approve of while they weren’t paying attention. Not secretly — none of this was secret, it just wasn’t on their minds, until suddenly a big media firestorm put it there.
You can get a lot of power to shape human behavior just by showing up, knowing what you want, and enacting it before anyone else has thought about it enough to object. That’s the side of do-ocracy that freaks people out. Wherever in your life you’re running on autopilot, an adversarial innovator can take a bite out of you and get away with it _long before you notice something’s wrong_.
This is another part of the appeal of permission-based systems, whether egalitarian or authoritarian; if you have to make a high-touch, human connection with me and get my permission before acting, I’m more likely to notice changes that are bad in ways I didn’t have any prior model of. If I’m sufficiently cautious or pessimistic, I might even be ok with the costs in terms of causing a chilling effect on harmless actions, so long as I make sure I’m sensitive to new kinds of shenanigans that can’t be captured in pre-existing rules. If I don’t know what I want exactly, but I expect change is bad, I’m going to be much more drawn to permission-based systems than if I know exactly what I want or if I expect typical actions to be improvements.