Do The Best Thing
A Googler friend of mine once asked me, “If you had a program that was running slow, what would you do to fix it?”
I said, “Run it through a profiler, see which step was slowest, and optimize that step.”
He said, “Yeah, that’s the kind of thinking style Google optimizes for in hiring. I’m the odd one out because I _don’t_ think that way.”
“That way” of thinking is a straightforward, naive, and surprisingly powerful mindset. Make a list of all your problems, and try to fix the biggest tractable one. Sure, there are going to be cases when that’s not the best possible solution — maybe the slowest step can’t be optimized very much as it is, but if you rearranged the entire program, or obviated the need for it in the first place, your problem would be solved. But if you imagine a company of people who are all drilled in “fix the biggest problem first,” that company would have a systematic advantage over a company full of people who poke at the code at random, or according to their varied personal philosophies, or not at all. Just doing the naively best thing is a powerful tool, powerful enough that it’s standard operating procedure at a company reputed to have the best software engineers in the world.
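To make the profiler answer concrete, here is a minimal sketch of that workflow in Python, using the standard-library cProfile and pstats modules; slow_program and its helpers are hypothetical stand-ins for whatever code you are actually measuring.

```python
# A minimal sketch of "run it through a profiler, find the slowest step, optimize that step".
# slow_program, parse, and transform are hypothetical stand-ins for real code.
import cProfile
import pstats

def parse(data):
    return [line.split(",") for line in data]

def transform(rows):
    # Deliberately naive quadratic duplicate check; a profiler would flag this step.
    return [row for row in rows if rows.count(row) == 1]

def slow_program(data):
    return transform(parse(data))

if __name__ == "__main__":
    profiler = cProfile.Profile()
    profiler.enable()
    slow_program(["a,b", "c,d"] * 500)
    profiler.disable()

    # Sort by cumulative time and read off the top few entries: that is the step to fix first.
    pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```

The output is just a table sorted by cumulative time; the naive move is to read off the top entry and go fix that function.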
There are other heuristics that have a similar spirit.
Making a list of pros and cons, a decision procedure famously used by Ben Franklin and validated by Gerd Gigerenzer’s experiments, is an example of “do the best thing” thinking. You make your considerations explicit, and then you _score_ them, and then you decide.
Double-entry bookkeeping, which was arguably responsible for the birth of modern capitalism, is a similar innovation; you simply keep track of expenses and revenues, and aim to reduce the former and increase the latter. It sounds like an obvious thing to do; but _reliably_ tracking profits and losses means that you can allocate resources to the activities that produce the highest profits. For the first time you have the _technology_ to become a profit-maximizer.
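As a toy sketch of how mechanically simple the tracking is (all account names and amounts below are invented): every transaction debits one account and credits another by the same amount, so the books always balance and profit falls out as a sum.

```python
from collections import defaultdict

# Toy double-entry ledger: every transaction debits one account and credits another
# by the same amount, so the sum of all balances is always zero.
ledger = defaultdict(float)

def record(debit_account, credit_account, amount):
    ledger[debit_account] += amount
    ledger[credit_account] -= amount

# Invented example entries.
record("inventory", "cash", 700.0)        # buy goods
record("cash", "sales_revenue", 1000.0)   # sell them
record("cost_of_goods_sold", "inventory", 700.0)

profit = -ledger["sales_revenue"] - ledger["cost_of_goods_sold"]
print(f"profit: {profit:.2f}")            # 300.00
assert abs(sum(ledger.values())) < 1e-9   # the books balance
```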
The modern craze of fitness-tracking is a “do the best thing” heuristic; you pick a naive metric, like “calories eaten – calories burned”, and keep track of it, and try to push it in the desired direction. It’s crude, but it’s often a lot more effective than people’s default behavior for achieving goals — people who self-monitor diet and weight are more likely to achieve long-term deliberate weight loss.
Deciding to give to the charity that saves the most lives per dollar is another example of “do the best thing” — you pick a reasonable-sounding ranking criterion, like cost-effectiveness, and choose from the top of the list.
Notice that I’m not calling this _optimization_, even though that’s what it’s often called in casual language. Optimization, in the mathematical sense, is about _algorithms_ for maximizing some quantity. “Do the best thing” (DTBT) isn’t an algorithm; it’s what comes before implementing an algorithm. It’s just “pick an obvious-seeming measure of importance, and then prioritize by that.” The “algorithm” may be trivial — just “sort by this metric”. The characteristic quality is picking a metric and tracking it; and, in particular, picking an obvious, straightforward, reasonable-sounding metric.
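To show how trivial the resulting “algorithm” can be, here is a sketch in the spirit of the charity example above; the charity names and cost-per-life figures are invented purely for illustration.

```python
# DTBT as "pick an obvious metric and sort by it".
# The charities and their cost-per-life-saved figures below are invented for illustration.
charities = [
    {"name": "Charity A", "cost_per_life_saved": 4500},
    {"name": "Charity B", "cost_per_life_saved": 1200},
    {"name": "Charity C", "cost_per_life_saved": 30000},
]

# The "optimization" is just a sort on the chosen metric; the real work was choosing it.
ranked = sorted(charities, key=lambda c: c["cost_per_life_saved"])

for c in ranked:
    print(f"{c['name']}: ${c['cost_per_life_saved']:,} per life saved")
```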
Now, there are critics of the DTBT heuristic. “Optimize Everything” can evoke, to some people, a dystopian future of robotic thinking and cold indifference to human values. “Minimize calories”, taken in isolation, is obviously a flawed approach to health. “Maximize GDP growth” is obviously an imperfect approach to economic policy. One can be very skeptical of DTBT because of the complicated values that are being erased by a simple, natural-seeming policy. This skepticism is present in debates over legibility. I suspect that some Marxist critiques of “neoliberalism” are partly pointing at the fact that a _measure_ of goodness (like “GDP growth” or even “number of people no longer living in poverty”) is not _identical with_ goodness as judged by humans, even though it’s often treated as though it obviously is.
The DTBT response is “Yeah, sure, simplifications simplify. Some simplifications oversimplify to the point of being counterproductive, but a lot of them are clearly productive. What people were doing before we systematized and improved processes was a lot of random and counterproductive cruft, not deep ancestral wisdom. Ok, Westerners undervalued traditional societies’ agriculture techniques because they were racist; that’s an admitted failure. Communists didn’t understand economics; that’s another failure. Nobody said that it’s impossible to be wrong about the world. But use your common sense — people shoot themselves in the foot through procrastination and weakness of will and cognitive bias and subconscious self-sabotage all the time. Organizations are frequently disorganized and incompetent and just need some serious housecleaning. Do you seriously believe it’s never possible to just straighten things up?”
Here’s another way of looking at things. Behaviors are explained by a multitude of causes. Some of those causes are unendorsed. You don’t, for example, usually consider “I got a bribe” as a good reason to fund a government program. DTBT is about picking a _straightforwardly endorsable cause_ and making it master. This applies both intrapersonally and interpersonally. “Optimizing for a personal goal” means taking one of the drivers of your behavior (the goal) and setting it over your other internal motivations. “Optimizing for a social outcome” means setting the outcome above all the motivations of the individual people who make up your plan.
In some cases, you can reduce the conflict between the “master goal” and the sub-agents’ goals. Popular vote is one way of doing this: the “master goal” (the choice that gets the most votes) minimizes the sum of differences between the chosen outcome and the preferences of each voter. Free trade is another example: in a model where all the agents have conventional utility functions, permitting all mutually-beneficial trades between individuals maximizes the sum of individual utilities. If your “master goal” is arbitrary, you can cause a lot of pain for sub-agents. (E.g.: governments that tried to ‘settle’ nomadic peoples did not treat the nomadic peoples very well.) If your “master goal” is _universal_, in some sense, if it includes everybody _or_ lets everybody choose, then you can minimize total frustration.
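One way to make the popular-vote claim precise, under assumptions the paragraph leaves implicit (voters’ ideal points lie on a line, and frustration is measured as distance), is the standard fact that the median minimizes the sum of absolute deviations:

```latex
% Sketch: voter i's ideal point is v_i, and their frustration with outcome x is |x - v_i|.
% For an odd number of voters, the total frustration is minimized at the median:
\arg\min_{x} \sum_{i=1}^{n} |x - v_i| \;=\; \operatorname{median}(v_1, \dots, v_n)
```

Under single-peaked preferences, the median option also beats every alternative in a pairwise majority vote (the median-voter result), which is the sense in which the popular choice minimizes total frustration.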
Of course, this isn’t an objectively _universal_ solution to the problem — some people might say “my frustration inherently matters more than his frustration” or “you aren’t properly measuring the whole of my frustration.”
Another way to reduce conflict is to see if there are any illusory conflicts that disappear upon greater communication. This is what “dialogue” and “the public sphere” and “town halls” are all about. It’s what circling is about. It’s what IFS (Internal Family Systems) is about. (And, more generally, conflict resolution and psychotherapy.)
And, of course, once again, this isn’t an objectively universal solution to the problem — there might actually be irreconcilable differences.
The pure antithesis of DTBT would be wu-wei — don’t _try_ to do anything, everything is already fine as it is, because it is caused by human motivations, and all human motivations are legitimate. It would be “conservative” in a way that political conservatives would hate: if the world is going to hell in a handbasket, _let it_, because that’s clearly what people want and it would be arrogant to suppose that you know better.
This extreme is obviously at least as absurd as the DTBT extreme of “All the world’s problems would be solved if people would just stop being idiots and just do the best thing.”
It seems more productive to resolve conflicts by the kinds of “universalizing” or “discourse” moves described above. In particular, to try to discuss _which kinds_ of motivations are endorsable, and argue for them.
One example of this kind of argument is “No, we can’t use the CEO’s new ‘optimized’ procedure, because it _wouldn’t work_ in our department; here’s where it would break.” Among reasonable people, sheer impracticality is pretty much always considered a licit reason not to do something, so a reasonable CEO should listen to this criticism.
Another, more meta example is discussing the merits of a particular kind of motivation. Some people think status anxiety is a legitimate reason to push for egalitarian policies; some don’t. You can argue about the legitimacy of this reason by citing some other shared value — “people with lower relative status are less healthy” appeals to concerns about harm, while “envy is an ugly motivation that prompts destructive behavior” appeals to concerns about harm and virtue.
Geeks are often accused of oversimplifying human behavior along DTBT lines. “Why can’t we just use ‘ask culture’ instead of ‘guess culture’?” “Why can’t we just get rid of small talk?” “Why do people do so much tribal signaling?” Well, because what most people want out of their social interactions is more complicated than a naive view of optimality and involves a lot of subconscious drives and hard-to-articulate desires. What it comes down to is actually a debate about what motivations are endorsable. Maybe some drives, like the desire to feel superior to others, are ugly and illegitimate and should be bulldozed by a simple policy that doesn’t allow people to satisfy them. Or maybe those drives are normal and healthy and a person wouldn’t be quite human without them. Which drives are shallow, petty nonsense and which are valuable parts of our common humanity? That’s the real issue that gets hidden under debates about optimality.
I happen to lean more DTBT than most people, and it’s because I’m fairly Blue in a Spiral Dynamics sense. While the stereotypical Blue is a rigid, conformist religious extremist, the fundamental viewpoint underlying it is the more general notion of “loyalty to Truth” — there are good things and bad things, and one should prefer the good to the bad and not swerve from it. “I have set before you life and death, blessing and cursing: therefore choose life.” From a Blue perspective, some motivations are much, much more legitimate than others, and one should sanction only the legitimate ones. A Blue who values intellectual inquiry doesn’t sanction “saving mental effort” as a valid reason to believe false things; a Blue who values justice doesn’t sanction “desire for approval” as a valid motivation to plagiarize. Some things are just bad and should feel bad. From a Blue perspective, people do “counterproductive” things all the time — choices that bring no _benefit_, if the only benefits we count are the _licit_ ones. (If you counted all motivations as legitimate, then no human behavior would be truly counterproductive, because it’s always motivated by _something_.) And, so, from a Blue perspective, there are lots of opportunities to make the world “more optimal”, by subordinating illegitimate motivations to legitimate ones.
The best way to argue me out of some DTBT policy is to show that it fails at some legitimate goal (it’s impractical, it harms people, etc.). A more challenging way is to argue that I ought to consider some motivation more legitimate than I do. For instance, sex-positive philosophy and evolutionary psychology can attempt to convey to a puritanical person that sexual motivations are legitimate and valid rather than despicable. A flat assertion that I _ought_ to value something I don’t is not going to work, but an attempt to _communicate_ the value might.
I think it would be better if we all moved beyond naive DTBT or simple critique of DTBT, and started trying to put into practice the kinds of dialogue that have a chance of resolving conflicts.