Police Shootings, The Causes of Terrorism, and Other Impasses: We Have the Same Data. Why Can’t We Agree?

Recent tragedies — deaths of black men from what appeared to be excessive force by some police officers, retaliatory killings of randomly selected officers, and mass terror attacks on civilians worldwide — have brought tense questions back to the top of the news rundown. Are some active police officers unfairly prejudiced against black people? What fuels terrorism? Some controversies are matters of preference. But these questions, though they stir strong emotion, are ultimately questions of fact. A person with perfect knowledge could settle these debates.

In this sense, these questions are similar to other societal impasses, like the causes and consequences of climate change or the effect of raising the minimum wage on unemployment. Each of these questions has an objectively correct answer.

I will not attempt to add value to the particulars of these individual discussions. I will focus instead on the general issue of societal stalemate: how is it that we disagree so permanently over seemingly solvable, empirical questions? Our information and intelligence are certainly imperfect, and we have different life experiences. But I also see three fixable problems that keep us from changing our minds as evidence evolves. (My thinking can surely be improved, so please add your comments.) Here they are:

1. Change-Resistant Thinking (as opposed to Fluid Thinking)

Imagine you are evaluating two competing beliefs, Belief A and Belief B. How do you decide between them? One approach would be to begin with the presumption that Belief A is true, and only change your mind if shown overwhelming evidence in favor of Belief B. I call this change-resistant thinking. Change-resistant thinking sounds ridiculous when described in this way, but we formally and purposefully apply this framework in many contexts.

We apply change-resistant thinking in criminal trials. It is the presumption of innocence. We also codify change-resistant thinking into science. When testing the effectiveness of a new drug, the default presumption, or “null hypothesis,” is that the drug is no better than preexisting treatments. Only with overwhelming evidence for the drug’s superiority will we reject our initial position and conclude in favor of the new drug. Even if experimental results suggest a 70% or 80% likelihood that the new drug is superior, our default presumption that the drug is no better than the alternatives can remain intact. Often a confidence level of at least 95% is required to reject the null hypothesis in scientific experiments.
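
To make that 95% bar concrete, here is a minimal sketch in Python of the drug-trial logic, using entirely hypothetical trial numbers of my own invention: presume the new drug is no better, and reject that presumption only when the evidence clears the threshold.

```python
# A minimal sketch (all numbers hypothetical) of change-resistant logic in
# drug trials: presume the new drug is no better (the null hypothesis), and
# reject that presumption only past a 95% confidence bar.
from math import sqrt, erfc

def two_proportion_p_value(success_a, n_a, success_b, n_b):
    """One-sided p-value that group B's success rate exceeds group A's."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return erfc(z / sqrt(2)) / 2  # upper-tail normal probability

# Hypothetical trial: 60/100 recover on the old treatment, 70/100 on the new drug.
p = two_proportion_p_value(60, 100, 70, 100)
print(f"p-value: {p:.3f}")
if p < 0.05:
    print("Reject the null: conclude the new drug is superior.")
else:
    print("Keep the default presumption, even though the data lean the drug's way.")
```

With these particular numbers the data favor the new drug at roughly 93% confidence, yet the change-resistant rule keeps the default presumption intact.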

The graph below shows the results of change-resistant thinking. As long as the data admit any degree of uncertainty, a person who begins with a commitment to Belief A can review the data and keep her view, as can a person who begins with Belief B. Same data, opposite conclusions.

[Figure: Change-Resistant Thinking. Art credit: My wife, McCaughan Andrukonis]

Why is change-resistant thinking appropriate in some contexts? In the criminal justice system, we deem it worse to wrongly convict than to wrongly acquit. In the lab, the environment is so well controlled that any true effect should be definitively observable over a large enough sample. In the chaos of the real world, however, one effect is always entangled with many other contemporaneous and offsetting effects.

Which brings us to the other model of evaluating competing claims or theories, fluid thinking. In a fluid thinking model, we begin from a place of agnosticism or equal consideration between A and B. We move between the beliefs in the direction the evidence indicates, and to the degree that the evidence indicates. If the accumulated body of evidence suggests a 60% probability that Belief B is true, a fluid thinker will lean toward Belief B with 60% confidence, while allowing a 40% chance that she is wrong. Fluid thinking is the decision-making framework we apply in civil trials.

[Figure: Fluid Thinking]
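
Fluid thinking maps naturally onto Bayes’ rule. The sketch below, with illustrative likelihood numbers I chose for the example, starts from a 50/50 stance and shifts confidence exactly as far as each piece of evidence warrants — in either direction.

```python
# A minimal sketch of fluid thinking as Bayesian updating: start agnostic
# between Belief A and Belief B, then shift confidence in proportion to
# each (hypothetical) piece of evidence.
def update(prob_b, likelihood_given_b, likelihood_given_a):
    """Bayes' rule: revise P(B) after seeing evidence with the given likelihoods."""
    numerator = prob_b * likelihood_given_b
    return numerator / (numerator + (1 - prob_b) * likelihood_given_a)

prob_b = 0.5  # agnostic starting point
# Each pair: P(evidence | B is true), P(evidence | A is true) -- illustrative values.
for lik_b, lik_a in [(0.8, 0.6), (0.7, 0.7), (0.3, 0.5)]:
    prob_b = update(prob_b, lik_b, lik_a)
    print(f"Confidence in Belief B: {prob_b:.0%}")
```

Evidence that is equally likely under either belief (the middle pair) moves nothing, and contrary evidence (the last pair) pulls confidence back down. The fluid thinker’s confidence simply tracks the evidence.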

When facing mutually exclusive propositions, the first choice is to decide how we will decide: will we apply a change-resistant model or a fluid model? If our goal is simply to align our beliefs with what the evidence indicates, the fluid-thinking framework serves us best.

If we do decide to apply change-resistant thinking, we should predetermine the standard of proof we will require in order to reject our default position. I mentioned that scientific researchers often require a 95% confidence level before rejecting their default presumption. My sense is that people often require that level of certainty, or even greater, before reluctantly revising their opinions. We should ask ourselves whether the standard of proof we require of some unfamiliar claim is the same standard we required when we initially accepted our default beliefs.

2. Information Processing Biases

Change-resistant thinking would entrench our ideology even if we were entirely rational in how we processed new information. In reality, we tend to scramble information in order to avoid the mental discomfort of having our preexisting worldview challenged.

A broad term for these biases is motivated reasoning — the use of our brains in service of our emotions. Studies of brain activity show motivated reasoning to be prevalent across the political spectrum. When individuals are given a reasoning task in which they have no emotional stake in the conclusion, neuroimaging shows that they use different brain regions than when processing information about a political candidate toward whom they have strong feelings. The upshot is that individuals converge on judgments that minimize negative emotion and maximize positive emotion. Some interesting, specific biases include:

Ad Hoc Hypotheses

An ad hoc hypothesis is a hypothesis “added to a theory to save it from being falsified” by seemingly contradictory evidence. When researchers “adjust for” or “control for” certain factors or “exclude” outliers from their study results, be wary. If the researcher made such adjustments only after being disappointed by the unadjusted results, these are ad hoc hypotheses. In an EconTalk episode noting that many psychological effects reported in published research can’t be reliably reproduced, economist Russ Roberts and psychology professor Brian Nosek discuss this concept without using the term. Consider this illustration:

A psychologist hypothesizes that children of divorced parents are less likely to want to marry than children of married parents. She surveys a sample of single adults. To her surprise, 85% of the divorced-parents group say they want to marry someday, while just 70% of the married-parents cohort say the same. She brainstorms explanations for this counterintuitive result. She observes that the married-parents group contains a higher concentration of young adults, a demographic less interested in marriage. She decides to exclude this demographic from her data. With this adjustment, the results are closer to her expectation: 90% of the divorced-parents group now say they want to marry versus 85% of the married-parents group. Still seeking to close the gap, she sees that some of the married-parents respondents noted that their parents fought a lot. She deems these “unhappy marriages,” and reclassifies them into the divorced-parents group. She tabulates the results of the happily-married-parents group versus the divorced-and-unhappily-married parents group. Finally, she gets the result she expected: 90% of the happily-married-parents group want to marry, versus just 75% of the other group. She considers no further adjustments, and declares that her data support her theory. Had she continued to scan the variables, she might have noticed that the happily-married-parents group contained a different concentration of men versus women, or religious backgrounds that emphasize marriage, or other factors that might have swung the outcome.

The point is not that a researcher should not explore the data to refine a theory, but that the data used to generate a theory must be separate from the data used to confirm it. Otherwise, a researcher can keep shaking the dice — adjusting and unadjusting the sample for every plausibly explanatory factor — until the data appear to show the desired result.
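
To see how much “shaking the dice” can distort results, here is a small simulation sketch, with made-up numbers, of a world where parental divorce truly makes no difference. A simulated researcher tries a series of post hoc sample adjustments and stops at the first one that “confirms” the theory.

```python
# A minimal sketch of why post hoc adjustments work: simulate a world where
# divorce makes no difference, then let a hypothetical researcher keep
# adjusting the sample until some version of the data shows the desired gap.
import random

rng = random.Random(1)
TRUE_RATE = 0.8     # in this simulated world, everyone wants to marry at the same rate
GROUP_SIZE = 100
ADJUSTMENTS = 20    # plausible post hoc exclusions/reclassifications to try
TRIALS = 2000

confirmed = 0
for _ in range(TRIALS):
    divorced = [rng.random() < TRUE_RATE for _ in range(GROUP_SIZE)]
    married = [rng.random() < TRUE_RATE for _ in range(GROUP_SIZE)]
    for _ in range(ADJUSTMENTS):
        # Each "adjustment" excludes a different arbitrary 40% of respondents,
        # standing in for dropping "young adults", reclassifying "unhappy
        # marriages", and so on.
        sub_d = rng.sample(divorced, 60)
        sub_m = rng.sample(married, 60)
        if sum(sub_m) / 60 - sum(sub_d) / 60 >= 0.10:
            confirmed += 1  # the researcher stops here and reports success
            break

print(f"'Confirmed' the false theory in {confirmed / TRIALS:.0%} of simulated studies.")
```

Even though no real effect exists, a meaningful fraction of the simulated studies end in “success” — far more than a single, unadjusted comparison would produce.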

Special Pleading

Special pleading refers to asymmetric scrutiny in search of a particular outcome. In the ad hoc hypotheses example above, the reclassification of unhappy marriages into the divorced-parents group was special pleading; no comparable consideration was given to reclassifying any other data.

Cherry Picking

Cherry picking means paying or calling attention to data that support one’s view while consciously or subconsciously avoiding data that conflict with it. It is a form of confirmation bias, which I mentioned in an earlier post, Which News Sources Should You Follow?

Overweighting Personal Experience

People tend to overweight their personal experience when estimating probabilities. We do not need to, and should not, consider the personal experiences of the few people we happen to know when evaluating a proposition on which large-scale data exist. Data from a proper, representative sample already reflect the kinds of outcomes you are aware of, in their proper proportion to all other observed outcomes.
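
A quick sketch of why this matters, with hypothetical numbers: suppose the true rate of some outcome is 50%. A “sample” of the five people you happen to know will often land far from the truth; a representative survey of 1,000 almost never will.

```python
# A minimal sketch: a handful of acquaintances is a noisy probability
# estimate, while a large representative sample is not. All numbers here
# are hypothetical.
import random

rng = random.Random(0)
TRUE_RATE = 0.5
TRIALS = 10_000

def miss_rate(sample_size, tolerance=0.2):
    """How often a sample's observed rate lands 20+ points from the true rate."""
    misses = 0
    for _ in range(TRIALS):
        observed = sum(rng.random() < TRUE_RATE for _ in range(sample_size)) / sample_size
        if abs(observed - TRUE_RATE) >= tolerance:
            misses += 1
    return misses / TRIALS

print(f"5 acquaintances miss by 20+ points:   {miss_rate(5):.0%}")     # about 38% in expectation
print(f"Survey of 1,000 misses by 20+ points: {miss_rate(1000):.0%}")  # essentially never
```

The small sample is not wrong data — it is just far too noisy to privilege over the large one.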

3. Prior Assumptions

Prior assumptions can cause two people to interpret the same statistic to mean opposite things. Consider metrics showing that people of some ethnic groups are more likely to be killed by police than others.

To someone who considers it unlikely that police are racially biased, the only thing these statistics can mean is that police encounter threats warranting lethal force at different frequencies from people of different ethnicities. To someone who believes that people of all ethnicities are equally likely to present police with such threats, the only thing these statistics can mean is that police are racially biased.
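
In Bayesian terms, the two observers share the data but not the prior. A minimal sketch, with illustrative likelihoods I chose for the example, shows how the same statistic yields nearly opposite conclusions depending on the starting assumption:

```python
# A minimal sketch of how prior assumptions split interpretations: observers
# apply Bayes' rule to the same (hypothetical) statistic but start from
# different priors about whether police are biased.
def posterior(prior, lik_if_bias, lik_if_no_bias):
    """P(bias | statistic) given a prior and the likelihood of the statistic."""
    num = prior * lik_if_bias
    return num / (num + (1 - prior) * lik_if_no_bias)

# Shared assessment: the observed disparity is somewhat more probable in a
# world with bias than in one without (illustrative likelihoods).
LIK_IF_BIAS, LIK_IF_NO_BIAS = 0.7, 0.4

for prior in (0.05, 0.50, 0.95):
    print(f"Prior P(bias) = {prior:.0%} -> posterior = "
          f"{posterior(prior, LIK_IF_BIAS, LIK_IF_NO_BIAS):.0%}")
```

The data nudge every observer in the same direction, but a strong enough prior keeps the conclusions far apart — which is why surfacing our priors matters.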

In a complicated world, where we cannot isolate variables as in a laboratory, we need to identify and evaluate our own prior assumptions.

What Can Be Done?

The world is not a controlled laboratory. If we are holding out for perfect, absolutely indisputable evidence, we will likely never change our minds about anything. This does not mean that we should quit searching for objective truth, or that both parties to a debate always have equal evidence on their side. We can maximize our chances of seeing things as they are by committing to a fluid-thinking model, resisting information processing biases, and reevaluating our prior assumptions.
