Anomalies Wanted: Models must be accountable to reality

Today, I’m posting something I wrote earlier this week. It is an adaptation of work that my long-time collaborator and friend, Clayton Christensen, and I did together. (It was his birthday Monday, 4/6, so this topic drifted to the top of my mind.) To be clear, this is not the original, but an adaptation of our joint work.

Anomalies Wanted: Models must be accountable to reality

In 1994, the US passed the Violent Crime Control and Law Enforcement Act. At the time, there was a belief, based on observational studies, that longer sentences would deter violent crime. Since then, the US has become the world leader in incarceration rate (655 per 100,000); the collective cost of incarceration has roughly quadrupled (from ~$16B to ~$70B in direct state expenditures, not counting federal expenditures or the costs of healthcare, courts and parole systems, and increased policing); and the payoff has been modest. (See graph below. Source data are here: http://www.ucrdatatool.gov/Search/Crime/State/StatebyState.cfm)

[Graph: US violent crime rate over time, from the UCR source data above]

As you can see from the graph, there are two immediate problems with the 1994 Violent Crime Act. First, the downward trend in violent crime predates the Act. Second, this was an incredibly expensive intervention, in both financial and human terms, given its modest outcomes.

The biggest problem, though, isn’t just that the study was wrong; it’s that the resulting policy was never held accountable to updated information.

Models can end up in what I refer to as a modeling “cul-de-sac.” Teacher evaluations produced by a number of companies, for example, are often “black boxes” that attempt to estimate teacher quality mostly by comparing before-and-after pictures of the students in their classes, using “objective” measures like standardized tests.

One of the many problems with these evaluations is that it’s not possible to tell when they are wrong. Many teachers receive wildly different scores from year to year. Is it really plausible that a teacher’s performance varies that much? Of course not. But the anomaly is ignored, because the models are developed and updated without any external check on their soundness.
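One simple external check is worth making concrete. If a teacher’s underlying quality is reasonably stable, then the same teachers’ scores should correlate across years; a correlation near zero (or negative) flags the model itself as suspect. A minimal sketch, using entirely made-up scores for five hypothetical teachers:

```python
# Hypothetical external check on a "black box" teacher-evaluation model:
# do the same teachers' scores correlate from one year to the next?

year1 = [72, 85, 60, 90, 78]   # made-up scores for five teachers, year 1
year2 = [88, 64, 91, 70, 75]   # made-up scores for the same teachers, year 2

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(year1, year2)
# A model whose scores swing wildly year to year produces |r| far from +1,
# which should prompt scrutiny of the model, not just of the teachers.
print(round(r, 2))
```

The point is not the arithmetic but the habit: building a cheap test like this into the evaluation pipeline gives the model something external to be wrong against.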

The reason you and I rely on weather forecasts, often in 15-minute increments, even though we complain about them periodically, is that weather models have continued to improve through their unavoidable clash with reality. If today’s model predicted 57 degrees at 10:00 AM but the actual temperature was 58 degrees, the model can be improved.
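That feedback loop can be sketched in a few lines. This is a toy illustration with invented numbers, not how operational weather models actually work: each forecast is compared with the observed outcome, and the running error is fed back to correct the next prediction.

```python
# Toy sketch of a model made accountable to reality: track the gap between
# prediction and observation, and fold it back into the next forecast.

forecasts = [57.0, 61.0, 55.0, 59.0]   # hypothetical predicted temps (°F)
observed  = [58.0, 62.5, 55.5, 60.0]   # hypothetical actual temps (°F)

# Running bias: on average, how far low (or high) the model has been.
errors = [obs - pred for pred, obs in zip(forecasts, observed)]
bias = sum(errors) / len(errors)

raw_next_forecast = 56.0
corrected = raw_next_forecast + bias   # the anomaly improves the prediction
print(round(corrected, 2))
```

A model in a “cul-de-sac,” by contrast, never runs this comparison, so its errors accumulate invisibly.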

With credit to Karl Popper, this is the central insight of Clay’s and my work on this issue: anomalies for our models or theories are the most powerful engine for progress. What distinguishes science from superstition is that scientific claims can, in principle, be falsified.

Finding anomalies (and taking them seriously—that is, not casting them aside as “outliers”) for our statistical, intuitive or causal models also helps us to understand the conditions and boundaries of their application. Without the “failure” of the Michelson-Morley experiments, Einstein wouldn’t have worked on the “puzzles” about the speed of light. Without Brahe’s careful examination of the movement of planets, Kepler would never have scrapped “epicycles” and advanced the theory of elliptical orbits.

The lesson for all of us right now, thinking about models of the spread of COVID-19, is that our mental models and the statistical models of what’s happening must be susceptible to outcomes contrary to prediction. And we should treat those anomalies not as unwelcome intruders, but as treasures that can drive better predictions and deeper insights.

“Anomalies Wanted: Models must be accountable to reality” and our other posts are constructed based on the principles we teach in our live, online Innovation Science Bootcamp.

Not sure yet? Bring a business problem to our Free 1:1 Clinics, and see for yourself.
