We all say we admire honesty. But whenever we actually have to choose, we pick the person who tells us what we want to hear. In economic policy especially, that means we can get hurt when we pick promises over prudence.
Macroeconomists need to imagine what causes what, and then choose measures that allow those causal relations to be identified and compared. I am reminded of the famous Josh Billings quip: “I honestly beleave it iz better tew know nothing than two know what ain’t so.” (Spelling original. This quote is almost always misattributed, often to Mark Twain; that’s actually delightful, when you think about it).
This sentiment dates back at least as far as Plato’s Apology, where Socrates claimed he was wise because he believed that he knew nothing, while others were ignorant even of their own ignorance. But my own modest experience, both in academics and in politics (I have run for elective office several times now), indicates that the value judgment implied in the Billings aphorism is simply incorrect.
“Knowing things that just ain’t so” is the heart of the modeling enterprise. All models are false, by construction, in the sense that the model intentionally abstracts from a prohibitively complex reality to allow the representation of relationships, and the simulation of different policy alternatives, to determine which is “better” in a given situation. That’s fine as long as we realize that we are using a model, instead of believing the model and pretending we live inside the reality that it projects, where we have imagined that one variable “causes” another.
The problem is most acute when it is time to make policy recommendations, of course. Those of us who use models, but who know that the “predictions” of the models are likely misleading and the world is more complex, are at an extreme disadvantage. Consider two situations:
- Economist trying to get tenure in an academic job: Is it better to know we don’t actually know anything, or is it better firmly to believe things that “just ain’t so”? Those who recognize the limits of our ability to forecast financial crises or fine-tune the business cycle are unlikely to be able to publish much in journals. But folks who make predictions to the second decimal place (“Growth will be 2.61 percent in the second quarter, according to the best estimates”) are published and promoted at much higher rates. It is far better, in fact, to know—or pretend to know—things that just ain’t so.
- Candidate seeking political office: Is it better to say that these problems are hard, and that most things that the government can do will likely make matters worse? Believe me, I’ve tried that, and it gets you quickly ignored. The way to win public office is to pretend that all problems are simple, and all solutions are obvious. Sure, sometimes my solutions fail, but that’s because you people didn’t do enough of the things that I suggested. Next time, if we just run a bigger deficit, or impose much stricter regulations, things will be better. Vote for me!
The Dynamic Problem: It WAS So, But Now It Ain’t
Macroeconomists measure aggregate phenomena to discover what appears to be a regularity. But measures, such as “inflation,” do not have constant relations to other aggregates, such as “unemployment.” Knowing what ain’t so really can hurt us.
The most famous theory connecting “the” inflation rate and “the” unemployment rate is called the Phillips Curve. (I’m using quotes around “the” because prices differ across sectors, and unemployment differs across sectors, so measuring these by a single rate is dumb, but that’s what the models do). The Phillips Curve is an empirically observed relation between changes in the price level and changes in how many people are employed; so far, so good.
But the empirical correlation morphed into a policy tool that could be used to fine-tune the economy. If the unemployment rate is “too high,” we can increase employment by expanding the money supply and raising the price level. Or if the price level is growing too rapidly, we can “cool off” the economy at the cost of some temporary increases in unemployment. “Knowing” about the Phillips Curve helped two generations of economists get tenure and helped two generations of Progressive politicians sound like they were smart and active and would make good elected leaders.
The problem is the general difficulty that was pointed out by Charles Goodhart, in a maxim that has come to be known as Goodhart’s Law: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” When policy-makers attempted to use inflation to reduce unemployment in the 1970s, they were successful in increasing the price level, but unemployment stayed stubbornly high, leading to the “unexpected” result called “stagflation.” But it was really the result of widespread belief in something that just wasn’t so.
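The collapse of the tradeoff can be sketched in a few lines of Python, using the standard expectations-augmented story: unemployment responds only to *unexpected* inflation, and expectations catch up when policy tries to exploit the regularity. Everything here is a stylized illustration of my own construction; the parameters and the adaptive-expectations rule are assumptions for demonstration, not estimates.

```python
# Illustrative sketch of Goodhart's Law applied to the Phillips Curve:
# unemployment responds only to *unexpected* inflation, and expectations
# adapt over time. All parameters are stylized assumptions, not estimates.

U_NATURAL = 5.0   # assumed "natural" unemployment rate (percent)
ALPHA = 0.6       # assumed response of unemployment to an inflation surprise
LAMBDA = 0.5      # assumed speed at which expectations catch up

def simulate(target_inflation, periods=30):
    """Hold inflation at a fixed target and track unemployment over time."""
    expected = 0.0                      # the public's expected inflation
    path = []
    for _ in range(periods):
        surprise = target_inflation - expected
        # Short-run tradeoff: only surprise inflation lowers unemployment.
        unemployment = U_NATURAL - ALPHA * surprise
        path.append(unemployment)
        expected += LAMBDA * surprise   # adaptive expectations: the "law" erodes
    return path

path = simulate(5.0)
# Early periods: the tradeoff "works" (unemployment dips below its natural
# rate). Once expectations catch up, unemployment drifts back to the natural
# rate while inflation stays high -- the stagflation pattern of the 1970s.
```

The point of the sketch is that the correlation was real, but it held only so long as the inflation was unanticipated; the moment policy-makers leaned on it, the behavior generating it changed.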
Examples abound. Suppose well-educated people do well on a certain kind of test. If we then use that test to evaluate teachers, teachers start teaching to the test, and the resulting level of educational achievement falls rather than rises, because we are trying to exploit an observed statistical relation for control purposes. College rankings used a number of metrics that appeared related to the quality of the school, but when the colleges began focusing on those numbers instead of on the business of education, the quality of instruction and achievement fell off quickly.
I am stumped by what a solution to this problem might look like. We ignore people who (rightly) point out that simple solutions to political and economic problems make things worse, not better. We vote for, and reward, charlatans who pretend to know the answers, and zealots who actually believe their own superficial galimatias. Ultimately, it’s a collective action problem: it would be better for society if our leaders were humble and honest about how little they actually know. But it’s better for the candidates for leadership if they pretend to be committed to a whole dog’s breakfast of truths that just ain’t so.