Teaching in the smartphone age means students are never more than a few seconds away from the dreaded “fact check.” In Econ 101, I frequently field questions about studies showing that the minimum wage does not generate unemployment. Such studies have become something of a cottage industry since Card and Krueger (1994), a landmark study comparing fast-food employment in New Jersey and Pennsylvania that purported to demonstrate no harmful effects on employment from the minimum wage.
However, it would be short-sighted to conclude from such studies that the minimum wage generates no job loss. In some cases, even well-designed empirical studies can obscure the presence of disemploying effects.
I’m not against empirical studies—I think there should be more of them. But empirical work is complex, and it ought always to be guided by theory. I hope that economists think more carefully about how empirical work can fail to show job loss where it is, in fact, present. High-octane labor economists have been doing just that in recent years. David Neumark, William Wascher, and Jeffrey Clemens are exemplars.
Meanwhile, skilled economic communicators like Don Boudreaux, Robert Murphy, and Steven Landsburg have been patiently dissecting every last argument for the minimum wage. They’ve devised countless new thought experiments, analogies, and parables to convey the consequences of price floors. In many cases, they’ve also pointed out the flaws in empirical studies, but I haven’t seen a one-stop shop where these problems are succinctly described.
Here are five common reasons why minimum wage studies might fail to find an employment effect.
1. Non-Wage Margins of Adjustment
Price controls can’t stipulate every aspect of an exchange. Usually, the only contractual term they alter is price. Market participants are free to change other margins of the exchange, and the disequilibrium created by a (binding) minimum wage gives them an incentive to do so.
Gordon Tullock offered the following famous example. Imagine factory workers on a hot summer day. The owner gets the bright idea of cutting costs by shutting off the AC. Before long, the workers begin complaining. If he wishes to retain these workers, he’ll likely respond by flipping the AC back on—he doesn’t want to lose these laborers to the employer across town who offers better working conditions.
So how does the minimum wage alter this calculus? If it’s binding, it transforms a market-clearing situation, in which wages move toward equating quantity supplied and quantity demanded, into one of surplus. And a labor market surplus shifts power from sellers (laborers) to buyers (employers). A surplus of labor means a buyer’s market. Employers can pick and choose, and their offers on margins other than the wage needn’t be as attractive as they had been.
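The move from market-clearing to surplus can be sketched with a hypothetical linear labor market. The curves and numbers below are illustrative assumptions, not estimates of any real market:

```python
# Hypothetical linear labor market (quantities in workers, wage in dollars):
#   demand: Qd = 100 - 4w    supply: Qs = 20 + 4w
def demand(w):
    return 100 - 4 * w

def supply(w):
    return 20 + 4 * w

# Market-clearing wage: quantity supplied equals quantity demanded.
w_eq = 10
assert demand(w_eq) == supply(w_eq) == 60

# A binding minimum wage above w_eq creates a surplus: more workers
# seeking jobs at that wage than employers wish to hire.
w_min = 12
surplus = supply(w_min) - demand(w_min)
print(surplus)  # 68 - 52 = 16 workers who want jobs but won't find them
```

With sixteen would-be workers lined up for every vacancy in this toy market, the "feel free to leave" response in the AC story becomes credible.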
Now when the plant shuts off the AC and the workers begin complaining, the owner metaphorically responds: “You don’t like it here? Feel free to leave. There are a hundred other workers who will take your place tomorrow.” The labor supply curve is at work to the employer’s advantage. More workers are entering this labor market due to the minimum wage (what economists refer to as the “extensive margin”). Notice something else: It will be harder for workers to find alternative employment precisely because a labor surplus prevails. As a result, the workers are less likely to leave and seek another job.
It’s therefore possible that a.) the total number of laborers employed remains unchanged and b.) employers restore profitability by cutting their electricity bill. All of this is enabled by the “power” the minimum wage affords these employers.
Since economics is about tracing out the consequences of actions as far as possible, let’s go one step further. Suppose this employer has turned off the AC and suppose that many other employers have followed his cost-cutting lead. Taken together, they reduce the demand for electricity. In turn, this dampens the demand for all the workers who produce electricity. If this change is large enough, it’s easy to imagine that some of them lose their jobs as electricity producers adjust their production processes. There it is—unemployment caused by the minimum wage, but in a way so indirect that empirical analysis will be helpless to detect it.
Where would an empirical researcher begin looking to find this sort of unemployment? After all, my example used electricity, but this need not be where the relevant adjustment occurs (perhaps the employers stopped providing coffee in the lounge, for instance).
There are a few other ways these sorts of non-wage adjustments commonly manifest in labor markets, however. The employer can demand more and/or better work to justify the higher wages he is paying. The logic for why he can get away with this is identical to the AC example above. Practically, this may take the form of the employer trying to cut down on shirking, which is a “perk” of virtually all jobs (you’re not literally “laboring” for eight straight, uninterrupted hours on a workday). Putting it that way helps bring the parallel with the AC example into focus. The employer can take away your AC or he can take away your ability to shirk or some combination thereof.
One reason it’s so important to emphasize these adjustments is that they’re hard to detect with any empirical technique. For one thing, it’s impossible to know where to look: some firms will adjust on one margin, others on another. For another, it’s impossible to predict the time frame over which these marginal adjustments occur.
In what sense can we call these adjustments “job loss” if they leave the total number of workers unchanged? Well, part of a worker’s compensation has been “lost,” so it constitutes job loss in that sense. Same work, less pay. Or more work, same pay. See Jeffrey Clemens for an extensive examination of these adjustments in the face of the minimum wage.
2. Cutting Hours Rather Than Workers
Let’s stick with the margins of adjustment for a while longer. When we show a supply and demand diagram for a labor market in Econ 101, students and professors alike commonly assume that the x-axis depicts the “quantity of workers.” In a sense it does, but workers are more divisible than they might appear at first glance. You can actually hire one-fourth of a worker: just hire her for twenty-five percent of the time you employed her before. The downward-sloping demand curve is best thought of as a “demand-for-labor-hours-per-unit-of-time” curve.
In other words, employers may adjust to a minimum wage by slashing employee hours, even without reducing the number of workers on the payroll. It’s common for economists to discuss how kiosks can substitute for low-skilled workers in the fast-food context. But notice that this point is consistent with a.) maintaining the size of the workforce and b.) simply having laborers work fewer hours. When companies install kiosks, they don’t need as many (human) hands on deck at any given moment. Instead of the teenage fast-food workers coming in every day, they might rotate, each coming in every other day.
Steven Landsburg, in a comment here, notes something even subtler about underemployment. To cut to the punchline: it’s possible that increasing the minimum wage can increase the total number of workers companies hire. This result is consistent with the law of demand because the higher minimum wage still decreases the total number of hours purchased.
Here’s how it works: Suppose that before the minimum wage hike a teenager works the store from 11 am to 7 pm. His busy hours are noon and six. For the rest of the time, he mostly stares at his phone. With an increase in the minimum wage, the store owner no longer tolerates such shirking (see above). Rather than monitoring this worker (there’s not much for him to do anyway, so the benefits of monitoring are low), the owner simply closes the store during the slow hours. Finally, the owner rearranges his workforce a bit. He hires a worker for the noon hour and the six o’clock dinner hour, and he’s closed in between. More than likely, it’s not the same worker for both hours, so the total number of people he’s hired increases while the total number of labor hours purchased falls. “Job loss,” of sorts.
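Landsburg’s scenario can be made concrete with hypothetical numbers. The shift lengths below are assumptions chosen purely for illustration:

```python
# Hypothetical illustration of Landsburg's point: after a minimum wage hike,
# headcount can rise even while total labor hours purchased fall.

# Before the hike: one teenager staffs the store 11 am to 7 pm, five days a week.
workers_before = 1
hours_before = 8 * 5  # 40 labor hours per week

# After the hike: the store opens only for the noon rush and the dinner rush,
# with a different worker hired for each (assume 2-hour shifts).
workers_after = 2
hours_after = 2 * 2 * 5  # 20 labor hours per week

assert workers_after > workers_before  # more people on the payroll...
assert hours_after < hours_before      # ...but fewer labor hours purchased
```

A study counting only the number of employed workers would score this as a gain, even though the quantity of labor demanded fell by half.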
3. Anticipation
Entrepreneurs, to be successful, must be forward-looking. They’re in the business of anticipating future states of affairs and arranging production in the present based on their forecasts. This point applies to unhampered markets every bit as much as it applies to forecasting how policy will impact profitability. For instance, in a free market, entrepreneurs must anticipate how suppliers’ cost changes will impact their own production processes, sometimes years down the road.
Adding intervention doesn’t change this basic point; it only makes things more complex for entrepreneurs. A minimum wage hike is an increase in a producer’s costs, no less than a rise in the price of any other input. Businesses that anticipate minimum wage increases may prepare for them by shifting to kiosks or investing in other capital goods well in advance of the law going into effect.
Thus, much depends on the timeframe over which a study examines employment changes. Once more, Card and Krueger (1994) is illustrative. The authors first measured employment just a few weeks before the hike went into effect. Yet minimum wage increases are typically announced by legislatures well before they take effect, to say nothing of the never-ceasing national debate that accompanies this issue. All firm owners are aware that future increases are a distinct possibility. In short, employment loss can come prior, even years prior, to the timeframe a paper examines.
4. The Second Law of Demand
On the flipside of anticipation, we ought also to think about the long run, after a minimum wage hike becomes law. Not all adjustments must come before the increase is enforced; some may come after. But nothing in economic theory tells us how long this adjustment period is, and it will likely differ from industry to industry, and even firm to firm.
Armen Alchian emphasized what he called “The Second Law of Demand.” As time passes, the price elasticity of demand increases, other things equal. To my mind, this is more than an empirical observation. It’s rooted in reasoning about the costliness of finding substitutes. Determining which substitutes to use for labor, how to re-arrange production, and the like are entrepreneurial, trial-and-error decisions that take time. Even when the prices of consumer goods change, it can take buyers a period to discover suitable substitutes. Production, being more complex, usually takes longer.
A classic example comes from 1950s America. Two oft-cited occupations were casualties of that decade’s minimum wage increases. First, movie theater ushers, who escorted patrons to their seats to help them avoid stumbling in the pitch-black darkness. Second, elevator operators, who worked the controls to take visitors to the floors they requested. Ushers and manual elevator operators were low-productivity workers whose labor was no longer profitable in the wake of higher minimum wages.
Though these professions ring archaic to our ears now, these workers didn’t all lose their jobs the day the increase went into effect. It took time for innovators to devise capital goods substitutes (those colored light strips along the edge of the floor in theaters and automatic elevators) which ultimately replaced these workers.
There’s simply no way to know ahead of time how long such an adjustment process might take, which firms will be most affected, and so on. This means it’s impossible to know how long after the legal change a study must look to capture all the resulting employment effects. But the longer that window becomes, the more opportunities there are for intervening events to occur, events that may change employment in either direction. The data get noisy.
5. The Timing of Minimum Wage Hikes
When do legislatures pass minimum wage increases? Most likely, not in the troughs of a depression. At least some politicians know that the minimum wage can cause job loss. Some minimum wage advocates will acknowledge that (some) job loss is an acceptable trade-off in return for higher incomes for (some) workers. But being seen as responsible for causing that job loss by supporting a minimum wage increase may be political suicide. This suggests that there are better and worse times, from a politician’s perspective, for tinkering with the minimum wage.
Hiking the minimum wage during boom periods is more politically palatable because favorable macroeconomic conditions can mask some of the resulting unemployment. During the boom, overall prices and wages are rising, which means the real (inflation-adjusted) minimum wage rate is falling. That, in turn, means it’s less binding, and there will be less job loss than otherwise.
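The erosion of the real minimum wage during a boom is simple arithmetic. In the hypothetical below, the nominal rate and the inflation rate are assumed for illustration only:

```python
# Hypothetical illustration: a fixed nominal minimum wage of $15, eroded by
# an assumed 4% annual inflation rate during a boom.
nominal_min_wage = 15.00
inflation_rate = 0.04

real_min_wage = nominal_min_wage
for year in range(1, 4):
    real_min_wage /= (1 + inflation_rate)
    print(f"Year {year}: real minimum wage = ${real_min_wage:.2f}")
# After three years the floor is worth roughly $13.33 in base-year dollars,
# so it binds fewer workers and causes less job loss than at passage.
```

The faster wages and prices rise, the faster the legislated floor sinks below the market-clearing wage for more and more workers.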
Therefore, to capture the total effects of minimum wage legislation on jobs, it’s necessary to deploy counterfactual reasoning. Yes, unemployment is low in a boom. But unemployment would have been even lower but for the minimum wage increase. If a study simply compares “before” and “after,” where “before” is a “normal” period or even a downturn while “after” is a boom, it will be difficult to accurately assess the impact of the minimum wage on job loss.
If it hasn’t been done already, someone should overlay the dates of minimum wage hikes on NBER recession data, as reported by the St. Louis FRED database. My prediction is that hikes tend to occur during booms, rarely during recessions.
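As a minimal sketch of that exercise, one could check each hike date against the NBER recession windows. The handful of federal hike dates and recession windows below are well-known public facts, but they are a small illustrative sample, not the complete series a real study would pull from FRED:

```python
from datetime import date

# A few NBER recession windows (start, end), illustrative sample only.
recessions = [
    (date(2001, 3, 1), date(2001, 11, 30)),
    (date(2007, 12, 1), date(2009, 6, 30)),
    (date(2020, 2, 1), date(2020, 4, 30)),
]

# A few federal minimum wage hike effective dates, illustrative sample only.
federal_hikes = [
    date(1997, 9, 1),   # $5.15
    date(2007, 7, 24),  # $5.85
    date(2008, 7, 24),  # $6.55
    date(2009, 7, 24),  # $7.25
]

def in_recession(d, windows):
    """True if date d falls inside any recession window."""
    return any(start <= d <= end for start, end in windows)

for hike in federal_hikes:
    label = "recession" if in_recession(hike, recessions) else "expansion/other"
    print(hike.isoformat(), label)
```

Even this toy sample shows the check is mechanical once the two date series are in hand; the substantive work is assembling the full history of hikes (federal and state) and deciding whether to date them by announcement or by effective date.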
These five ways that studies can fail to detect minimum wage-induced employment effects are far from exhaustive, nor does this essay plumb any of the five exhaustively. Further, how these reasons play out, how they interact, and which are relevant to which studies must be evaluated on a case-by-case basis. But we’d all do well to keep these five points in mind the next time we see a study touting that, when it comes to the minimum wage, there is in fact a free lunch.