Vienna Woods Law & Economics

Blog focused on issues in law, economics, and public policy.

“What Does King v. Burwell Have to Do with the Antitrust Rule of Reason? A Lot” by Theodore A. Gebhard — July 15, 2015

The first Justice John Marshall Harlan is probably best remembered as the sole dissenter in Plessy v. Ferguson, the notorious 1896 Supreme Court decision holding that Louisiana’s policy of “separate but equal” accommodations for blacks and whites satisfied the equal protection requirements of the 14th Amendment.  Harlan, a strict textualist, saw no color distinctions in the plain language of the 14th Amendment or anywhere else in what he described as a color-blind Constitution.  Harlan’s textualism did not end there, however.  It was also evident fifteen years later in one of the most famous and consequential antitrust cases in Supreme Court history, Standard Oil Co. of New Jersey v. U.S.  The majority opinion in that case, in important respects, anticipated Chief Justice John Roberts’ reasoning in King v. Burwell.  As in King, the majority opinion in Standard Oil was written by the Chief Justice, Edward White in that instance, and in both cases the majority reasoned that Congress did not actually mean what the clear and plain words of the statute at issue said.  Although concurring in the narrow holding of liability, Justice Harlan in Standard Oil, like Justice Antonin Scalia in his King dissent, forcefully criticized what he believed to be the majority’s rank display of judicial legislation and its usurpation of Congress’s function to fix statutes that may otherwise have harsh policy consequences.  Indeed, Standard Oil demonstrates that both Chief Justice Roberts and Justice Scalia had ample precedent in Supreme Court history.

The Standard Oil case concerned whether John D. Rockefeller’s corporate empire violated the Sherman Antitrust Act, enacted 21 years earlier in 1890, which prohibited monopolization, attempted monopolization, and “every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce ….”  Harlan believed that, under the facts of the case, liability could be found within the plain language of the statute.  The majority likewise found that Standard Oil violated the Act, but did so by construing the Act in a way that the Court had previously rejected on several occasions.  Specifically, Chief Justice White used the opportunity to read into the Sherman Act the common law principle of “reasonableness,” such that only “unreasonable” restraints of trade would be illegal.  That is, the Court rewrote the statute to say, in effect, that “every unreasonable contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade” is prohibited.  In so doing, the Court, by judicial fiat, discarded the plain language of the statute and injected the so-called “rule of reason” into antitrust doctrine.

Notwithstanding that the Court had previously found otherwise, Chief Justice White concluded that the 51st Congress must have had in mind the common law focus on unreasonable restraints of trade when it drafted the Sherman Act.  Otherwise, he believed, the operation of the statute could produce discordant results.  That Congress did not make this qualification explicit was of no matter; White’s clairvoyance was sufficient to discern Congress’s true intent and correct the textual oversight.  Harlan, however, saw this as unwarranted judicial activism and a harmful appropriation of the “constitutional functions of the legislative branches of government.”  Echoing today’s concerns about judicial overreach, Harlan worried that this constitutionally unauthorized usurpation of legislative power “may well cause some alarm for the integrity of our institutions.”

Moreover, in his long and detailed concurrence, Harlan forcefully argued that it is not the Court’s function to change the plain meaning of statutes, whether or not that meaning reflects actual legislative intent.  That is, a judge’s role is to look only at the four corners of a statute, and no more.  It is up to the legislature to fix a statute, if necessary, not the judge.  This principle was even more applicable in the case at hand.  Here, Harlan believed, the plain language of the Act did in fact reflect the actual legislative intent.  Thus, the majority’s contrary position was even more egregious.  That is, the majority simply substituted its preferred reading of the statute 21 years after the fact, notwithstanding contrary contemporaneous evidence.

In this regard, Harlan pointed out that in 1890 Congress was especially alarmed about the growing concentration of wealth, capital, and economic power in the hands of a few, all arising from the rapid industrialization that the United States had experienced over the previous decades.  Congress, in keeping with the spirit of the age, saw this changing economic climate as requiring bold new law focused on checking the power of the trusts.  Specifically, the new climate “must be met firmly and by such statutory regulations as would adequately protect the people against oppression and wrong.”  For this reason, the 1890 Congress, in drafting the Sherman Act, intentionally abandoned common law principles as too weak to deal with the economic circumstances of the day.  In addition, Congress wrote criminal sanctions and third-party rights of action into the Act, neither of which was a part of the common law.

Finally, Harlan pointedly explained that the Court itself had previously held, in the well-known 1897 decision U.S. v. Trans-Missouri Freight Assn., and had reaffirmed in several later decisions, that the Act’s prohibitions were not limited to unreasonable restraints of trade, as that term is understood at common law.  The first of these decisions, moreover, came far closer in time to the Act’s passage than the case at hand, and if Congress thought the Court wrong, it had well over a decade to correct the Court on this issue but failed to do so, indicating that it approved of the Court’s construction.  Harlan thus saw White’s reversal of these holdings as no more than “an invasion by the judiciary of the constitutional domain of Congress — an attempt by interpretation to soften or modify what some regard as a harsh public policy.”

The activism of Chief Justice White in Standard Oil and nearly all of Justice Harlan’s concerns re-emerge in King v. Burwell.  In King, the principal issue was whether, under the Patient Protection and Affordable Care Act (ACA), an “Exchange” (an insurance marketplace) established by the federal government through the Secretary of Health and Human Services should be treated as an “Exchange” established by a state.  The question is important because, under the ACA, an insurance exchange must be established in each state.  The statute provides, however, that if a state fails to establish such an exchange, the Secretary of HHS will step in and establish a federally run exchange in that state.  The statute further provides that premium assistance will be available to lower-income individuals to subsidize their purchase of health insurance when such insurance is purchased through an “Exchange established by the State.”  The Act defines “State” to mean each of the 50 states plus the District of Columbia.  The plain language of the statute therefore precludes premium assistance to individuals purchasing health insurance on a federally run exchange.

Notwithstanding the plain language of the Act, however, Chief Justice Roberts, writing for the majority, held that premium assistance is available irrespective of whether the relevant exchange was established by a state or the Secretary.  In effect, the Chief Justice rewrote the pertinent clause, “Exchange established by the State,” to read instead “Exchange established by the State or the Federal Government.”

Much like Chief Justice White more than a century earlier, Chief Justice Roberts reasoned that Congress could not have actually meant what the plain text of the Act said and that, if this drafting oversight were not corrected by the Court, serious and discordant consequences would result.  Also like his predecessor, Chief Justice Roberts came to this conclusion despite evidence suggesting that the plain language is exactly what Congress intended.  According to the now-public remarks of Jonathan Gruber, a chief architect of the Act, by limiting premium assistance to purchases made on state-established exchanges, Congress intended to create an incentive for each state to establish an exchange.  Even so, the Chief Justice discerned otherwise (perhaps because in hindsight the incentive did not work and, as a result, the consequences for the operation of the Act would be severe) and held that Congress must have intended “Exchange” for purposes of premium assistance to encompass both state-established and federally established exchanges.  That is, just as Chief Justice White found, 21 years after its passage, that the plain text of the Sherman Act did not capture the full intended meaning of the statute, Chief Justice Roberts similarly found the plain text of the ACA to fall short of its true meaning, notwithstanding that Congress has done nothing to change the text since its 2010 enactment.

The parallel between the two cases does not stop with the majority opinions.  In King, Justice Scalia, a textualist like Justice Harlan, echoed the same concerns that Harlan had raised in Standard Oil.  In his dissent, Scalia states, for example, that “[t]he Court’s decision reflects the philosophy that judges should endure whatever interpretive distortions it takes in order to correct a supposed flaw in the statutory machinery.  That philosophy ignores the American people’s decision to give Congress all legislative Powers enumerated in the Constitution. … We lack the prerogative to repair laws that do not work out in practice. … ‘If Congress enacted into law something different from what it intended, then it should amend the statute to conform to its intent.’”  That is, it is not up to the Court to usurp the legislative functions of Congress in order to fix the unintended consequences of a statute.  Scalia goes on: “‘this Court has no roving license to disregard clear language simply on the view that Congress must have intended something broader.’”  Scalia concludes by suggesting that, to the detriment of “honest jurisprudence,” the majority “is prepared to do whatever it takes to uphold and assist [the laws it favors].”

So we can only conclude that the controversy surrounding Chief Justice Roberts’ reasoning in King is anything but new.  Textualists have been sounding alarms about judicial overreach for decades.  Whether or not one believes that Chief Justice Roberts assumed a proper judicial role, it is undeniable that he had precedent for doing what he did.  Similarly, it is undeniable that Justice Scalia’s concerns are well grounded in Court history.  One other certainty is that, just as the judicial creation of the “rule of reason” has had a significant impact on the administration of antitrust law over the last 100-plus years, Chief Justice Roberts’ rewrite of the ACA will have a lasting impact, not only on the U.S. health insurance system, but also in sustaining the self-authorized prerogatives of judges.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Forecasting Trends in Highly Complex Systems: A Case for Humility” by Theodore A. Gebhard — June 20, 2015

One can readily cite examples of gross inaccuracies in government macroeconomic forecasting.  Some of these inaccurate forecasts have been central to policy decisions that ultimately produced unintended and undesirable results.  (See, e.g., Professor Edward Lazear, “Government Forecasters Might as Well Use a Ouija Board,” Wall Street Journal, Oct. 16, 2014.)  Likewise, the accuracy of forecasts of long-term global warming is coming under increasing scrutiny, at least among some climate scientists.  Second looks suggest that climate science is anything but “settled.”  (See, e.g., Dr. Steven Koonin, “Climate Science and Interpreting Very Complex Systems,” Wall Street Journal, Sept. 20, 2014.)  Indeed, there are legitimate concerns about the ability to forecast reliably either the direction of the macro-economy or long-term climate change.  These concerns, in turn, argue for government officials, political leaders, and others to exercise a degree of humility when calling for urgent government action in either of these areas.  Without such humility, there is a risk of locking into long-term policy commitments that may in the end prove substantially more costly than beneficial.

A common factor in macroeconomic and long-term climate forecasting is that both deal with highly complex systems.  When modeling such systems, attempts to capture all of the important variables believed to have a significant explanatory effect on the forecast prove incredibly difficult, if not a fool’s errand.  Not only are there many known candidates, there are likely many more unknown ones.  In addition, specifying functional forms that accurately represent the relationships among the explanatory variables is similarly elusive.  Simple approximations based on theory are probably the best that can be achieved.  Failure to solve these problems — omitted explanatory variables and incorrect functional forms — will seriously confound the statistical reliability of the estimated coefficients and, hence, of any forecasts made from those estimates.
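To make the omitted-variable problem concrete, here is a minimal simulation sketch (the data-generating process is entirely hypothetical and chosen only for illustration): when a relevant variable that is correlated with an included one is left out of a regression, the coefficient on the included variable absorbs part of the omitted variable’s effect, and every forecast built on that coefficient inherits the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# True (hypothetical) data-generating process: y depends on x1 AND x2, which are correlated.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)   # x2 moves with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# Correctly specified model: regress y on a constant, x1, and x2.
X_full = np.column_stack([np.ones(n), x1, x2])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Misspecified model: omit x2 entirely.
X_short = np.column_stack([np.ones(n), x1])
beta_short, *_ = np.linalg.lstsq(X_short, y, rcond=None)

print("Coefficient on x1 with x2 included:", round(beta_full[1], 2))   # ~2.0 (true value)
print("Coefficient on x1 with x2 omitted: ", round(beta_short[1], 2))  # ~4.4 (badly biased)
```

Forecasts built from the misspecified model will be systematically off whenever the omitted factor behaves differently than it did in the historical sample, which is the same difficulty, in miniature, that plagues far larger macroeconomic and climate models.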

Inherent in macroeconomic forecasting is an additional complication.  Unlike models of the physical world, where the data are insentient and the relationships among variables are fixed in nature, computer models of the economy depend on data samples generated by motivated human action and on relationships among variables that are anything but fixed over time.  Human beings have preferences, consumption patterns, and levels of risk acceptance that regularly change.  This constant change makes coefficient estimates derived from historical data an unsound basis for forecasting the future.  Moreover, there is little hope for improved reliability over time so long as human beings remain sentient actors.

By contrast, models of the physical world, such as climate science models, rely on unmotivated data and on relationships among variables that are fixed in nature.  Unlike human beings, carbon dioxide molecules do not have changing tastes or preferences.  At least in principle, as climate science advances over time with better data quality, better identification of explanatory variables, and better understanding of the relationships among those variables, the forecasting accuracy of climate change models should improve.  Notwithstanding this promise, however, long-term climate forecasts remain problematic at present.  (See the Koonin article cited above.)

Given the difficulty of modeling highly complex systems, it would seem that recent statements by some of our political, economic, and even religious leaders are overwrought.  President Obama and Pope Francis, for example, have claimed that climate change is among mankind’s most pressing problems.  They arrived at their views by way of forecasts that predict significant climate change owing to human activity, and each has urged that developed nations take dramatic steps to alter their energy mixes.  Similarly, the world’s central bankers, such as those at the Federal Reserve, the European Central Bank, and the Bank of Japan, along with the International Monetary Fund, regularly claim that their historically aggressive policies in the aftermath of the 2008 financial crisis are well grounded in what their elaborate computer models generate and, hence, are necessary and proper for the times.  On this view, any attempt to curtail the independence of these institutions to pursue those policies should be resisted, notwithstanding that the final outcome of these historic and unprecedented policies is not yet known.

It is simply not possible, however, to have much confidence in any of these claims.  The macroeconomic and climate systems are too complex to be captured well in any computer model, and forecasts derived from such models are therefore highly suspect.  At the least, a prudent level of humility and a considerable degree of caution are in order among government planners, certainly before they pursue policies that risk irreversible, unintended, and potentially very costly consequences.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Is Economics a Science?” by Theodore A. Gebhard — May 15, 2015

The great 20th Century philosopher of science, Karl Popper, famously defined a scientific question as one that can be framed as a falsifiable hypothesis.  Economics cannot satisfy that criterion.  No matter the mathematical rigor and internal logic of any theoretical proposition in economics, empirically testing it by means of econometrics necessarily requires that the regression equations contain stochastic elements to account for the complexity that characterizes the real world economy.  Specifically, the stochastic component accounts for all of the innumerable unknown and unmeasurable factors that cannot be precisely identified but nonetheless influence the economic variable being studied or forecasted.
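A stylized way to see the point (this is a generic regression specification, not one taken from any particular study) is that every econometric test of a theory ultimately estimates something of the form

```latex
y_t = \beta_0 + \beta_1 x_{1,t} + \cdots + \beta_k x_{k,t} + \varepsilon_t
```

where the stochastic term \(\varepsilon_t\) stands in for everything the model does not capture.  Because a failed prediction can always be attributed to an unusually large realization of \(\varepsilon_t\) rather than to the hypothesized coefficients, the theory itself is never put decisively at risk.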

What this means is that economists need never concede that a theory is wrong when their predictions fail to materialize.  There is always the ready excuse that the erroneous predictions were the fault of “noise” in the data, i.e., the stochastic component, not the theory itself.  It is hardly surprising then that economic theories almost never die and, even if they lie dormant for a while, find new life whenever proponents see opportunities to resurrect their pet views.  Since the 2008 financial crisis, even Nobel Prize winners can be seen dueling over macroeconomic policy while drawing on theories long thought to be buried.

A further consequence of the inability to falsify an economic theory is that economics orthodoxy is likely to survive indefinitely irrespective of its inability to generate reliable predictions on a consistent basis.  As Thomas Kuhn, another notable 20th Century philosopher of science, observed, scientific orthodoxy periodically undergoes revolutionary change whenever a critical mass of real world phenomena can no longer be explained by that orthodoxy.  The old orthodoxy must give way, and a new orthodoxy emerges.  Physics, for example, has undergone several such periodic revolutions.

It is clear, however, that, because economists never have to admit error in their pet theories, economics is not subject to a Kuhnian revolution.  Although there is much reason to believe that such a revolution is well overdue in economics, graduate student training in core neoclassical theory persists and is likely to persist for the foreseeable future, notwithstanding its failure to predict the events of 2008.  There are simply too few internal pressures to change the established paradigm.

All of this would be of little consequence if mainstream economists simply talked to one another or published their econometric estimates in academic journals merely as a means to obtain promotion and tenure.  The problem, however, is that the cachet of a Nobel Prize in Economic Science and the illusion of scientific method permit practitioners to market their pet ideological values as the product of science and to insert themselves into policy-making as expert advisors.  Significantly in this regard, econometric modeling is no longer chiefly confined to generating macroeconomic forecasts.  Increasingly, econometric forecasts are used as inputs into microeconomic policy-making affecting specific markets or groups, and they are even introduced as evidence in courtrooms where individual litigants have much at stake.  However, most policy-makers — let alone judges, lawyers, and other lay consumers of those forecasts — are not well equipped to evaluate their reliability or to assign appropriate weight to them.  This situation creates the risk that value-laden theories and unreliable econometric predictions play a larger role in microeconomic policy-making, just as in macroeconomic policy-making, than their purported “scientific” foundation can justify.

To be sure, economic theories can be immensely valuable in focusing one’s thinking about the economic world.  As Friedrich Hayek taught us, however, although good economics can say a lot about tendencies among economic variables (an important achievement), economics cannot do much more.  As such, the naive pursuit of precision by means of econometric modeling — especially as applied to public policy — is fraught with danger and can only deepen well-deserved public skepticism about economists and economics.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Economics and Transparency in Antitrust Policy” By Theodore A. Gebhard — April 28, 2015

A significant turning point in antitrust thinking began in the mid-1970s with the formal integration of microeconomic analysis into both antitrust policy and antitrust litigation.  At that time, the Department of Justice and the Federal Trade Commission dramatically expanded their in-house economics staffs and ever since have increasingly relied on those staffs for strategic advice as well as technical analysis in policy and litigation.

For the most part, this integration of economics into antitrust thinking has been highly positive.  It has been instrumental in ensuring that the antitrust laws focus on what they are intended to do — promote consumer welfare.  Forty years later, however, economics has gone beyond its role as the intellectual undergirding of antitrust policy.  Today, no litigant tries an antitrust case without one or more economists as expert witnesses, and economic analysis has become the dominant evidence in antitrust enforcement.  In this regard, the pendulum may have swung too far.

Prior to the mid-1970s, economists, though creating a sizable academic literature, were largely absent in setting antitrust policy and rarely participated in litigation.  The result was that, for much of the history of antitrust, the enforcement agencies and the courts often condemned business practices that intuitively looked bad, but without much further consideration.  Good economics, however, is sometimes counter-intuitive.  Many of these older decisions did more to protect competitors from legitimate competition than protect competition itself.  Integrating sound economic thinking into enforcement policy was thus an important corrective.

Economic thinking has been most impactful on antitrust policy in two areas: unilateral business conduct and horizontal mergers.  Older antitrust thinking often conflated protecting competitors with protecting competition.  The most devastating critique of this confusion came from the so-called “Chicago School” of economics, and manifested itself to the larger antitrust legal community through Robert Bork’s seminal 1978 book, The Antitrust Paradox.  It is hard to exaggerate the impact that this book had on enforcement policy and on the courts.  Today, it is rare that unilateral conduct is challenged successfully, the courts having placed a de facto presumption of legality on such conduct and a heavy burden on plaintiffs to show otherwise.

Horizontal merger policy likewise had a checkered history prior to the mid-1970s.  Basically, any merger that increased market concentration, even if only slightly, was considered bad, and the courts by and large rubber-stamped this view.  This rigid thinking began to change, however, with the expanded roles of the economists at the DOJ and FTC.  The economists pointed out that, although a change in market concentration is important, it is not dispositive in assessing whether a merger is anticompetitive.  Other factors must be considered, such as the incentives for outside firms to divert existing capacity into the relevant market, the degree to which there are barriers to the entry of new capacity, the potential for the merger to create efficiencies, and the ability of post-merger firms to coordinate pricing.  Consideration of each of these economic factors was eventually formalized in merger guidelines issued in 1982 by the Reagan Administration’s DOJ.  Ten years later, the FTC joined the DOJ in issuing revised guidelines, which were amended to reach mergers that might be anticompetitive regardless of firms’ ability to coordinate prices.
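For readers unfamiliar with how concentration is measured, here is a minimal illustrative sketch of the Herfindahl-Hirschman Index (HHI), the concentration measure the guidelines rely on; the firms and market shares below are hypothetical:

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: the sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_percent)

# Hypothetical five-firm market, shares in percent.
pre_merger = [30, 25, 20, 15, 10]

# Suppose the two smallest firms (15% and 10%) merge into a single 25% firm.
post_merger = [30, 25, 20, 25]

print("Pre-merger HHI: ", hhi(pre_merger))                      # 2250
print("Post-merger HHI:", hhi(post_merger))                     # 2550
print("Change in HHI:  ", hhi(post_merger) - hhi(pre_merger))   # 300 (= 2 x 15 x 10)
```

The guidelines use the post-merger HHI level together with the change in HHI as an initial screen for how closely a transaction will be scrutinized, which reflects the economists’ point that concentration is informative but not dispositive.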

Each of these developments has led to far more sensible antitrust policy over the past four decades.  Today, however, economic thinking no longer merely provides broad policy guidance but, in the form of highly sophisticated statistical modeling, increasingly serves as the principal evidence in specific cases.  Here, policy-making may now be exceeding the limits of economic science.  Friedrich Hayek famously described the difference between science and scientism, noting the pretense of believing that economics can generate the kind of precision that the natural sciences can.  Yet the enforcement agencies are approaching a point where their econometric analysis of market data may, in certain instances, be considered sufficiently “scientific” to determine enforcement decisions without needing to know much else about the businesses or products at issue.

Much of this is driven by advancements in cheap computing coincident with the widespread adoption of electronic data storage by businesses.  These developments have yielded a rich set of market data that can be readily obtained by subpoena, coupled with the ability to use that data as input into econometric estimation that can be done cheaply on a desktop.  So, for example, if it is possible to estimate the competitive effects of a merger directly, why bother with more traditional (and tedious) methodology that includes defining relevant markets and calculating concentration indexes?  In principle, even traditional documentary and testimonial evidence might be dispensed with, being unnecessary when there is hard “scientific” evidence available.

This view is worrisome for two reasons.  The first is the Hayekian concern, already noted, about the pretense of economic precision.  Any good statistician will tell you that econometrics is as much art as science.  Apart from this concern, however, an equally important worry is that antitrust enforcement policy is becoming too arcane in its attempt to be ever more economically sophisticated.  This means that it is increasingly difficult for businesspersons and their counsel to evaluate whether some specific conduct or transaction could be challenged, thus making even lawful business strategies riskier.  A basic principle of the rule of law is that the law must be understandable to those subject to it.

Regrettably, the Obama Administration has exacerbated this problem.  For example, some officials have indicated sympathy for so-called “Post-Chicago Economics,” whose proponents have set out highly stylized models that purport to show the possibility of anticompetitive harm from conduct that antitrust law has not yet reached.  Administration officials also rescinded a 2008 Bush Administration report that attempted to lay out clearer guidelines regarding when unilateral conduct might be challenged.  Although these developments have so far been mostly talk and little action in the way of bringing novel cases, even mere talk increases legal uncertainty.

The Administration’s merger policy actions are more concrete.  The DOJ and FTC issued new guidelines in 2010 that, in an effort to be even more comprehensive, greatly expanded the set of variables that can be considered in merger analysis.  In some instances, these variables will resist reliable measurement and relative weighting.  The consequence is that the new guidelines largely defeat the purpose of having guidelines — helping firms assess whether a prospective merger will be challenged.  Thus, firms considering a merger must often do so in the face of substantially more legal uncertainty and must expend substantial funds on attorneys and consultants to navigate the maze of the guidelines.  These factors likely deter at least some procompetitive mergers, thus forgoing potential social gains.

Antitrust policy certainly must remain grounded in good economics, and economic analysis is certainly probative evidence in individual cases.  But it is nonetheless appropriate to keep in mind that no legal regime can achieve perfection, and the marginal benefits from efforts to obtain ever greater economic sophistication must be weighed against the marginal costs of doing so.  When litigation devolves into simply a battle of expert witnesses whose testimony is based on arcane modeling that neither judges nor business litigants grasp well, something is wrong.

It is time to consider a modest return to simpler and more transparent enforcement policy that relies less on black-box economics that pretends to be more scientific than it really is.  To be sure, clearer enforcement rules would not be without enforcement risk; some anticompetitive transactions could escape challenge.  But procompetitive transactions that might otherwise have been deterred would proceed, a social gain.  Moreover, substantial social cost savings can be expected when business decisions are made under greater legal clarity, when antitrust enforcement is administered more efficiently, and when litigation costs are substantially lower.  The goal of antitrust policy should not be perfection, but to maintain an acceptable level of workable competition within markets while minimizing the costs of doing so.  Simpler, clearer rules are the route to this end.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.