Vienna Woods Law & Economics

Blog focused on issues in law, economics, and public policy.

Kelo v. City of New London after Ten Years — Book Review by Joel C. Mandelman — November 6, 2015

The Grasping Hand: Kelo v. City of New London & the Limits of Eminent Domain by Ilya Somin, University of Chicago Press – 2015

In his book, The Grasping Hand: Kelo v. City of New London and the Limits of Eminent Domain, Ilya Somin, a law professor at George Mason University Law School, traces the history of eminent domain and 5th Amendment takings from colonial times to the 2005 Supreme Court decision in Kelo v. City of New London.  This is not a minor intellectual topic. The protection of the rights of private property owners is a cornerstone of any free country and of any free market economy.  In the United States, the 5th Amendment to the Constitution provides that private property shall not “be taken for public use, without just compensation.”  Notwithstanding these limiting words, the U.S. Supreme Court has sanctioned the taking by government (primarily local and state governments) of private property and its transfer to other private parties whose use of it may be of little benefit to the general public.

The most recent example of this disfigurement of the Constitution came in New London, Connecticut, where, in 2005, the Court sanctioned the seizure of several private homes so that New London could transfer the property to the Pfizer pharmaceutical corporation in order to build a new corporate headquarters, a luxury hotel, and a corporate conference center.  The principal authority for this seizure was the Supreme Court’s 1954 ruling in Berman v. Parker, in which the Court held that a seizure need not be for a traditional public use, e.g., a school, a hospital, or a highway, but need only serve a “public purpose,” such as producing greater tax revenues by ending “urban blight.”  (Whether the seized property was, in fact, blighted is left to the judicially unreviewed discretion of the government agency seizing the property.)  The saddest irony of the Kelo case was that, after all of the litigation and the loss of scores of homes, Pfizer changed its mind, and the planned corporate park and hotel were never built.

The vast majority of seizures — since the Berman decision more than 80 percent of all private property condemnations — have been for such non-public uses as building sports arenas, corporate office space, and general urban renewal, and not for traditional public uses such as building schools or highways.  The Supreme Court’s rationalization has always been that since “everyone benefits” from a neighborhood being generally improved, there is an inherent “public purpose” to the seizure. Thus, even if the seizure was not for a traditional public use, it is sanctioned by the 5th Amendment.  That this is not what the plain language of the 5th Amendment states seems to have escaped the Court’s notice.

As Somin discusses, courts have been reluctant to second-guess what a state legislature or a city council thinks is “blight” requiring government action to eliminate it.  This judicial reluctance is a moral and constitutional cop-out. There has been no reluctance on the part of federal or state judges to second-guess the legitimacy of search warrants, confessions, the provision of a fair trial (i.e., due process of law), the scope of government limitations on freedom of speech or the press, the scope of the right to bear arms, or the meaning of the 14th Amendment’s equal protection and due process clauses; so why the reluctance to judge the validity of takings under the 5th Amendment? No explanation is offered, and judges have never tried to explain their sudden restraint in this particular area of constitutional law.

Somin traces the history of 5th Amendment takings dating back to post-colonial times, when many takings were made to build privately owned dams, which produced a generalized public benefit in the form of water power, or privately owned turnpikes.  Many of the more controversial public benefit takings did not start until the New Deal, or thereafter, when urban renewal became all the rage. The trend became the seizure of private property – which typically had housing already on it – in order to build massive public housing projects. That many of those public housing projects later became worse slums than the smaller, privately owned “slums” that they “renewed” is discussed at some length in the book. Sadly, no government official or agency has ever been held accountable for these widespread, often well publicized, failures. As Somin discusses, much of the impetus for these renewal projects came not from elected legislators or elected executive branch officials but rather from real estate and construction industry developers who were also major sources of campaign funds.

Perhaps the most outrageous example of crony-capitalism seizures of private property was the infamous Poletown case, in which the City of Detroit seized thousands of private homes so that General Motors could build an automobile factory.  The Michigan Supreme Court sanctioned this theft on the grounds that the promised creation of 5,000 new jobs was a public use justifying the seizure of thousands of private homes and businesses. The cruelest irony was that fewer than half of the promised jobs were ever created and, several years later, a newly constituted Michigan Supreme Court partially reversed its Poletown decision in County of Wayne v. Hathcock, allowing takings in “blighted” areas but prohibiting takings for economic development.

The public reaction to the U.S. Supreme Court’s 2005 Kelo decision was swift and harsh. It was widely denounced as a threat to all private property rights. After all, if Kelo were followed to its logical conclusion, there would be nothing to stop the government from seizing 100 private homes so that a private developer could construct a high-rise apartment building that would pay more in local property taxes.

The problem is that many of the legislative attempts to prevent another Kelo-like case from ever happening again were half-hearted and possibly made in bad faith.  Although state laws were changed to bar 5th Amendment takings for economic development, a gaping – and likely intentional – loophole was left in those statutes. There was no prohibition of takings to end undefined, judicially unreviewable, allegations of economic or social “blight.”  Almost any taking barred on economic development grounds could still be “justified” on the grounds that the affected property was “blighted.”

Somin carefully traces and analyzes both the history of the takings clause and the development of the “public purpose versus public use” expansion of its scope, to the point where no private property is truly safe from any government bureaucrat or private developer with enough political clout.  Many of the state laws passed in response to the Kelo decision need to be substantially strengthened, and federal law needs to be rewritten to bar illegitimate seizures of private property for other private, or quasi-private, uses that primarily benefit the political party controlling the local government and its crony-capitalist allies.

This is an important book that, because of its arcane constitutional subject, has not received the widespread publicity it deserves. The Kelo decision and the weak responses to it by many state legislatures have left many citizens with a false sense of security.  Eternal vigilance truly is the price of liberty, and greater vigilance and effort are required to prevent Kelo from rearing its ugly, if somewhat shrunken, head again.

Joel C. Mandelman is an attorney practicing in Arlington, Virginia.  He has filed amicus briefs with the U.S. Supreme Court on behalf of Abigail Fisher in her challenge to the University of Texas’ racially preferential admissions policies and on behalf of the State of Michigan in defense of its state constitutional amendment barring all racial preferences in college admissions, government hiring and government contracting.  See the Contributors page for more about Mr. Mandelman.  Email him at joelcm1947@gmail.com.

(Correction, Nov. 8:  An earlier version of this post misstated the holding in Hathcock as fully reversing the Poletown case. Ed.)

(Correction, Nov. 20:  In an email to this site’s administrator, Professor Somin points out that “Pfizer was not going to be the new owner or developer of the condemned property.  As explained in the book, they lobbied for the project and hoped to benefit from it, but were not going to own or develop the land themselves.” Ed.)

“What Does King v. Burwell Have to Do with the Antitrust Rule of Reason? A Lot” by Theodore A. Gebhard — July 15, 2015

The first Justice John Marshall Harlan is probably best remembered for being the sole dissenter in Plessy v. Ferguson, the notorious 1896 Supreme Court decision that held that Louisiana’s policy of “separate but equal” accommodations for blacks and whites satisfied the equal protection requirements of the 14th Amendment.  Harlan, a strict textualist, saw no color distinctions in the plain language of the 14th Amendment or anywhere else in what he described as a color-blind Constitution.  Harlan’s textualism did not end there, however.  It was also evident fifteen years later in one of the most famous and impactful antitrust cases in Supreme Court history, Standard Oil Co. of New Jersey v. U.S.  The majority opinion in that case, in important respects, mirrored Chief Justice John Roberts’ reasoning in King v. Burwell.  As in King, the majority opinion in Standard Oil was written by the Chief Justice, Edward White in this instance, and in both cases the majority reasoned that Congress did not actually mean what the clear and plain words of the statute at issue said.  Although concurring in the narrow holding of liability, Justice Harlan in Standard Oil, like Justice Antonin Scalia in his King dissent, forcefully criticized what he believed to be the majority’s rank display of judicial legislation and usurpation of Congress’s function to fix statutes that may otherwise have harsh policy consequences.  Indeed, Standard Oil demonstrates that both Chief Justice Roberts and Justice Scalia had ample precedent in Supreme Court history.

The Standard Oil case was about whether John D. Rockefeller’s corporate empire violated the Sherman Antitrust Act, enacted 21 years earlier, in 1890, which prohibited monopolization, attempted monopolization, and “every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce.”  Harlan believed that under the facts of the case, liability could be found within the plain language of the statute.  The majority likewise found that Standard Oil violated the Act, but did so by dint of construing the Act in a way that the Court had previously rejected on several occasions.  Specifically, Chief Justice White used the opportunity to read into the Sherman Act the common law principle of “reasonableness,” such that only “unreasonable” restraints of trade would be illegal.  That is, the Court rewrote the statute to say, in effect, “every unreasonable contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade” is prohibited. In so doing, the Court, by judicial fiat, discarded the plain language of the statute and injected the so-called “rule of reason” into antitrust doctrine.

Notwithstanding that the Court had previously found otherwise, Chief Justice White found that the 51st Congress must have had in mind the common law focus on unreasonable restraints of trade when it drafted the Sherman Act.  Otherwise, he believed, the operation of the statute could give discordant results.  The fact that the Congress did not make this qualification explicit was of no matter; White’s clairvoyance was sufficient to discern and correct the textual oversight and Congress’s true intent.  Harlan, however, saw this as unwarranted judicial activism and a harmful appropriation of the “constitutional functions of the legislative branches of government.”  Echoing today’s concerns about judicial overreach, Harlan worried that this constitutionally unauthorized usurpation of legislative power “may well cause some alarm for the integrity of our institutions.”

Moreover, in his long and detailed concurrence, Harlan forcefully argued that it is not the Court’s function to change the plain meaning of statutes, whether or not that meaning reflects actual legislative intent.  That is, a judge’s role is to look only at the four corners of a statute, and no more.  It is up to the legislature to fix a statute, if necessary, not the judge.  This principle was even more applicable in the case at hand.  Here, Harlan believed, the plain language of the Act did in fact reflect the actual legislative intent.  Thus, the majority’s contrary position was even more egregious.  That is, the majority simply substituted its preferred reading of the statute 21 years after the fact, notwithstanding contrary contemporaneous evidence.

In this regard, Harlan pointed out that in 1890 the Congress was especially alarmed about growing concentrations of wealth, aggregation of capital among a few individuals, and economic power, all arising from the rapid industrialization that the United States had been experiencing over the previous decades.  Congress, in keeping with the spirit of the age, saw this changing economic climate as requiring bold new law focused on checking the power of trusts.  Specifically, the new climate “must be met firmly and by such statutory regulations as would adequately protect the people against oppression and wrong.”  For this reason, the 1890 Congress, in drafting the Sherman Act, intentionally abandoned common law principles as being too weak to deal with the economic circumstances of the day.  In addition, the Congress wrote criminal sanctions and third-party rights of action into the Act, none of which were a part of the common law.

Finally, Harlan pointedly explained that the Court had itself previously found, in a well-known 1897 decision, U.S. v. Trans-Missouri Freight Assn., and reaffirmed in several later decisions, that the Act’s prohibitions were not limited only to unreasonable restraints of trade, as that term is understood in the common law.  The first of these decisions, moreover, was decided far closer to the time of the Act’s passage than the current case, and if the Congress thought the Court to be wrong, it had some 14 years to correct the Court on this issue but failed to do so, indicating that it approved of the Court’s construction.  Harlan thus saw White’s reversal of these holdings as no more than “an invasion by the judiciary of the constitutional domain of Congress — an attempt by interpretation to soften or modify what some regard as a harsh public policy.”

The activism of Chief Justice White in Standard Oil and nearly all of Justice Harlan’s concerns re-emerge in King v. Burwell.  In King, the principal issue was whether, under the Patient Protection and Affordable Care Act, an “Exchange” (an insurance marketplace) established by the federal government through the Secretary of Health and Human Services should be treated as an “Exchange” established by a state.  The question is important because under the Act, an insurance exchange must be established in each state.  The statute provides, however, that if a state fails to establish such an exchange, the Secretary of H.H.S. will step in and establish a federally run exchange in that state.  The statute further provides that premium assistance will be available to lower-income individuals to subsidize their purchase of health insurance when such insurance is purchased through an “Exchange established by the State.”  The Act defines “State” to mean each of the 50 United States plus the District of Columbia.  The plain language of the statute therefore precludes premium assistance to individuals purchasing health insurance on a federally run exchange.

Notwithstanding the plain language of the Act, however, Chief Justice Roberts, writing for the majority, held that premium assistance is available irrespective of whether the relevant exchange was established by a state or the Secretary.  In effect, the Chief Justice rewrote the pertinent clause, “Exchange established by the State,” to read instead “Exchange established by the State or the Federal Government.”

Much like Chief Justice White more than a century earlier, Chief Justice Roberts reasoned that the Congress could not have actually meant what the plain text of the Act said and that, if this drafting oversight were not corrected by the Court, serious discordant consequences would result.  Also like his predecessor, Chief Justice Roberts came to this conclusion despite evidence suggesting that the plain language is exactly what Congress intended.  According to the now-public remarks of Jonathan Gruber, a chief architect of the Act, by limiting premium assistance only to purchases made on state-established exchanges, the Congress intended to create an incentive for each state to establish an exchange.  Even so, the Chief Justice discerned otherwise (perhaps because in hindsight the incentive did not work and, as a result, the consequences for the operation of the Act would be severe) and held that Congress must have intended “Exchange” for purposes of premium assistance to encompass both state-established and federally established exchanges.  That is, just as Chief Justice White found, 21 years after its passage, that the plain text of the Sherman Act did not contain the full intended meaning of the words in the Act, Chief Justice Roberts similarly found the plain text of the Affordable Care Act to fall short of its true meaning, notwithstanding that Congress did nothing to change the text after its 2010 enactment.

The parallel between the two cases does not stop with the majority opinions.  In King, Justice Scalia, a textualist like Justice Harlan, echoed the same concerns that Harlan had in Standard Oil.  In his dissent, Scalia states, for example, that “[t]he Court’s decision reflects the philosophy that judges should endure whatever interpretive distortions it takes in order to correct a supposed flaw in the statutory machinery.  That philosophy ignores the American people’s decision to give Congress all legislative Powers enumerated in the Constitution. … We lack the prerogative to repair laws that do not work out in practice. … If Congress enacted into law something different from what it intended, then it should amend the statute to conform to its intent.”  That is, it is not up to the Court to usurp the legislative functions of Congress in order to fix the unintended consequences of a statute.  Scalia goes on: “this Court has no roving license to disregard clear language simply on the view that Congress must have intended something broader.”  Scalia concludes by suggesting that, to the detriment of “honest jurisprudence,” the majority “is prepared to do whatever it takes to uphold and assist [the laws it favors].”

We can only conclude, then, that the controversy surrounding Chief Justice Roberts’s reasoning in King is anything but new.  Textualists have been sounding alarms about judicial overreach for decades.  Whether or not one believes that Chief Justice Roberts assumed a proper judicial role, it is undeniable that he had precedent for doing what he did.  Similarly, it is undeniable that Justice Scalia’s concerns are well grounded in Court history.  One other certainty is that just as the judicial creation of the “rule of reason” has had a significant impact on the administration of antitrust law over the last 100-plus years, Chief Justice Roberts’s rewrite of the Affordable Care Act will have a lasting impact, not only on the U.S. health insurance system, but in sustaining the self-authorized prerogatives of judges.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Forecasting Trends in Highly Complex Systems: A Case for Humility” by Theodore A. Gebhard — June 20, 2015

One can readily cite examples of gross inaccuracies in government macroeconomic forecasting.  Some of these inaccurate forecasts have been critical to policy formation that ultimately produced unintended and undesirable results.  (See, e.g., Professor Edward Lazear, “Government Forecasters Might as Well Use a Ouija Board,” Wall Street Journal, Oct. 16, 2014.)  Likewise, the accuracy of forecasts of long-term global warming is coming under increasing scrutiny, at least among some climate scientists.  Second looks suggest that climate science is anything but “settled.”  (See, e.g., Dr. Steven Koonin, “Climate Science and Interpreting Very Complex Systems,” Wall Street Journal, Sept. 20, 2014.)  Indeed, there are legitimate concerns about the ability to reliably forecast directions in the macro-economy or long-term climate change.  These concerns, in turn, argue for government officials, political leaders, and others to exercise a degree of humility when calling for urgent government action in either of these areas.  Without such humility, there is the risk of jumping into long-term policy commitments that may in the end prove substantially more costly than beneficial.

A common factor in macroeconomic and long-term climate forecasting is that both deal with highly complex systems.  When modeling such systems, attempts to capture all of the important variables believed to have a significant explanatory effect on the forecast prove incredibly difficult, if not entirely a fool’s errand.  Not only are there many known candidates, there are likely many more unknown ones.  In addition, specifying functional forms that accurately represent the relationships among the explanatory variables is similarly elusive.  Simple approximations based on theory are probably the best that can be achieved.  Failure to solve these problems (omitted explanatory variables and incorrect functional forms) will seriously confound the statistical reliability of the estimated coefficients and, hence, of any forecasts made from those estimates.
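The omitted-variable problem described above can be made concrete with a small simulation (a hypothetical toy model, not any forecaster's actual code): when a relevant, correlated variable is left out of a regression, the coefficient on the included variable silently absorbs its effect, and no amount of additional data corrects the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two correlated explanatory variables: x2 is partly driven by x1.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)

# True data-generating process: y depends on both variables plus noise.
y = 2.0 * x1 + 1.5 * x2 + rng.normal(size=n)

# Correctly specified regression: recovers the true coefficients.
X_full = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

# Mis-specified regression omits x2: the x1 coefficient absorbs x2's
# effect and converges to 2.0 + 1.5 * 0.8 = 3.2, not the true 2.0.
X_short = np.column_stack([np.ones(n), x1])
beta_short = np.linalg.lstsq(X_short, y, rcond=None)[0]

print("full model  b1:", round(beta_full[1], 2))   # close to 2.0
print("short model b1:", round(beta_short[1], 2))  # close to 3.2
```

The bias is structural, not statistical: a larger sample only makes the wrong estimate more precise, and any forecast built on the short model inherits the distortion.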

Inherent in macroeconomic forecasting is an additional complication.  Unlike models of the physical world, where the data are insentient and the relationships among variables are fixed in nature, computer models of the economy depend on data samples generated by motivated human action and on relationships among variables that are anything but fixed over time.  Human beings have preferences, consumption patterns, and levels of risk acceptance that regularly change.  This constant change makes coefficient estimates derived from historical data a highly unsound basis on which to forecast the future.  Moreover, there is little hope for improved reliability over time so long as human beings remain sentient actors.
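This instability argument can likewise be sketched in a toy simulation (the numbers are invented for illustration): a behavioral coefficient that shifts between the estimation period and the forecast period leaves a model that fit history beautifully forecasting badly.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hist, n_fut = 500, 500
x = rng.normal(size=n_hist + n_fut)

# A "behavioral" response coefficient that drifts: agents responded to x
# with strength 2.0 historically but respond with strength 0.5 afterward.
beta = np.where(np.arange(n_hist + n_fut) < n_hist, 2.0, 0.5)
y = beta * x + rng.normal(scale=0.5, size=n_hist + n_fut)

# Estimate the coefficient on historical data only.
b_hat = np.polyfit(x[:n_hist], y[:n_hist], 1)[0]

# Compare in-sample fit with out-of-sample forecast accuracy.
rmse_hist = np.sqrt(np.mean((y[:n_hist] - b_hat * x[:n_hist]) ** 2))
rmse_fut = np.sqrt(np.mean((y[n_hist:] - b_hat * x[n_hist:]) ** 2))

print("in-sample RMSE:", round(rmse_hist, 2))  # near the noise level
print("forecast RMSE: ", round(rmse_fut, 2))   # far larger: behavior changed
```

Nothing in the historical sample signals the coming shift; the in-sample diagnostics look excellent right up until the forecast fails.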

By contrast, models of the physical world, such as climate science models, rely on unmotivated data and relationships among variables that are fixed in nature.  Unlike human beings, carbon dioxide molecules do not have changing tastes or preferences.  At least in principle, as climate science advances over time with better data quality, better identification of explanatory variables, and better understanding of the relationships among those variables, the forecasting accuracy of climate change models should improve.   Notwithstanding this promise, however, long-term climate forecasts remain problematic at present.  (See Koonin article linked above.)

Given the difficulty of modeling highly complex systems, it would seem that recent statements by some of our political, economic, and even religious leaders are overwrought.  President Obama and Pope Francis, for example, have claimed that climate change is among mankind’s most pressing problems.  They arrived at their views by dint of forecasts that predict significant climate change owing to human activity.  Each has urged that developed nations take dramatic steps to alter their energy mixes.  Similarly, the world’s central bankers, such as those at the Federal Reserve, the European Central Bank, the Bank of Japan, and the International Monetary Fund, regularly claim that their historically aggressive policies in the aftermath of the 2008 financial crisis are well grounded in what their elaborate computer models generate and, hence, are necessary and proper for the times.  Therefore, they argue, any attempts to limit the independence of these institutions to pursue those policies should be resisted, notwithstanding that the final outcome of these historic and unprecedented policies is yet unknown.

It is simply not possible, however, to have much confidence in any of these claims.   The macroeconomic and climate systems are too complex to be captured well in any computer model, and forecasts derived from such models therefore are highly suspect.  At the least, a prudent level of humility and a considerable degree of caution are in order among government planners, certainly before they pursue policies that risk irreversible unintended, and potentially very costly, consequences.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Is Economics a Science?” by Theodore A. Gebhard — May 15, 2015

The great 20th Century philosopher of science, Karl Popper, famously defined a scientific question as one that can be framed as a falsifiable hypothesis.  Economics cannot satisfy that criterion.  No matter the mathematical rigor and internal logic of any theoretical proposition in economics, empirically testing it by means of econometrics necessarily requires that the regression equations contain stochastic elements to account for the complexity that characterizes the real world economy.  Specifically, the stochastic component accounts for all of the innumerable unknown and unmeasurable factors that cannot be precisely identified but nonetheless influence the economic variable being studied or forecasted.

What this means is that economists need never concede that a theory is wrong when their predictions fail to materialize.  There is always the ready excuse that the erroneous predictions were the fault of “noise” in the data, i.e., the stochastic component, not the theory itself.  It is hardly surprising then that economic theories almost never die and, even if they lie dormant for a while, find new life whenever proponents see opportunities to resurrect their pet views.  Since the 2008 financial crisis, even Nobel Prize winners can be seen dueling over macroeconomic policy while drawing on theories long thought to be buried.
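The escape hatch can be seen in a toy regression (purely illustrative, with invented numbers): when the stochastic term is large relative to a theory's misspecification, the wrong theory's residuals are statistically indistinguishable from pure noise, so the data never force a concession.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.uniform(-1, 1, n)
eps = rng.normal(scale=2.0, size=n)  # the stochastic "catch-all" component

# The true relationship is quadratic, but weakly so relative to the noise.
y = 0.3 * x**2 + eps

# An economist's (wrong) linear theory: y = a + b*x.
b, a = np.polyfit(x, y, 1)
resid = y - (a + b * x)

# The wrong theory's residual variance is essentially the noise variance,
# so "the data are noisy" remains a fully plausible defense of the theory.
ratio = np.var(resid) / np.var(eps)
print("residual var / noise var:", round(ratio, 2))
```

The misspecification contributes almost nothing to the residuals here, so no standard goodness-of-fit check condemns the linear theory; the noise term shields it from falsification.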

A further consequence of the inability to falsify an economic theory is that economics orthodoxy is likely to survive indefinitely irrespective of its inability to generate reliable predictions on a consistent basis.  As Thomas Kuhn, another notable 20th Century philosopher of science, observed, scientific orthodoxy periodically undergoes revolutionary change whenever a critical mass of real world phenomena can no longer be explained by that orthodoxy.  The old orthodoxy must give way, and a new orthodoxy emerges.  Physics, for example, has undergone several such periodic revolutions.

It is clear, however, that, because economists never have to admit error in their pet theories, economics is not subject to a Kuhnian revolution.  Although there is much reason to believe that such a revolution is well overdue in economics, graduate student training in core neoclassical theory persists and is likely to persist for the foreseeable future, notwithstanding its failure to predict the events of 2008.  There are simply too few internal pressures to change the established paradigm.

All of this would be of little consequence if mainstream economists simply talked to one another or published their econometric estimates in academic journals merely as a means to obtain promotion and tenure.  The problem, however, is that the cachet of a Nobel Prize in Economic Science and the illusion of scientific method permit practitioners to market their pet ideological values as the product of science and to insert themselves into policy-making as expert advisors.  Significantly in this regard, econometric modeling is no longer chiefly confined to generating macroeconomic forecasts.  Increasingly, econometric forecasts are used as inputs into microeconomic policy-making affecting specific markets or groups, and are even introduced as evidence in courtrooms where individual litigants have much at stake.  However, most policy-makers — let alone judges, lawyers, and other lay consumers of those forecasts — are not well equipped to evaluate their reliability or to assign appropriate weight to them.  This situation creates the risk that value-laden theories and unreliable econometric predictions play a larger role in microeconomic policy-making, just as in macroeconomic policy-making, than their purported “scientific” foundation can justify.

To be sure, economic theories can be immensely valuable in focusing one’s thinking about the economic world.  As Friedrich Hayek taught us, however, although good economics can say a lot about tendencies among economic variables (an important achievement), economics cannot do much more.  As such, the naive pursuit of precision by means of econometric modeling —  especially as applied to public policy — is fraught with danger and can only deepen well-deserved public skepticism about economists and economics.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Economics and Transparency in Antitrust Policy” By Theodore A. Gebhard — April 28, 2015

A significant turning point in antitrust thinking began in the mid-1970s with the formal integration of microeconomic analysis into both antitrust policy and antitrust litigation.  At that time, the Department of Justice and the Federal Trade Commission dramatically expanded their in-house economics staffs and ever since have increasingly relied on those staffs for strategic advice as well as technical analysis in policy and litigation.

For the most part, this integration of economics into antitrust thinking has been highly positive.  It has been instrumental in ensuring that the antitrust laws focus on what they are intended to do – promote consumer welfare.  Forty years later, however, economics has gone beyond its role as the intellectual undergirding of antitrust policy.  Today, no litigant tries an antitrust case without one or more economists as expert witnesses, and economic analysis has become the dominant evidence in antitrust enforcement.  In this regard, the pendulum may have swung too far.

Prior to the mid-1970s, economists, though creating a sizable academic literature, were largely absent in setting antitrust policy and rarely participated in litigation.  The result was that, for much of the history of antitrust, the enforcement agencies and the courts often condemned business practices that intuitively looked bad, but without much further consideration.  Good economics, however, is sometimes counter-intuitive.  Many of these older decisions did more to protect competitors from legitimate competition than protect competition itself.  Integrating sound economic thinking into enforcement policy was thus an important corrective.

Economic thinking has had its greatest impact on antitrust policy in two areas: unilateral business conduct and horizontal mergers.  Older antitrust thinking often conflated protecting competitors with protecting competition.  The most devastating critique of this confusion came from the so-called “Chicago School” of economics and reached the larger antitrust legal community through Robert Bork’s seminal 1978 book, The Antitrust Paradox.  It is hard to exaggerate the impact that this book had on enforcement policy and on the courts.  Today, unilateral conduct is rarely challenged successfully, the courts having placed a de facto presumption of legality on such conduct and a heavy burden on plaintiffs to show otherwise.

Horizontal merger policy likewise had a checkered history prior to the mid-1970s.  Basically, any merger that increased market concentration, even if only slightly, was considered bad.  The courts by and large rubber-stamped this view.  This rigid thinking began to change, however, with the expanded roles of the economists at the DOJ and FTC.  The economists pointed out that, although change in market concentration is important, it is not dispositive in assessing whether a merger is anticompetitive.  Other factors must be considered, such as the incentives for outside firms to divert existing capacity into the relevant market, the degree to which there are barriers to the entry of new capacity, the potential for the merger to create efficiencies, and the ability of post-merger firms to coordinate pricing.  Consideration of each of these economic factors was eventually formalized in merger guidelines issued in 1982 by the Reagan Administration’s DOJ.  The FTC joined in these guidelines ten years later, when they were also amended to reach mergers that might be anticompetitive regardless of whether post-merger firms could coordinate prices.
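The concentration screens at the heart of those guidelines rest on simple arithmetic: the Herfindahl-Hirschman Index (HHI) sums the squared market shares of every firm in the relevant market, and the current (2010) guidelines treat markets above 2,500 as highly concentrated.  A minimal sketch, using entirely hypothetical market shares:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared market shares.
    With shares in percentage points, the index runs from near 0
    (atomistic market) to 10,000 (pure monopoly)."""
    return sum(s * s for s in shares)

# Hypothetical five-firm market (shares in percent).
pre_merger = [30, 25, 20, 15, 10]

# Suppose the two smallest firms merge.
post_merger = [30, 25, 20, 25]

pre = hhi(pre_merger)     # 2250
post = hhi(post_merger)   # 2550
delta = post - pre        # 300

print(pre, post, delta)
```

On these invented numbers, the merger pushes the market over the 2,500 "highly concentrated" line with an increase of 300 points, which is exactly the kind of screen that, before the economists' reforms, would have ended the inquiry rather than begun it.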

Each of these developments led to far more sensible antitrust policy over the past four decades.  Today, however, economic thinking no longer merely provides broad policy guidance but, in the form of highly sophisticated statistical modeling, increasingly serves as the principal evidence in specific cases.  Here, policy-making may now be exceeding the limits of economic science.  Friedrich Hayek famously described the difference between science and scientism, noting the pretentiousness of believing that economics can generate the kind of precision that the natural sciences can.  Yet, the enforcement agencies are approaching a point where their econometric analysis of market data in certain instances may be considered sufficiently “scientific” to determine enforcement decisions without needing to know much else about the businesses or products at issue.

Much of this is driven by advancements in cheap computing coincident with the widespread adoption of electronic data storage by businesses.  These developments have yielded a rich set of market data that can be readily obtained by subpoena, coupled with the ability to use that data as input into econometric estimation that can be done cheaply on a desktop.  So, for example, if it is possible to estimate the competitive effects of a merger directly, why bother with more traditional (and tedious) methodology that includes defining relevant markets and calculating concentration indexes?  In principle, even traditional documentary and testimonial evidence might be dispensed with, being unnecessary when there is hard “scientific” evidence available.

This view is worrisome for two reasons:  The first is the already stated Hayekian concern about the pretense of economic precision.  Any good statistician will tell you that econometrics is as much art as science.  Apart from this concern, however, an equally important worry is that antitrust enforcement policy is becoming too arcane in its attempt to be ever more economically sophisticated.  This means that it is increasingly difficult for businesspersons and their counsel to evaluate whether some specific conduct or transaction could be challenged, thus making even lawful business strategies riskier.  A basic principle of the rule of law is that the law must be understandable to those subject to it.
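The "art" in econometrics is easy to demonstrate: the same data can yield very different answers depending on which variables the analyst chooses to include.  The stylized simulation below, using only the Python standard library and entirely invented numbers, shows the classic omitted-variable problem: regressing an outcome on one variable while ignoring a correlated second variable produces an estimate well above the true effect of 2.0.

```python
import random
import statistics as st

random.seed(0)
n = 10_000

# Hypothetical data-generating process: y depends on x AND on an
# omitted variable z that is itself correlated with x.
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.8 * zi + random.gauss(0, 1) for zi in z]
y = [2.0 * xi + 3.0 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def slope(xs, ys):
    """Univariate OLS slope: cov(x, y) / var(x)."""
    mx, my = st.fmean(xs), st.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / len(xs)
    return cov / st.pvariance(xs)

# Misspecified model: z is omitted, so its influence is wrongly
# attributed to x and the estimate is biased upward.
naive = slope(x, y)

# Controlling for z (Frisch-Waugh: regress z out of both x and y,
# then regress the residuals on each other).
gx, gy = slope(z, x), slope(z, y)
x_resid = [a - gx * c for a, c in zip(x, z)]
y_resid = [b - gy * c for b, c in zip(y, z)]
controlled = slope(x_resid, y_resid)

print(f"true effect: 2.0  naive: {naive:.2f}  controlled: {controlled:.2f}")
```

Both regressions are "scientific" in form; only the analyst's judgment about what belongs in the model separates the badly biased estimate from the accurate one.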

Regrettably, the Obama Administration has exacerbated this problem.  For example, some officials have indicated sympathy for so-called “Post-Chicago Economics,” whose proponents have set out highly stylized models that purport to show the possibility of anticompetitive harm from conduct that antitrust law has not previously condemned.  Administration officials also rescinded a 2008 Bush Administration report that attempted to lay out clearer guidelines regarding when unilateral conduct might be challenged.  Although these developments have been mostly talk rather than action in the way of bringing novel cases, even mere talk increases legal uncertainty.

The Administration’s merger policy actions are more concrete.  The DOJ and FTC issued new guidelines in 2010 that, in an effort to be even more comprehensive, greatly expanded the number of variables that can be considered in merger analysis.  In some instances, these variables resist reliable measurement and relative weighting.  The consequence is that the new guidelines largely defeat the purpose of having guidelines: helping firms assess whether a prospective merger will be challenged.  Thus, firms considering a merger must often proceed in the face of substantially more legal uncertainty and must expend substantial funds on attorneys and consultants to navigate the maze of the guidelines.  These factors likely deter at least some procompetitive mergers, forgoing potential social gains.

Antitrust policy certainly must remain grounded in good economics, and economic analysis is certainly probative evidence in individual cases.  But it is nonetheless appropriate to keep in mind that no legal regime can achieve perfection, and the marginal benefits from efforts to obtain ever greater economic sophistication must be weighed against the marginal costs of doing so.  When litigation devolves into simply a battle of expert witnesses whose testimony is based on arcane modeling that neither judges nor business litigants grasp well, something is wrong.

It is time to consider a modest return to simpler and more transparent enforcement policy that relies less on black box economics that pretends to be more scientific than it really is.  To be sure, clearer enforcement rules would not be without enforcement risk.  Some anticompetitive transactions could escape challenge.  But procompetitive transactions that would otherwise have been deterred will now go forward, a social gain.  Moreover, substantial social cost savings can be expected when business decisions are made under greater legal clarity, when antitrust enforcement is administered more efficiently, and when litigation costs are substantially lower.  The goal of antitrust policy should not be perfection, but to maintain an acceptable level of workable competition within markets while minimizing the costs of doing so.  Simpler, clearer rules are the route to this end.

“The FTC’s Supreme Court Victory in N.C. Dental: A Rare Win for Both Libertarians and Regulators” by Theodore A. Gebhard

[Originally posted at Gayton Law Blog, April 1, 2015]

The Federal Trade Commission’s (FTC) recent Supreme Court victory in the North Carolina State Board of Dental Examiners (NCSB or Board) case brought together in common cause both economic libertarians and federal antitrust regulators, groups often at odds over important philosophical and policy principles.  The FTC’s win gave both groups much reason to celebrate.

The question before the Court was whether unilateral anticompetitive actions of the NCSB, a state-created body, were immune from antitrust law under the “state action” doctrine. The state action doctrine arises from Parker v. Brown, a 1943 Supreme Court decision that sought to reconcile the Sherman Antitrust Act with the constitutional principle of federalism. Federalism is the idea that the U.S. Constitution recognizes both national and state government sovereignty by giving certain limited powers to the national government but reserving other powers to the individual states.

Because the Constitution is the highest law and therefore always trumps statutes, the Court carved out immunity from the Sherman Act for anticompetitive actions of states acting in their sovereign capacity, which includes regulating private actors in a way that restricts competition. In 1980 the Court extended this carve out to include the anticompetitive actions of non-sovereign bodies upon a showing that the actions were the result of clearly articulated state policy and were actively supervised by the state. (See, Cal. Liquor Dealers v. Midcal Aluminum, Inc.) The active supervision requirement ensures that the anticompetitive consequences are only those that the state has deliberately chosen to tolerate in exchange for other public policy goals.

The NCSB was established by the North Carolina Dental Act to be “the agency of the State for the regulation of the practice of dentistry.”  In that capacity, the NCSB has authority to administer the licensing of dentists and to file suit to enjoin the unlawful practice of dentistry.  Starting in 2006, the NCSB began to send strongly worded cease and desist letters to non-dentist providers of teeth whitening services.  The number of such providers had grown in North Carolina, as in other states, as the popularity of these services increased over a period of years.  Often the non-dentist providers are simply individual entrepreneurs operating out of kiosks in shopping malls and similar venues.  Licensed dentists also provide teeth whitening services, but typically at substantially higher fees.

Significantly, the N.C. Dental Act is silent with respect to whether teeth whitening constitutes the practice of dentistry.  Nonetheless, the NCSB determined that it does, without a hearing or public comment and without independent confirmation by any other state official.  In so doing, the Board found that the non-dentists were unlawfully practicing dentistry.  Instead of obtaining a judicial order to enjoin the non-dentists as prescribed by statute, however, the NCSB sent out cease and desist letters containing strong language, including a warning that the non-dentist teeth whiteners were engaging in a criminal act.  The letters effectively stopped the provision of teeth whitening services by non-dentists.

In 2010 the FTC sued the Board on antitrust grounds.  In response, the NCSB asserted that it was entitled to immunity under the state action doctrine.  The FTC rejected that claim and, in an administrative hearing, ruled that the cease and desist letters constituted unlawful concerted action to exclude non-dentist teeth whiteners from the North Carolina market for such services.  The FTC further found that this exclusion resulted in actual anticompetitive effects in the form of less consumer choice and higher prices.  The Commission then ordered the NCSB to stop issuing cease and desist letters to non-dentist providers of teeth whitening services without first obtaining a judicial order.

Key to the FTC’s antitrust finding was that, under the N.C. Dental Act, the majority of NCSB members must be practicing dentists elected to the Board by the community of N.C. licensed dentists.  Moreover, throughout the relevant period, most, if not all, of the dentist members of the NCSB performed teeth whitening in their respective practices.  In addition, the Board’s actions came after it received several complaints from licensed dentists about the competition from non-dentist teeth whiteners and the lower fees that these providers charged.  Only a few dentists suggested that teeth whitening by non-dentists might be harmful to customers.  The FTC found the validity of such public health claims tenuous.

The NCSB appealed the FTC’s rejection of its state action defense. The appeal reached the Supreme Court in 2014, and in an opinion handed down last February, the Court held that, under the record facts, the NCSB does not have antitrust immunity. In reaching this conclusion, the Court found that, although the NCSB is a creature of the state and could properly be labeled a state agency, it is nonetheless a non-sovereign body and thus subject to the active supervision requirement for antitrust immunity to obtain. This requirement was not satisfied. (Not at issue was whether the state had a clearly articulated policy to regulate the practice of dentistry. All parties stipulated to this factor.)

The Court’s finding that the NCSB is a non-sovereign body is the key to the decision, and rightly focuses on substance over form. In particular, the Court focused on the fact that the NCSB is majority-controlled by active market participants and that its decisions in this case were unsupervised by any state government officials. Given these circumstances, the Court found there to be a high risk that Board decisions were and are influenced by self-interest instead of public welfare. When this risk is present, it trumps any formal label given by a state to a regulatory body. The Court specifically held that a “state board on which a controlling number of decision makers are active market participants in the occupation the board regulates must satisfy [the] active supervision requirement in order to invoke state action antitrust immunity.”

The practical result of this holding is that the FTC’s finding of illegal anticompetitive conduct stands. This outcome will no doubt yield important benefits to North Carolina citizens. Teeth whitening entrepreneurs can seek to re-enter the market, and consumers of those services will enjoy lower fees resulting from the increased competition. These will be tangible, observable benefits.

Critically, however, the Court’s holding also has important legal and policy implications beyond North Carolina.  States will have to re-evaluate their regulatory boards in light of the fact that giving boards composed of members whose incomes depend on licensing decisions unsupervised control over who may compete is unlikely to produce good outcomes.  Going forward, states must take greater care not only in establishing such boards, but also in overseeing their decisions.  Decisions made behind the mere facade of a state-created agency will be insufficient for a board to obtain state action immunity.

Additionally, the Court’s holding recognizes that license requirements that do not rest on firm evidence of a risk to public health absent licensure not only protect incumbents from healthy competition but also unnecessarily infringe on basic economic liberty and the right to earn a living.  As such, the holding implicitly elevates economic liberty to a position as prominent as the antitrust concern.  In so doing, the holding is an important victory for economic libertarians, just as it is for antitrust enforcers.  It is a rare example of an instance when groups with economic philosophies that often diverge can come together in common celebration.  A great win for both.

Theodore A. Gebhard advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department antitrust economist, Federal Trade Commission attorney, private practitioner, and economics professor.  Mr. Gebhard holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  Facts or circumstances described in the article may have changed by the time of posting. You can contact the author via email at theodore.gebhard@aol.com.

“Amazon: Bully or Not?” by Theodore A. Gebhard

[Originally posted at Gayton Law Blog, August 26, 2014]

Several articles in the business press during recent months have reported on a dispute between the book publisher Hachette and the book distributor Amazon.  The dispute centers on the pricing of e-books.  Amazon wants a larger slice of the profits on e-book sales, and to obtain it, Amazon wants Hachette to lower its wholesale prices.  Hachette, which publishes James Patterson among other best-selling authors, is resisting.  In turn, Amazon has removed Hachette titles from its pre-order list.  That list is important to publishers because pre-order sales count toward the initial sales figures for a new title, better enabling the title to achieve best-seller status and the marketing boost that status brings.

In the reporting on this dispute, Amazon’s tactics have been described, among other pejoratives, as “bullying” and “strong-arming.”  Hachette after all is a relatively small publisher, and Amazon is the world’s largest book seller.  The European press has gone even further.  The Financial Times, for example, asks whether Amazon might be “using its dominance in one market – ereaders – to boost its dominance in another – ebooks.”

Before jumping on the bandwagon condemning Amazon, however, some understanding of the relevant facts and law is in order.  To begin with, why are Hachette and Amazon negotiating an agreement at this time?  The answer is that Hachette, along with Apple and four other publishers (Simon & Schuster, Macmillan, Penguin, and HarperCollins), was accused by the Justice Department in 2012 of conspiring to raise e-book prices in violation of Section 1 of the Sherman Antitrust Act, which outlaws anticompetitive agreements.  According to Justice Department documents, the alleged unlawful conspiracy consisted of a collective plan to force Amazon to raise its $9.99 price point for trade e-books.

As set out in Justice Department documents, Apple, in conjunction with its launch of the iPad in January 2010, sought simultaneously to enter the e-book retailing business but was concerned about its ability to compete with Amazon on price.  In a plan largely designed by Apple but implemented by the five publishers, pressure was put on Amazon to agree to new distribution agreements under which the publishers, rather than Amazon, would set the retail prices of trade e-books.  Rather than buy the books at wholesale from the publishers, Amazon would act as a selling agent and simply receive a fixed commission on each sale.  Later, a sixth publisher, Random House, adopted this business model for many of its e-books as well, with the result that nearly 50% of all trade e-books were distributed and sold under this agency system.  The almost immediate consequence was a significant increase in the price of trade e-books.
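The economic difference between the two distribution models is easiest to see with numbers.  In the sketch below, the prices and the 50% wholesale discount are purely illustrative assumptions, not case facts, though a 30% agency commission figured in the actual Apple litigation:

```python
# Illustrative figures only; the 50% wholesale discount and all prices
# are hypothetical, while the 30% agency commission mirrors the rate
# at issue in the Apple case.
list_price = 14.99      # publisher's digital list price
amazon_retail = 9.99    # Amazon's discounted retail price point

# Wholesale model: the retailer buys at a discount off list, then sets
# any retail price it likes and keeps (or eats) the difference.
wholesale_cost = 0.50 * list_price                   # 7.495
wholesale_margin = amazon_retail - wholesale_cost    # about 2.50; can go negative

# Agency model: the publisher sets the retail price; the retailer merely
# collects a fixed commission, so discounting below that price is impossible.
agency_price = 12.99                                 # publisher-set retail price
commission_rate = 0.30
amazon_commission = commission_rate * agency_price   # about 3.90
publisher_take = agency_price - amazon_commission    # about 9.09

print(f"consumer price: wholesale ${amazon_retail:.2f} vs agency ${agency_price:.2f}")
```

The last line makes the Justice Department's concern concrete: under the agency model the consumer pays whatever the publisher sets, and no retailer can undercut it.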

Hachette and each of the other four publishers subsequently reached a settlement with the Justice Department.  Apple went to trial and lost; in an opinion filed in July 2013, the court found Apple’s conduct illegal.  Although not admitting guilt, the settling defendants, including Hachette, agreed to abandon any control of retail e-book pricing with any retailer and to arrive independently at new distribution agreements with Amazon.  This, then, is the basis for the current negotiations between Hachette and Amazon.  As is apparent, Hachette put itself into this position by dint of its prior concerted actions with its competitors.

If anything, to date Amazon’s presence in the retail e-book market, by resisting efforts on the part of publishers to raise prices, has been a boon to consumers. Implicit in the Justice Department’s litigation is a recognition of this fact. Further, as Apple, Google, and others, such as the publishers themselves, enter and expand into retail e-book sales, price competition will only increase.  The key is the ability to compete on price and not have price uniformly set by upstream anticompetitive agreements.

As for Amazon’s alleged dominance in e-book readers (the Kindle) and the alleged potential to leverage that dominance anticompetitively into e-book sales, few, if any, real-world facts suggest that this presents a serious antitrust concern at this time, at least under U.S. law.  Section 2 of the Sherman Act reaches only conduct that is both something other than “competition on the merits” and results in actual monopolization or a dangerous probability of that result.  Simply having a large market share, even a very large one, is not a violation of Section 2.  Section 2 is concerned only with obtaining or maintaining a monopoly by anticompetitive means, i.e., means that harm consumer welfare.

Although the Kindle device links to Amazon’s Kindle Store, it is possible to download many e-books obtained elsewhere.  Some may first have to be converted to the Kindle format (Mobi), but Calibre, a free conversion program, will do this in a few minutes or less.  Thus, there generally are no significant obstacles to using the Kindle to read e-books obtained elsewhere.  Furthermore, tablets such as the iPad are e-readers as well; indeed, some might argue that they are superior to the Kindle insofar as they can display content in color, such as illustrations or exhibits in art books.  Hence, if anything, we are likely to see considerable erosion of Amazon’s share of e-reader sales in the future, eliminating any potential to use those sales as leverage in the retail e-book market.

Notwithstanding the above, given the global marketplace, it is useful to note that there are differences between U.S. antitrust law and competition law in other jurisdictions.  Firms operating globally must be cognizant of these differences.  As noted, the Sherman Act does not outlaw conduct by dominant firms unless that conduct is detrimental to competition itself, i.e., it results in harm to consumer welfare.  Indeed, the U.S. Supreme Court has often stated that it is axiomatic that the U.S. antitrust laws are intended for the “protection of competition, not competitors.”  By contrast, competition law within the European Union is more suspicious of firms with dominant market shares, and may be more protective of competitors and suppliers.  Conduct and practices that may not be unlawful under U.S. law may be unlawful under EU law.  Not only Amazon, but any global enterprise, should take care to be informed about and in compliance with all relevant law.
