Vienna Woods Law & Economics

Blog focused on issues in law, economics, and public policy.

Europe’s Latest Antitrust Fine Treads on U.S. Economy, Sovereignty – By Asheesh Agarwal — October 14, 2025


[Note: This post originally appeared at DC Journal on Oct. 13, 2025.]

The European Union is taking upon itself the authority to restructure the U.S. economy. In its latest move, the EU imposed a staggering $3.45 billion fine on Google for alleged antitrust violations and signaled that “only the divestment” of Google’s components would resolve the matter. Meanwhile, France fined Google 325 million euros over privacy issues. These moves raise concerns about Europe’s motives, its opacity, its disregard for international comity, and its broader agenda to target successful American companies.

For the United States, the moves raise the prospect that foreign regulators, motivated by protectionist impulses and budgetary constraints, may assume the authority to restructure innovative American companies.  As President Trump wrote in a message that should resonate across the Atlantic, “We cannot let this happen to brilliant and unprecedented American ingenuity.”

Europe’s latest fines are part of a pattern in which it deliberately uses exorbitant penalties to transfer wealth from American companies, workers and shareholders to European coffers. As outlined in an extensive study, the Europeans afford themselves “significant fining authority” and “maximum discretion” to impose billions of dollars in fines on American companies based on ambiguous statutes.  To date, the “fines against American companies have been orders of magnitude larger than those imposed on domestic firms.”

As Trump pointed out, Google has paid “$13 Billion Dollars in false claims … How crazy is that?”

Indeed, the fines’ size and opacity call into question whether U.S. firms in Europe can receive even a semblance of due process. Despite issuing a multibillion-dollar fine, the EU provided almost no clarity on its calculation, saying only that it “considered various elements,” such as “the duration and gravity of the infringement,” Google’s European ad turnover, and past fines on Google.  

The EU, however, never explained whether the fine correlated with actual consumer harm or higher prices. Fines of this magnitude, untethered to demonstrable harm, appear to transform enforcement actions into revenue-generating schemes. The EU’s lack of transparency undermines trust and raises questions about its commitment to due process.

Even more concerning is the EU’s desire to break apart Google as a remedy. Such an unprecedented move would shatter norms of international law enforcement comity. The United States, home to Google and its parent company, Alphabet, has a robust antitrust framework. 

Emphatically, it is not the place of Europe to dictate the structure of an American company, especially when U.S. courts and regulators are actively considering similar issues. The EU’s overreach infringes on U.S. sovereignty, creating a chaotic and fragmented regulatory environment.

Notably, the EU is imposing these drastic remedies for conduct that is, at worst, ambiguous from a competitive standpoint. The conduct at the heart of the EU’s case — so-called “self-preferencing” — is a common and often pro-competitive practice among vertically integrated companies. Retailers promote their private-label products, streaming platforms highlight their original content, and countless other businesses engage in similar practices. While it is true that a federal court has found Google’s practices problematic (a decision subject to appeal), any remedies should tie directly to consumer harm and consider that similar conduct has often been found to be pro-competitive.

Beyond the facts of this case, the fine raises broader concerns for the Western alliance and the race for global technological supremacy. Excessive fines and aggressive regulatory actions risk stifling innovation and investment while benefiting Chinese companies, which often operate with significant state support and fewer regulatory constraints. 

In his AI Action Plan, Trump declared that “it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance,” a goal that “requires the Federal government to create the conditions where private-sector-led innovation can flourish.”

How can American companies innovate to their fullest potential when foreign regulators threaten them with dismemberment and exorbitant multibillion-dollar fines? At a time when the global tech race is intensifying, the EU’s actions could suppress innovation and investment.

Benjamin Franklin’s famous political cartoon, Join or Die, highlighted the importance of American unity in the face of European incursions. Today, consistent with Trump’s position, U.S. policymakers and private enterprise must stand together in defending America’s economy and sovereignty from discriminatory and excessive regulatory actions.

 

* Asheesh Agarwal is the president of Agarwal Strategies. He previously served in senior roles at the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC). He wrote this for InsideSources.com.  See more about Mr. Agarwal here, including his other posts.

FTC Alumni Response to FTC/DOJ RFI on Serial Acquisitions — June 26, 2024


Posted by

Theodore A. Gebhard*

On May 23, 2024, the Department of Justice and the Federal Trade Commission announced that they were jointly launching an inquiry into the potential competitive effects of serial acquisitions and roll-up strategies. The inquiry will assess the potential antitrust liability arising from such acquisitions. The Agencies’ public announcement defines serial acquisitions and roll-up strategies as follows:

“Serial acquisitions and roll-ups are a form of corporate consolidation where a company becomes larger — and potentially dominant — by buying several smaller firms in the same or related sectors or industries.”

The link below is to a letter written by former federal antitrust enforcers to the DOJ and the FTC responding to the Agencies’ request for comments on their inquiry. The former enforcers urge the Agencies to conduct the inquiry in a way that builds confidence in its objectivity and comprehensiveness. I am a signatory on the letter.  I encourage readers of this post to read the letter in its entirety. TAG


FTC Alumni Comments on Serial Acquisitions


[Note: The alumni comments are cross-posted at Truth on the Market.]


* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

FTC Alumni Comments on Proposed Hart-Scott-Rodino Form — September 26, 2023


Posted by

Theodore A. Gebhard

The Biden Federal Trade Commission has proposed revisions to the Hart-Scott-Rodino reporting form required of all parties proposing to merge or to acquire the assets of another company whenever certain threshold metrics are present. In response to this proposal, a number of former FTC officials, of which I am one, submitted comments to the Commission with our views on the proposed revisions.  The former officials suggest several ways by which the FTC could strengthen the evidentiary and legal foundations in support of the proposed revisions.  A link to the submission is below, and I encourage the reader of this post to read it in its entirety. TAG

FTC Alumni Comments on proposed HSR form  

[Note: The submission is also cross-posted at the International Center for Law & Economics and at Liberty & Markets.]

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

FTC Alumni Comments on the Confidentiality of the Agency’s Investigations — April 13, 2023


Posted by

Theodore A. Gebhard*

The link below is to a letter written this month by former FTC officials expressing concern about possible lapses in the Agency’s integrity and fairness in keeping business information confidential during investigations. In several instances, FTC personnel may have leaked to the media confidential information about ongoing investigations, or their own analyses of that information. The former officials, of which I am one, urge the Commission to reassure the public, and to remind all agency personnel, that the Agency’s investigations will and must remain confidential. I encourage readers of this post to read the letter in its entirety. TAG

FTC Alumni Comments on Confidentiality and Due process

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

FTC Alumni Comments on the Commission’s Proposed Non-Compete Clause Rule — March 21, 2023


Posted by

Theodore A. Gebhard*

The Federal Trade Commission has begun a rulemaking process with a stated goal of promulgating and implementing a Non-Compete Clause Rule that would prohibit employers from contractually conditioning hiring on an agreement that the employee will not render his or her services to a competing employer, should there come a time when the employee no longer works for the initial employer. In connection with the rulemaking procedure, several former FTC Officials, of which I am one, submitted comments to the Commission in which the Officials express a number of concerns with the rulemaking process and with the proposed rule’s potential impact on the FTC’s ability to fulfill its mission. A link to the submission is below, and I encourage readers of this post to read the submission in its entirety. TAG

FTC Alumni Comments on Non-Compete Proposed Rule

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Request for Investigation of the FTC’s Practice of Counting “Zombie Votes” — December 3, 2021


Posted by

Theodore A. Gebhard*

Nearly one month ago, Politico reported that the Federal Trade Commission held on to “as many as 20 votes that former Democratic Commissioner Rohit Chopra cast by email on Oct. 8—his last day at the agency—that remain active even after his departure.” The link below is to a letter written by several former FTC Officials and Antitrust Scholars to members of the House and Senate who oversee the FTC and to the Inspector General of the FTC. The former Officials and Scholars express concern that using the votes of Commissioners who have departed the agency, and concealing that practice from the public, raises serious questions about transparency and accountability. The Officials seek an investigation to determine the following: 1.) the legal basis for this practice, beyond compliance with internal voting rules; 2.) whether the practice has previously been used, when it was used, and, specifically, whether it has been used to break ties; and 3.) information relating to each of the underlying proposals, votes, and relevant motions, as well as the FTC’s rationale for concealing these specific matters from public disclosure. I am a signatory on the letter. I encourage readers of this post to read the letter in its entirety. TAG

FTC Alumni Comments on Zombie Votes

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

 
 
Antitrust Scholars and Former Federal Antitrust Enforcers’ Letter to the FTC Expressing Concerns about Altering Enforcement Principles — July 1, 2021


Posted by

Theodore A. Gebhard*

The Biden Federal Trade Commission has proposed modifying, or even possibly rescinding, its Statement of Enforcement Principles On Unfair Methods of Competition under Section 5 of the FTC Act. The link below is to a letter prepared by a number of antitrust scholars and former federal antitrust enforcers commenting on this proposal. The signatories express concern that the Commission will be considering a significant shift in enforcement policy and may go so far as to revoke the existing statement, which provides a bipartisan framework laying out widely agreed upon core principles regarding antitrust law and the Commission’s Section 5 enforcement. These principles include promoting consumer welfare and focusing enforcement on acts or practices that “must cause, or be likely to cause, harm to competition or the competitive process.” The signatories’ concern is that a rescission of the current statement could untether the Commission’s enforcement decisions from a focus on harms to consumers and the competitive process. Ashley Baker of the Alliance on Antitrust was the principal drafter of the letter. I am a signatory, and I encourage readers of this post to read the letter in its entirety. A link is below. It is also cross-posted at the Alliance on Antitrust website and on the Council for Citizens Against Government Waste website.  TAG

Coalition Comments on Rescission of Enforcement Principles

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Positive Legislative Antitrust Agenda for Congress and the New Biden Administration — March 5, 2021


Posted by

Theodore A. Gebhard*

The link below is to a letter prepared by a coalition of former federal antitrust enforcers and antitrust scholars, which was sent to the members of the Senate and House Committees that have jurisdiction over the Department of Justice’s and Federal Trade Commission’s antitrust missions. The letter sets out a Positive Legislative Agenda for the nation’s competition policy, which should also serve as guidance to the new Biden Administration antitrust enforcers. Ashley Baker of the Alliance on Antitrust was the principal drafter of the letter.  I am a signatory.  I encourage the reader of this post to read the letter in its entirety.  TAG

Antitrust-Positive-Agenda

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Supreme Court Rules in Tennessee Wine & Spirits Retailers Assn. v Thomas [previously, Blair] — June 26, 2019


Posted by

Theodore A. Gebhard*

Today the U.S. Supreme Court handed down its decision in Tennessee Wine and Spirits Retailers Association v. Thomas, Executive Director of the Tennessee Alcohol Beverage Commission et al., 588 U.S. 504-57 (June 26, 2019).  In an earlier post on this site, I discussed this case in the context of the amicus brief (Law & Economics Scholars Brief) that five amici curiae, of which I was one, submitted to the Court in this matter.  Please see the earlier post for appropriate background on the case and the amicus brief.

In an opinion written by Justice Alito, the Court found that “the predominant effect of [Tennessee’s] 2-year residency requirement [that applicants for licenses to sell alcoholic beverages at retail must satisfy] is simply to protect the Association’s members from out-of-state competition.”  588 U.S. at 543.  This finding is wholly consistent with the discussion set out in the Law & Economics Scholars Brief.

Because the predominant effect of the durational residency requirement is to discriminate against out-of-state potential competitors, the Court held that the “provision violates the Commerce Clause and is not saved by [Sec. 2 of] the Twenty-first Amendment,” as the scope of state regulatory powers under Sec. 2 does not extend to implementing such anticompetitive impediments.  Id.  “Where the predominant effect of a law is protectionism, not the protection of public health or safety, it is not shielded by Sec. 2.”  588 U.S. at 539-40.

Interestingly, Justice Gorsuch, joined by Justice Thomas, dissented, finding that because Sec. 2 of the 21st Amendment provides states with broad powers to regulate the sale of alcohol that they otherwise would not have respecting other goods and services, the explicit language of the Amendment provides sufficient scope to encompass the kind  of regulation at issue in this matter notwithstanding its effect on competition. The adopters of the 21st Amendment “left us with clear instructions that the free-trade rules this Court has devised for ‘cabbages and candlesticks’ should not be applied to alcohol.” 588 U.S. at 557. Justices Gorsuch and Thomas would accordingly find Tennessee’s durational residency requirement constitutional.

Read the entire opinion here.

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Using Stereotypes in Decision Making – By Stefan N. Hoffer — November 5, 2018


A stereotype has been defined as a “fixed general image or set of characteristics that a lot of people believe represent a particular type of person or thing.” https://www.collinsdictionary.com/english/stereotype    Or, as “something conforming to a fixed or general pattern; especially : a standardized mental picture that is held in common by members of a group and that represents an oversimplified opinion, prejudiced attitude, or uncritical judgment.” https://www.merriam-webster.com/dictionary/stereotype  The implication is that the use of stereotypes in making decisions concerning populations or individual members of these populations is improper and should be avoided. [See for example, https://www.reference.com/world-view/stereotypes-harmful-dadca8d95b0fc67c or https://www.aauw.org/2014/08/13/why-stereotypes-are-bad] Yet stereotypes have long been and continue to be used to make decisions.

Focusing primarily on a simple summary measure, the following examines whether stereotypes can be used to make rational decisions and if so, under what circumstances. The essay considers both populations and individuals, taking into account that decision making requires information and that developing information is not costless. It is assumed that the information itself is accurate.

Stereotypes are summary measures. They may be very complex and consider an array of many characteristics. Examples would include an index of several factors such as a credit score—essentially a weighted average—or even a proprietary brand name and the quality it represents. Or stereotypes can be very simple and consider only one characteristic.

A very simple, perhaps the simplest, summary measure or stereotype for any particular population is its population mean with respect to one particular characteristic. A population mean provides a measure of the central location of the distribution of population members with respect to this characteristic. The variance of this population provides a measure of how dispersed the individual members of the population are around its mean. Both measures provide significant information about the population. But distributions with large variances may provide only limited information about individual population members, creating a “fog of uncertainty.”

When making decisions about all members of a population taken as a whole, knowledge of the mean alone may be enough to make rational decisions irrespective of the variance. For example, consider auto insurance pricing where all or virtually all population members are required to carry insurance.  By setting a premium equal to the average, or expected, claim value per population member (plus administrative costs), insurance companies can provide risk sharing to all members of a population without taking into account variation among population members. This occurs because a characteristic of the mean is that the sum of all deviations from it, both plus and minus, will net to zero. [Harold W. Guthrie, Statistical Methods in Economics, Richard D. Irwin, Inc., Homewood, Illinois, 1966, p. 34.] That is, those with less than average claims exactly offset those with greater than average claims.
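A minimal numerical sketch of this point, with all figures invented for illustration: if the insurer charges every member of the pool the mean expected claim plus an administrative load, individual shortfalls and surpluses cancel in the aggregate.

```python
import random

random.seed(1)

# Hypothetical pool of 10,000 policyholders with widely varying annual claims.
claims = [random.gammavariate(2.0, 1500.0) for _ in range(10_000)]

mean_claim = sum(claims) / len(claims)
admin_load = 200.0                    # assumed per-policy administrative cost
premium = mean_claim + admin_load     # identical premium for every member

# Deviations from the mean net out, so pricing at the mean covers total claims
# even though individual claims vary widely around it.
total_deviation = sum(c - mean_claim for c in claims)
print(f"mean claim:  {mean_claim:,.0f}")
print(f"premium:     {premium:,.0f}")
print(f"sum of deviations from the mean: {total_deviation:.6f}")  # ~0, up to rounding
```

The last line illustrates the textbook property cited above: deviations above and below the mean offset exactly, so the variance does not matter for pricing the pool as a whole.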

But, what about making decisions that concern specific members of a population, not the entire population? Will relying on summary measures yield accurate decisions? Here the dispersion of the individuals around the mean, measured by the variance, comes into play. If the variance is small, a population summary measure can be expected to yield a relatively close approximation to each individual’s characteristic value. As the variance increases in size, however, the ability of a summary measure to predict the characteristic value of any particular individual will decline, as large variances have the potential to yield numerous large errors with their associated costs. Can rational decisions continue to be made in such situations?

Unlike the auto insurance example above where coverage is nearly universal and specific companies have large numbers of policy holders, most decisionmakers do not make decisions concerning most members of a population. Rather, their focus is on a subset of its members. For example, potential employers may hire a few members from a population or they may hire many members from it, but they do not hire everyone. In effect, each decisionmaker draws a random sample of a given size from the population when he or she makes new hires.

The distribution of the means of all the possible samples—subsets—of this size around the population mean (our summary measure) is known as the sampling distribution of the means. Its dispersion is measured by the population standard deviation divided by the square root of the sample size, or standard error. It will be larger when there is greater variation in the population, but it also becomes smaller as the sample size increases. As a result, the accuracy of using any particular mean summary measure with any given level of variation depends on the frequency with which decisions are made. That is, the more decisions made, the greater the sample size will be.
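A short simulation, with invented numbers, makes the shrinkage concrete: for a high-variance population, the spread of sample means around the population mean narrows roughly with the square root of the sample size, so a frequent decisionmaker drawing many members faces far less uncertainty about the group average than an infrequent one.

```python
import random
import statistics

random.seed(2)

# Hypothetical high-variance population of ability scores.
population = [random.gauss(100, 25) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

for n in (3, 30, 300):  # roughly: infrequent vs. frequent decisionmakers
    # Empirical spread of the sample mean across many repeated draws of size n.
    sample_means = [statistics.mean(random.sample(population, n)) for _ in range(2_000)]
    empirical_se = statistics.pstdev(sample_means)
    theoretical_se = pop_sd / n ** 0.5  # standard error = sigma / sqrt(n)
    print(f"n = {n:>3}: spread of sample means = {empirical_se:5.2f} "
          f"(theory predicts {theoretical_se:5.2f})")
```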

Some entities make decisions only infrequently. As a result, basing decisions on a summary measure such as a mean where there is significant variation in the population may not result in good outcomes for the infrequent decisionmaker. This situation arises because there are not enough decisions to make it likely that there will be enough good and bad outcomes to offset each other.  With higher population variances, the infrequent decisionmaker must choose among continuing to accept the risks and costs of a potentially bad outcome, abstaining from making decisions based on the summary measure altogether, or adopting a different approach. Unlike the frequent decisionmakers discussed below, infrequent decisionmakers have limited incentives to incur the costs of developing improved measures because they cannot spread those costs across enough decisions to make it worthwhile.

As an example, skill and ability levels of recent college graduates vary significantly. The summary measure of having earned a degree is only a limited predictor of quality. Infrequent potential employers have the option of using this measure and risking a poor new hire, avoiding hiring altogether, or adopting a different approach such as recruiting only from premier, top-tier schools. The selection and training processes for people from these schools can be expected to produce graduates with higher and more uniform mean abilities. But hiring recruits with this pedigree quality measure may be more costly.

The situation can be expected to be different for frequent decisionmakers. Because they make numerous decisions—draw a larger sample—they can rely on summary measures with larger population variances and expect average outcomes to be close to the population mean much of the time as bad choices are offset by good ones. (The standard error of the larger sample size is smaller than for infrequent decisionmakers.)   In the example above, recruiters can hire college graduates with the anticipation that the ability levels of their new hires as a group will very likely approximate the mean of all graduates. They do not need to be preoccupied with getting bad apples because overall group performance is predictable and stable.

Although frequent decisionmakers can expect to make decisions with relatively stable overall outcomes based on summary measures, economic incentives may encourage them to improve their decision making by developing improved measures. Not only will improved measures allow the decisionmakers to select better members of the population, the costs of doing so can be spread across many decisions. An improved measure will give any particular decisionmaker a competitive advantage, at least until competitors are able to imitate the better measure or develop their own better measures.

Returning to the college graduate example, because some graduates of non-top tier schools are of similar quality to graduates of top-tier schools, an incentive exists for potential employers to develop measures to identify these better graduates. If this can be accomplished at an aggregate cost that is less than the premium salary margin that may have to be paid to top-tier-school graduates, firms can be expected to make the investment required to develop the more precise measure. In so doing, these firms can reduce their labor costs and may enjoy a competitive advantage. As noted, frequent decisionmakers will have a greater incentive to undertake such investments because development costs of a superior measure can be spread over a large number of new hires.
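The break-even comparison in the preceding paragraph can be sketched with invented figures: the investment pays off only if the development cost of the improved screening measure is less than the salary premium avoided, summed over the hires it will cover.

```python
# All figures are hypothetical, chosen only to illustrate the break-even logic.
salary_premium_per_hire = 12_000   # assumed extra pay for a top-tier-school graduate
planned_hires = 40                 # hires over which the screening cost is spread
screening_cost = 300_000           # one-time cost of developing the improved measure

avoided_premiums = salary_premium_per_hire * planned_hires
print(f"avoided salary premiums:  {avoided_premiums:,}")
print(f"cost of improved measure: {screening_cost:,}")
print("invest in the measure" if screening_cost < avoided_premiums
      else "stick with the pedigree proxy")
```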

It should be acknowledged that making decisions based on a summary measure where there is significant population variation may not be fair to the individuals being evaluated. In the college graduate example, those individuals with desirable characteristics better than the mean will not be recognized as better while those with worse characteristics will be protected by the fog of uncertainty. To the extent that large population variances cause infrequent decisionmakers to abstain from making decisions altogether, all individuals may be potentially harmed.

If the alternative approach of hiring only from top tier schools is adopted, it may reduce the risk of poor hires but will also disadvantage higher quality candidates from second-tier schools. Should frequent decisionmakers adopt improved measures to identify better graduates from second-tier schools, these graduates likely will initially be underpaid. But their salaries can be expected to increase as other employers imitate and adopt these improved measures. Concurrently, less capable graduates of second-tier schools will no longer be shielded and can be expected to experience a relative decline in salaries.

Although infrequent decisionmakers have limited incentives to develop better measures, third parties may nonetheless do so. Especially in regard to high variance populations, the creation of improved summary measures, although too costly to be developed by individual decisionmakers, may still be undertaken by third parties if they can sell the measures to a sufficient number of consumers.

In the case of hiring, employment agencies often serve this function for infrequent decisionmakers. The agencies incur the costs of identifying the better members of a large variance population and spread this cost across placing numerous hires with smaller firms. Agencies can be compensated either by employers or job seekers or both.

With respect to consumer products, consider the marketing of used cars. The summary measure—stereotype—is that used cars are inferior. In fact, some are of very high quality.   Some car dealers have identified higher quality used cars and market them as “certified” merchandise guaranteed to meet certain standards. And the chain used car dealership CarMax has developed a reputation of offering better used vehicles with limited warranties.

In summary, decisionmakers can make rational decisions based on stereotypes under a wide variety of circumstances. Exceptions may occur when decisions are made infrequently or the decisionmaker would incur extensive losses should a poor outcome occur. In such cases, decisions are either not made or a substitute, typically higher-cost, approach is adopted.

Decisions based on stereotypes also may not be considered fair to those individuals affected by the decisions. However, natural economic incentives exist to encourage development and adoption of better decision-making criteria. Indeed, a significant business opportunity exists for anyone who can develop improved selection measures where population variances are large. Although adopting better measures will in general improve decision outcomes for decisionmakers and for better than average members of stereotyped groups, inferior members of these groups can be expected to fare worse as they are no longer shielded by the fog of uncertainty.

Mr. Hoffer is a transportation economist, formerly with the Federal Aviation Administration.  Contact him at snhoffer@aol.com.  See the Contributors page for more about Mr. Hoffer.

The Enduring Legacy of Henry Manne: A Review of the 2016 Law & Economics Center Conference – By Theodore A. Gebhard — January 26, 2016


On Friday, January 22, I attended the Fifth Annual Henry G. Manne Law & Economics Conference, which was sponsored by the Law & Economics Center at George Mason University.  The Conference was held in conjunction with the Twelfth Annual Symposium of the Journal of Law, Economics, & Policy, a publication of the GMU School of Law.  This year’s Conference was entitled, “The Enduring Legacy of Henry G. Manne,” and featured three panels of academic experts, all of whose research draws substantially on the work of the late Henry Manne.  Manne was the longtime Dean of the Law School and a trailblazing scholar of corporate governance and corporate finance.

Regrettably, owing to inclement weather, the day’s program had to be truncated, including dropping the scheduled keynote luncheon speech by former Securities and Exchange Commission Commissioner Kathleen Casey.  Even under the shortened time frame, however, the panel discussions were thorough and highly informative.

Panel 1:

The first panel focused on Manne’s seminal 1965 Journal of Political Economy article, “Mergers and the Market for Corporate Control.”  Speakers included GMU business professor Bernard Sharfman and University of Chicago law professor Todd Henderson.

In the JPE article, Manne developed the then-novel insight that when a corporation is afflicted with inefficiencies owing to poor management, an incentive is created for others to take control of the corporation, eliminate the managerial inefficiencies, and be rewarded with an increase in share price.  What this means is that there is a functioning market for corporate control.

Drawing on this insight, Professor Sharfman considered whether activist investors might be able to perform the same function, specifically activist hedge funds.  One key difference between activist investors and take-over investors is that the former typically are not able to obtain a controlling interest in a corporation.  Although the interest can be significant, it falls short of the authority to dictate managerial changes.  Therefore, when activist investors see managerial inefficiencies, they must rely principally on persuasion to influence corrective action.

Corporate boards, however, often, if not most of the time, resist this activism.  In some instances, the boards might go so far as to sue in court for relief.  When this occurs, the courts are bound by the “business judgment rule,” which provides for deference to the decisions of corporate boards.  Sharfman contends that, although the business judgment rule is based on solid grounds and usually works well as a legal rule, it fails under the circumstances just described, i.e., when there are managerial inefficiencies but activist investors are unable to obtain a controlling interest in the company.  Sharfman concludes, therefore, that it might be time for the courts to carve out, albeit carefully, an exception to the business judgment rule in cases where the evidence points to no plausible business reason to reject the activists’ position.  In this circumstance, a court can find that the board’s resistance likely owes to no more than an attempt to protect an entrenched management.

Building on Manne’s insight of the existence of a “market” for corporate control, Professor Henderson considered the possibility of such diverse hypothetical markets as (1) markets for corporate board services, (2) markets for paternalism and altruism, and (3) markets for trust.  In the first instance, Henderson posited the possibility that shareholders simply contract out board services rather than having a board solely dedicated to one company.   So, for example, persons with requisite expertise could organize into select board-size groups and compete with other such groups to offer board services to the shareholders of any number of separate corporations.

In the second instance, Henderson, noting the growing modern viewpoint that companies have paternalistic obligations toward stakeholders that go beyond shareholder interests, suggested that the emergence of a competitive market to meet such obligations would likely be superior to relying on evolving government mandates.  Competitive markets, for example, would avoid delivering “one size fits all” services and, in so doing, be better able to delineate beneficiary groups on the basis of their specific needs, i.e., needs common within a group but diverse across groups.  Inefficient cross-subsidization could thus be mitigated.  In this same vein, Henderson suggested the possibility of a market for the delivery of altruistic services.  He noted that the public is increasingly demanding that corporations, governments, and non-profits engage in activities deemed to be socially desirable.  As with the provision of paternalism, competitively supplied altruism whereby companies, governments, and non-profits comprise the incumbent players would yield the positive attributes of competition.  These would include the emergence of alternative mixes of altruistic services tailored to the specific needs of beneficiaries and efficient, low cost production and delivery of those services.

In the third instance, Henderson posited the idea of a market for trust.  Here he offered the example of the ride-sharing company, Uber.  Henderson suggested that Uber not only competes with traditional taxis, but, perhaps more importantly, competes with local taxi commissions.  Taxi commissions exist to assure the riding public that it will be safe when hiring a taxi.  Toward that end, taxis are typically required to have a picture of the driver and an identifying number on display, be in a well maintained condition, and have certain other safety features.  All of these things are intended to generate a level of trust that a ride will be safe and uneventful.  According to Henderson, Uber’s challenge is to secure a similar level of trust among its potential customers.  New companies shaking up other traditional service industries face the same challenge.  Henderson concludes, therefore, that these situations open up entrepreneurial opportunities to supply “trust.”  Although Henderson did not use the example of UL certification, that analogy came to mind.  So, for example, there might be a private UL-type entity in the business of certifying that ride sharing (or any other new service company) is trustworthy.

In commenting on Panel 1, Bruce Kobayashi, a GMU law professor and former Justice Department antitrust economist, offered one of the more interesting observations of the day.  Professor Kobayashi reminded the audience that Manne’s concern in his JPE article was principally directed at antitrust enforcement, not corporate law.  In particular, Manne argued that the elimination of managerial inefficiency should rightly be counted as a favorable factor in an antitrust analysis of a merger.  In fact, however, although the DOJ/FTC Horizontal Merger Guidelines allow for cognizable, merger-specific efficiencies to be incorporated into the analysis of net competitive effects, the agencies historically only consider production and distribution cost savings, not the likelihood of gains to be had from jettisoning bad management.  Kobayashi suggested that this gap in the analysis may be due simply to the compartmentalization of economists among individual specialties.  For example, in his experience, he rarely sees industrial organization (antitrust) economists interacting professionally with economists who study corporate governance.  Thus, because Manne’s article, through the years, has come to be classified (incorrectly) as solely a corporate law article, its insights have unjustifiably escaped the attention of antitrust enforcers.

Panel 2:

Panel 2 focused on Henry Manne’s seminal insights about “insider trading.”  Manne was the first to observe that, notwithstanding the instinctive negative reaction of many, if not most, people to insider trading (“It’s just not right!”), the practice actually has beneficial effects in terms of economic efficiency.  By more quickly incorporating new information about a business’s prospects into share price, insider trading can accelerate the movement of that price toward a market clearing level, which signals a truer value of a company and thus enhances allocative efficiency in capital flows.

The two principal speakers on Panel 2 were Kenneth Rosen, a law professor at the University of Alabama, and John Anderson, a professor at the Mississippi College School of Law.  I will limit my comments to Anderson.

In addition to being a lawyer, Professor Anderson is a philosopher by training, holding a Ph.D. in that subject.  He began his presentation by noting that ethical claims are often vague in a way that economic analysis, given its focus on efficiency, is not.  Although in any empirical study, there can be problems with finding good data and there can be measurement difficulties, the analytical framework of economics rests on an objective standard.  Either a given behavior increases efficiency, reduces efficiency, or is benign toward efficiency.  Anderson also observed that ethics merely sets goals, while economic analysis determines the best (i.e., most efficient) means to achieve those goals.

Putting these distinctions into the context of insider trading, Anderson finds that insider trading laws and enforcement of those laws are likely overreaching.  The current statutory scheme rests largely on ethical notions, namely the issue of unfairness (“It’s just not right!”).   As such, it rests on vague standards.  The result is a loss of economic efficiency and costly over-compliance with the laws.  In the end, not only are shareholders hurt (e.g., by costly compliance and litigation), but the larger investing public is also harmed owing to slower price adjustments.  Anderson made the point that, although acts of greed may be bad for individual character, such acts are not necessarily bad for the entire community.

In concluding his presentation, Anderson proposed that some form of licensing of insider trading might best accommodate the competing ethical and efficiency goals.  Under a licensing system, insider trading could be permissible in certain circumstances, but under full transparency.

Panel 3:

Panel 3 considered the effects of required disclosures under federal security laws.  The two principal speakers were Houman Shadab of New York Law School and Brian Mannix of George Washington University.

Despite being a novice in the subject matter of the day, I felt that I followed the discussions of the first two panels reasonably well.  Panel 3, however, was difficult for me, as the speakers made frequent references to statutory provisions and SEC interpretations of relevant security law, with which I have no familiarity.  Nonetheless, a portion of the discussion intrigued me.

In particular, Professor Mannix discussed the issue of high frequency trading (HFT), a hot topic just now because of Michael Lewis’s recent book, Flash Boys.  As I understand it, computer technology makes it possible for trades to take place within microseconds.  With evermore sophisticated algorithms, traders can attempt to beat each other to the punch and arbitrage gains even at incredibly small price differences.

Of particular interest to regulators is the likelihood of a tradeoff inherent in HFT that determines net efficiency effects.  On the one hand, HFT has the potential to enhance efficiency by accelerating the movement of a share price to its equilibrium.  Given that this movement is tiny and occurs within a microsecond, however, any efficiency gain is likely to be very slight.

On the other hand, a lot of costly effort goes into developing and deploying HFT algorithms.  Yet, much of the payoff may be no more than a rearrangement of the way the arbitrage pie is sliced rather than any enlargement of the pie.  If so, the costs incurred to win a larger slice of the pie, costs without attendant social wealth creation, likely exceed any efficiency gain.  It may make sense then for regulators to create some impediments to HFT.  Toward this end, Mannix offered some possible ways to make HFT less desirable to its practitioners.  The most intriguing was to put into place technology that would randomly disrupt HFT trades.  Key to profiting substantially from HFT is the need to trade in very large share volumes because the price differences over which the arbitrage takes place are so small.  Random disruptions would make such big bets riskier.*

Final Thoughts:

All in all, I found each of the panels to be highly informative.  The cast of speakers was well selected, and each presentation was well made.  Significantly, each panel did a very good job of tying its discussion to Henry Manne’s work and influence.  In addition to learning a great deal about current hot issues in corporate law and how economic analysis informs those issues, conference attendees surely left with an even higher appreciation of Henry Manne.

On the logistical front, in light of last minute adjustments necessitated by weather conditions, the organizers pulled off the conference flawlessly.  A high standard was set that will be difficult for the organizers of next year’s offering to surpass.

Finally, on a personal note, Henry Manne was my law school Dean at GMU and also a neighbor for many years in my condominium.  I remember Henry most, however, because of the dramatic impact that his unique curriculum at GMU had on my intellectual development.  That curriculum, which emphasized the application of economic analysis to the law, literally changed the way I think about the world.  For this reason, I was especially pleased that each of the speakers at the Conference took time to comment on the influence that Manne had on them.  Some had never met Manne in person, but nonetheless were deeply influenced by his scholarly work.  Others who knew Manne more intimately related some wonderful anecdotes.  Those alone would have made the day worthwhile.

Notes:

* For a further and more detailed explanation of the tradeoffs inherent in HFT, readers are encouraged to see a blog post by Todd Zywicki, the Executive Director of the LEC, which appeared here.

Theodore A. Gebhard is a law & economics consultant residing in Arlington, Virginia.  See the Contributors page for more about Mr. Gebhard.  Contact him at theodore.gebhard@aol.com.  For more about Henry Manne, see the several tributes to him upon his passing on David Henderson’s blog, including one by Mr. Gebhard.

Revenue Generation from Law Enforcement—Unintended Consequences? – By Stefan N. Hoffer — January 24, 2016


Should law enforcement be conducted with an objective of maximizing public revenues? Should police departments be allowed to retain revenues they generate through fines and forfeitures?  An argument might be advanced that pursuing an objective of revenue generation will encourage law enforcement to enforce the law more effectively, particularly if revenues generated are returned to law enforcement agencies.  This essay argues that such policies may have surprising and unintended results in many common, everyday situations.

From time to time there are reports in the media of law enforcement seizing property using asset forfeiture laws. The crimes involved are typically serious, and convicted violators are subject to severe penalties including large fines. These actions occur in situations where it is difficult and expensive to identify offenders correctly given the large pool of unidentified offenders.  Because of the difficulty and expense involved, pursuing an objective of maximizing revenues and then recycling them back into enforcement activity would create incentives for enhanced enforcement efforts against a seemingly inexhaustible pool of offenders.

Although this result may be true under circumstances where the pool of offenders remains large, it would be too easy to conclude that attempting to maximize revenues will universally result in improved enforcement. Instead, let us consider situations where it is easy to identify and fine most offenders correctly.  In such situations, it may well be that pursuing a revenue objective can lead to routine, systematic under enforcement of everyday laws—for example, traffic regulations.

To see how this might occur, consider a nobleman who is the owner of a hunting preserve. The nobleman knows that to ensure a perpetual supply of game, he must not over-hunt the preserve.  If he does, the volume of game will decline and in the extreme become non-existent.  The “wise” nobleman will only engage in limited hunting so as to ensure a continuous stream of game in perpetuity.

By analogy, a traffic intersection controlled by signals or a road with high occupancy vehicle (HOV) lanes can be thought of as a hunting preserve. If law enforcement seeks to eliminate red light running or HOV violators, it can do so by rigorous enforcement—the violators are hunted to extinction.  Once the consideration of revenue generation is introduced, however, there becomes a strong incentive to under-enforce the law.  The rationale is simple:  if you enforce to the extent that there are few violators, there will be little revenue.

More specifically, if revenue maximization becomes a significant consideration, there will be a level of enforcement short of complete enforcement that will maximize revenue. Enforcing more rigorously, say more days per month, will, other things constant, generate more revenue.  But, other things are not constant.  As the number of enforcement days grows, the number of violators on any one day to be caught will decline—people will learn that if they commit violations they will likely be caught.  At some point, the decline in violators will just offset the gain from enforcing one more day per month.
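A stylized sketch of that marginal tradeoff, with an invented deterrence response: if the number of violators caught per enforcement day falls as enforcement becomes routine, monthly revenue peaks at an interior number of enforcement days rather than under full enforcement.

```python
FINE = 150             # hypothetical fine per ticket
BASE_VIOLATORS = 400   # violators per enforcement day when enforcement is rare
DETERRENCE = 12        # assumed drop in daily violators per additional enforcement day

def monthly_revenue(days: int) -> int:
    # Violators caught per day decline as drivers learn enforcement is routine.
    violators_per_day = max(BASE_VIOLATORS - DETERRENCE * days, 0)
    return FINE * violators_per_day * days

revenue_by_days = {d: monthly_revenue(d) for d in range(31)}
best = max(revenue_by_days, key=revenue_by_days.get)
print(f"revenue-maximizing enforcement days per month: {best}")
print(f"revenue at that level:            {revenue_by_days[best]:,}")
print(f"revenue under 30-day enforcement: {revenue_by_days[30]:,}")
```

Under these made-up parameters, the revenue-maximizing choice is roughly seventeen enforcement days, while near-complete enforcement yields far less revenue, which is exactly the under-enforcement incentive the essay describes.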

Casual observation lends support for the notion that revenue maximization can lead to under enforcement:

Interstate 66 HOV Lanes Inside the Washington, D.C. Beltway.

The segment of I-66 between Washington, D.C. and the beltway around the city has HOV lanes. Vehicles using these lanes during commuting hours are required to have a minimum number of occupants.  Those that do not are subject to a fine.  Because access to I-66 is controlled, enforcement is straightforward.  Police stationed at the exits can easily stop cars that do not have the minimum number of riders and issue tickets—and they do, but not on all days or even many days.  As a consequence, violators are numerous, and road congestion and delay are more extensive than they would be if most violators were eliminated.  This raises the question of why there is not more aggressive enforcement.  A typical governmental response is that limited resources do not allow it.  But police departments, like other government service providers, do not staff for the mean demand for their services, but rather for a certain percentile of the peak.*  This strategy allows the police to meet most, but not all, randomly occurring emergencies.  It also means that most of the time the police department will have excess staff on duty.  So why can these extra staff not be utilized, absent an emergency, for duties such as enforcing HOV regulations?  As suggested, a reason consistent with observed behavior is that more revenue will be generated by enforcing such regulations only sporadically, thus ensuring that there are plenty of violators to be fined when enforcement does take place.

Red Light Cameras in Central Florida.

Several small and medium-sized cities in central Florida have installed “red light cameras” to ticket motorists that run red lights. The cameras are typically provided by a private contractor who installs and operates them in cooperation with law enforcement.  The cities share the revenue that the cameras generate with the private contractor, subject to a minimum guarantee that the cities pay the contractor irrespective of revenues collected.  Initially these cameras made most jurisdictions significant amounts of money.  But, over a period of time, they also drastically reduced the number of red light runners.  Revenues from the cameras precipitously declined, in some cases to the point that some local jurisdictions were losing money on them after meeting their contractual guarantee to the commercial provider.  As a result, some jurisdictions have not renewed their contracts and are removing the cameras, sometimes even after the private contractor has offered to reduce substantially the minimum payment guarantee. Other jurisdictions have accepted the idea that the cameras improve public safety and that this improved safety comes at a cost—what they must pay the private contractor over and above revenues generated so as to meet the payment guarantees.

As outlined above, a revenue maximizing approach to law enforcement can have unintended and surprising consequences. In situations where it is difficult to identify and catch offenders, it can provide resources and incentives for aggressive enforcement.  In other situations, where potential offenders can be easily and accurately identified and caught, focusing on revenue maximization has the potential to lead to deliberate, significant under enforcement.

*  The Federal Aviation Administration, for example, uses this model for staffing air traffic controllers.  See the FAA’s document, A Plan for the Future: 10-Year Strategy for the Air Traffic Control Workforce, at page 20, available here.

Stefan N. Hoffer is a transportation economist, formerly with the Federal Aviation Administration. His areas of specialization include benefit-cost analysis and valuation of non-market traded items.  See the Contributors page for more about Mr. Hoffer.  Contact him at snhoffer@aol.com.

 

Forecasting Trends in Highly Complex Systems: A Case for Humility – By Theodore A. Gebhard — June 20, 2015


One can readily cite examples of gross inaccuracies in government macroeconomic forecasting.  Some of these inaccurate forecasts have been critical to policy formation that ultimately produced unintended and undesirable results.  (See, e.g., Professor Edward Lazear, “Government Forecasters Might as Well Use a Ouija Board,” Wall Street Journal, Oct. 16, 2014)  Likewise, the accuracy of forecasts of long-term global warming is coming under increasing scrutiny, at least among some climate scientists.  Second-looks are suggesting that climate science is anything but “settled.” (See, e.g., Dr. Steven Koonin, “Climate Science and Interpreting Very Complex Systems,” Wall Street Journal, Sept. 20, 2014)  Indeed, there are legitimate concerns about the ability to forecast directions in the macro-economy or long-term climate change reliably.  These concerns, in turn, argue for government officials, political leaders, and others to exercise a degree of humility when calling for urgent government action in either of these areas.  Without such humility, there is the risk of jumping into long-term policy commitments that may in the end prove to be substantially more costly than beneficial.

A common factor in macroeconomic and long-term climate forecasting is that both deal with highly complex systems.  When modeling such systems, attempts to capture all of the important variables believed to have a significant explanatory effect on the forecast prove to be incredibly difficult, if not entirely a fool’s errand.  Not only are there many known candidates, there are likely many more unknown ones.  In addition, specifying functional forms that accurately represent the relationships between the explanatory variables is similarly elusive.  Simple approximations based on theory are probably the best that can be achieved.  Failure to solve these problems (omitted explanatory variables and incorrect functional forms) will seriously confound the statistical reliability of the estimated coefficients and, hence, of any forecasts made from those estimates.

Inherent in macroeconomic forecasting is an additional complication.  Unlike models of the physical world where the data are insentient and relationships among variables are fixed in nature, computer models of the economy depend on data samples generated by motivated human action and relationships among variables that are anything but fixed over time.  Human beings have preferences, consumption patterns, and levels of risk acceptance that regularly change.  This constant change makes coefficient estimates derived from historical data prone to being highly unsound bases on which to forecast the future.  Moreover, there is little hope for improved reliability over time so long as human beings remain sentient actors.

By contrast, models of the physical world, such as climate science models, rely on unmotivated data and relationships among variables that are fixed in nature.  Unlike human beings, carbon dioxide molecules do not have changing tastes or preferences.  At least in principle, as climate science advances over time with better data quality, better identification of explanatory variables, and better understanding of the relationships among those variables, the forecasting accuracy of climate change models should improve.   Notwithstanding this promise, however, long-term climate forecasts remain problematic at present.  (See Koonin article linked above.)

Given the difficulty of modeling highly complex systems, it would seem that recent statements by some of our political, economic, and even religious leaders are overwrought.  President Obama and Pope Francis, for example, have claimed that climate change is among mankind’s most pressing problems.  (See here and here.)  They arrived at their views by dint of forecasts that predict significant climate change owing to human activity.  Each has urged that developed nations take dramatic steps to alter their energy mixes.  Similarly, the world’s central bankers, such as those at the Federal Reserve, the European Central Bank, the Bank of Japan, and the International Monetary Fund, regularly claim that their historically aggressive policies in the aftermath of the 2008 financial crisis are well grounded in what their elaborate computer models generate and, hence, are necessary and proper for the times.  In their view, therefore, any attempts to modify the independence of these institutions to pursue those policies should be resisted, notwithstanding that the final outcome of these historic and unprecedented policies is yet unknown.

It is simply not possible, however, to have much confidence in any of these claims.  The macroeconomic and climate systems are too complex to be captured well in any computer model, and forecasts derived from such models therefore are highly suspect.  At the least, a prudent level of humility and a considerable degree of caution are in order among government planners, certainly before they pursue policies that risk irreversible, unintended, and potentially very costly consequences.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

Is Economics a Science? – By Theodore A. Gebhard — May 15, 2015

Is Economics a Science? – By Theodore A. Gebhard

The great 20th Century philosopher of science, Karl Popper, famously defined a scientific question as one that can be framed as a falsifiable hypothesis.  Economics cannot satisfy that criterion.  No matter the mathematical rigor and internal logic of any theoretical proposition in economics, empirically testing it by means of econometrics necessarily requires that the regression equations contain stochastic elements to account for the complexity that characterizes the real world economy.  Specifically, the stochastic component accounts for all of the innumerable unknown and unmeasurable factors that cannot be precisely identified but nonetheless influence the economic variable being studied or forecasted.
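In stylized form – a generic specification offered for illustration, not any particular model the author has in mind – such a regression might be written as

    y_t = \beta_0 + \beta_1 x_{1,t} + \cdots + \beta_k x_{k,t} + \varepsilon_t ,

where the stochastic component \varepsilon_t stands in for every influence on y_t that the model does not identify or measure.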

What this means is that economists need never concede that a theory is wrong when their predictions fail to materialize.  There is always the ready excuse that the erroneous predictions were the fault of “noise” in the data, i.e., the stochastic component, not the theory itself.  It is hardly surprising then that economic theories almost never die and, even if they lie dormant for a while, find new life whenever proponents see opportunities to resurrect their pet views.  Since the 2008 financial crisis, even Nobel Prize winners can be seen dueling over macroeconomic policy while drawing on theories long thought to be buried.

A further consequence of the inability to falsify an economic theory is that economics orthodoxy is likely to survive indefinitely irrespective of its inability to generate reliable predictions on a consistent basis.  As Thomas Kuhn, another notable 20th Century philosopher of science, observed, scientific orthodoxy periodically undergoes revolutionary change whenever a critical mass of real world phenomena can no longer be explained by that orthodoxy.  The old orthodoxy must give way, and a new orthodoxy emerges.  Physics, for example, has undergone several such periodic revolutions.

It is clear, however, that, because economists never have to admit error in their pet theories, economics is not subject to a Kuhnian revolution.  Although there is much reason to believe that such a revolution is well overdue in economics, graduate student training in core neoclassical theory persists and is likely to persist for the foreseeable future, notwithstanding its failure to predict the events of 2008.  There are simply too few internal pressures to change the established paradigm.

All of this would be of little consequence if mainstream economists simply talked to one another or published their econometric estimates in academic journals merely as a means to obtain promotion and tenure.  The problem, however, is that the cachet of a Nobel Prize in Economic Science and the illusion of scientific method permit practitioners to market their pet ideological values as the product of science and to insert themselves into policy-making as expert advisors.  Significantly in this regard, econometric modeling is no longer chiefly confined to generating macroeconomic forecasts.  Increasingly, econometric forecasts are used as inputs into microeconomic policy-making affecting specific markets or groups, and they are even introduced as evidence in courtrooms where individual litigants have much at stake.  However, most policy-makers – let alone judges, lawyers, and other lay consumers of those forecasts – are not well-equipped to evaluate their reliability or to assign appropriate weight to them.  This situation creates the risk that value-laden theories and unreliable econometric predictions will play a larger role in microeconomic policy-making, just as they do in macroeconomic policy-making, than can be justified by their purported “scientific” foundations.

To be sure, economic theories can be immensely valuable in focusing one’s thinking about the economic world.  As Friedrich Hayek taught us, however, although good economics can say a lot about tendencies among economic variables (an important achievement), economics cannot do much more.  As such, the naive pursuit of precision by means of econometric modeling —  especially as applied to public policy — is fraught with danger and can only deepen well-deserved public skepticism about economists and economics.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

Economics and Transparency in Antitrust Policy – By Theodore A. Gebhard — April 28, 2015

Economics and Transparency in Antitrust Policy – By Theodore A. Gebhard

A significant turning point in antitrust thinking began in the mid-1970s with the formal integration of microeconomic analysis into both antitrust policy and antitrust litigation.  At that time, the Department of Justice and the Federal Trade Commission dramatically expanded their in-house economics staffs and ever since have increasingly relied on those staffs for strategic advice as well as technical analysis in policy and litigation.

For the most part, this integration of economics into antitrust thinking has been highly positive.  It has been instrumental in ensuring that the antitrust laws focus on what they are intended to do – promote consumer welfare.  Forty years later, however, economics has gone beyond its role as the intellectual undergirding of antitrust policy.  Today, no litigant tries an antitrust case without one or more economists as expert witnesses, and economic analysis has become the dominant evidence in antitrust enforcement.  In this regard, the pendulum may have swung too far.

Prior to the mid-1970s, economists, though creating a sizable academic literature, were largely absent in setting antitrust policy and rarely participated in litigation.  The result was that, for much of the history of antitrust, the enforcement agencies and the courts often condemned business practices that intuitively looked bad, but without much further consideration.  Good economics, however, is sometimes counter-intuitive.  Many of these older decisions did more to protect competitors from legitimate competition than protect competition itself.  Integrating sound economic thinking into enforcement policy was thus an important corrective.

Economic thinking has been most impactful on antitrust policy in two areas: unilateral business conduct and horizontal mergers.  Older antitrust thinking often conflated protecting competitors with protecting competition.  The most devastating critique of this confusion came from the so-called “Chicago School” of economics, and manifested itself to the larger antitrust legal community through Robert Bork’s seminal 1978 book, The Antitrust Paradox.  It is hard to exaggerate the impact that this book had on enforcement policy and on the courts.  Today, it is rare that unilateral conduct is challenged successfully, the courts having placed a de facto presumption of legality on such conduct and a heavy burden on plaintiffs to show otherwise.

Horizontal merger policy likewise had a checkered history prior to the mid-1970s.  Basically, any merger that increased market concentration, even if only slightly, was considered bad.  The courts by and large rubber-stamped this view.  This rigid thinking began to change, however, with the expanded roles of the economists at the DOJ and FTC.  The economists pointed out that, although a change in market concentration is important, it is not dispositive in assessing whether a merger is anticompetitive.  Other factors must be considered, such as the incentives for outside firms to divert existing capacity into the relevant market, the degree to which there are barriers to the entry of new capacity, the potential for the merger to create efficiencies, and the ability of post-merger firms to coordinate pricing.  Consideration of each of these economic factors was eventually formalized in merger guidelines issued in 1982 by the Reagan Administration’s DOJ.  Ten years later, the FTC joined the DOJ in issuing revised guidelines, which were also expanded to reach mergers that might be anticompetitive even absent coordinated pricing among the remaining firms.

Each of these developments led to far more sensible antitrust policy over the past four decades.  Today, however, economic thinking no longer merely provides broad policy guidance but, in the form of highly sophisticated statistical modeling, increasingly serves as the principal evidence in specific cases.  Here, policy-making may now be exceeding the limits of economic science.  Friedrich Hayek famously described the difference between science and scientism, noting the pretense involved in believing that economics can generate the kind of precision that the natural sciences can.  Yet the enforcement agencies are approaching a point where their econometric analysis of market data may, in certain instances, be considered sufficiently “scientific” to determine enforcement decisions without the need to know much else about the businesses or products at issue.

Much of this is driven by advancements in cheap computing coincident with the widespread adoption of electronic data storage by businesses.  These developments have yielded a rich set of market data that can be readily obtained by subpoena, coupled with the ability to use that data as input into econometric estimation that can be done cheaply on a desktop.  So, for example, if it is possible to estimate the competitive effects of a merger directly, why bother with more traditional (and tedious) methodology that includes defining relevant markets and calculating concentration indexes?  In principle, even traditional documentary and testimonial evidence might be dispensed with, being unnecessary when there is hard “scientific” evidence available.
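For concreteness, the concentration index most commonly used in that traditional methodology, the Herfindahl-Hirschman Index (HHI), is simple arithmetic: square each firm’s market share, expressed in percentage points, and add up the squares.  The sketch below, written in Python with market shares invented purely for illustration, shows the calculation and the increase in the index implied by a hypothetical merger:

    # Hypothetical illustration of the traditional concentration screen.
    # The HHI is the sum of squared market shares (in percentage points);
    # a merger of two firms raises it by 2 * share_i * share_j, other shares fixed.
    def hhi(shares):
        return sum(s * s for s in shares)

    pre_merger = [30, 25, 20, 15, 10]        # invented market shares (percent)
    post_merger = [30 + 25, 20, 15, 10]      # the 30% and 25% firms merge

    print("pre-merger HHI: ", hhi(pre_merger))                     # 2250
    print("post-merger HHI:", hhi(post_merger))                    # 3750
    print("change in HHI:  ", hhi(post_merger) - hhi(pre_merger))  # 1500 = 2*30*25

Under the agencies’ 2010 guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points is treated as presumptively likely to enhance market power, so a merger like the hypothetical one above would draw close scrutiny under the traditional screen.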

This econometrics-first view is worrisome for two reasons.  The first is the Hayekian concern, already noted, about the pretense of economic precision.  Any good statistician will tell you that econometrics is as much art as science.  The second, and equally important, worry is that antitrust enforcement policy is becoming too arcane in its attempt to be ever more economically sophisticated.  It is increasingly difficult for businesspersons and their counsel to evaluate whether some specific conduct or transaction could be challenged, which makes even lawful business strategies riskier.  A basic principle of the rule of law is that the law must be understandable to those subject to it.

Regrettably, the Obama Administration has exacerbated this problem.  For example, some officials have expressed sympathy for so-called “Post-Chicago Economics,” whose proponents have set out highly stylized models that purport to show the possibility of anticompetitive harm from conduct that antitrust law has not yet reached.  Administration officials also rescinded a 2008 Bush Administration report that attempted to lay out clearer guidelines regarding when unilateral conduct might be challenged.  Although these developments have been mostly talk and not much action in the way of bringing novel cases, even mere talk increases legal uncertainty.

The Administration’s merger policy actions are more concrete.  In 2010, the DOJ and FTC issued new guidelines that, in an effort to be even more comprehensive, multiplied the number of variables that can be considered in merger analysis.  In some instances, these variables resist reliable measurement and relative weighting.  The consequence is that the new guidelines largely defeat the purpose of having guidelines – helping firms assess whether a prospective merger will be challenged.  Firms considering a merger must therefore often proceed in the face of substantially more legal uncertainty and must expend substantial funds on attorneys and consultants to navigate the maze of the guidelines.  These factors likely deter at least some procompetitive mergers, sacrificing the social gains those mergers would have produced.

Antitrust policy certainly must remain grounded in good economics, and economic analysis is certainly probative evidence in individual cases.  But it is nonetheless appropriate to keep in mind that no legal regime can achieve perfection, and the marginal benefits from efforts to obtain ever greater economic sophistication must be weighed against the marginal costs of doing so.  When litigation devolves into simply a battle of expert witnesses whose testimony is based on arcane modeling that neither judges nor business litigants grasp well, something is wrong.

It is time to consider a modest return to simpler and more transparent enforcement policy that relies less on black-box economics pretending to be more scientific than it really is.  To be sure, clearer enforcement rules would not be without enforcement risk.  Some anticompetitive transactions could escape challenge.  But allowing procompetitive transactions that otherwise might have been deterred would be a social gain.  Moreover, substantial social cost savings can be expected when business decisions are made under greater legal clarity, when antitrust enforcement is administered more efficiently, and when litigation costs are substantially lower.  The goal of antitrust policy should not be perfection, but to maintain an acceptable level of workable competition within markets while minimizing the costs of doing so.  Simpler, clearer rules are the route to this end.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.