Vienna Woods Law & Economics

Blog focused on issues in law, economics, and public policy.

Europe’s Latest Antitrust Fine Treads on U.S. Economy, Sovereignty – By Asheesh Agarwal — October 14, 2025

[Note: This post originally appeared at DC Journal on Oct. 13, 2025.]

The European Union is taking upon itself the authority to restructure the U.S. economy. In its latest move, the EU imposed a staggering $3.45 billion fine on Google for alleged antitrust violations and signaled that “only the divestment” of Google’s components would resolve the matter. In the meantime, France fined Google 325 million euros over privacy issues. These moves raise concerns about Europe’s motives, opacity, disregard for international comity, and broader agenda to target successful American companies.

For the United States, the moves raise the prospect that foreign regulators, motivated by protectionist impulses and budgetary constraints, may assume the authority to restructure innovative American companies.  As President Trump wrote in a message that should resonate across the Atlantic, “We cannot let this happen to brilliant and unprecedented American ingenuity.”

Europe’s latest fines are part of a pattern in which it deliberately uses exorbitant penalties to transfer wealth from American companies, workers and shareholders to European coffers. As outlined in an extensive study, the Europeans afford themselves “significant fining authority” and “maximum discretion” to impose billions of dollars in fines on American companies based on ambiguous statutes.  To date, the “fines against American companies have been orders of magnitude larger than those imposed on domestic firms.”

As Trump pointed out, Google has paid “$13 Billion Dollars in false claims … How crazy is that?”

Indeed, the fines’ size and opacity call into question whether U.S. firms in Europe can receive even a semblance of due process. Despite issuing a multibillion-dollar fine, the EU provided almost no clarity on its calculation, saying only that it “considered various elements,” such as “the duration and gravity of the infringement,” Google’s European ad turnover, and past fines on Google.  

The EU, however, never explained whether the fine correlated with actual consumer harm or higher prices. Fines of this magnitude, untethered to demonstrable harm, appear to transform enforcement actions into revenue-generating schemes. The EU’s lack of transparency undermines trust and raises questions about its commitment to due process.

Even more concerning is the EU’s desire to break apart Google as a remedy. Such an unprecedented move would shatter norms of international law enforcement comity. The United States, home to Google and its parent company, Alphabet, has a robust antitrust framework. 

Emphatically, it is not the place of Europe to dictate the structure of an American company, especially when U.S. courts and regulators are actively considering similar issues. The EU’s overreach infringes on U.S. sovereignty, creating a chaotic and fragmented regulatory environment.

Notably, the EU is imposing these drastic remedies for conduct that is, at worst, ambiguous from a competitive standpoint. The conduct at the heart of the EU’s case — so-called “self-preferencing” — is a common and often pro-competitive practice among vertically integrated companies. Retailers promote their private-label products, streaming platforms highlight their original content, and countless other businesses engage in similar practices. While it is true that a federal court has found Google’s practices problematic (a decision subject to appeal), any remedies should tie directly to consumer harm and consider that similar conduct has often been found to be pro-competitive.

Beyond the facts of this case, the fine raises broader concerns for the Western alliance and the race for global technological supremacy. Excessive fines and aggressive regulatory actions risk stifling innovation and investment while benefiting Chinese companies, which often operate with significant state support and fewer regulatory constraints. 

In his AI Action Plan, Trump declared that “it is a national security imperative for the United States to achieve and maintain unquestioned and unchallenged global technological dominance,” a goal that “requires the Federal government to create the conditions where private-sector-led innovation can flourish.”

How can American companies innovate to their fullest potential when foreign regulators threaten them with dismemberment and exorbitant multibillion-dollar fines? At a time when the global tech race is intensifying, the EU’s actions could suppress innovation and investment.

Benjamin Franklin’s famous political cartoon, Join or Die, highlighted the importance of American unity in the face of European incursions. Today, consistent with Trump’s position, U.S. policymakers and private enterprise must stand together in defending America’s economy and sovereignty from discriminatory and excessive regulatory actions.

* Asheesh Agarwal is the president of Agarwal Strategies. He previously served in senior roles at the U.S. Justice Department (DOJ) and the Federal Trade Commission (FTC). He wrote this for InsideSources.com.  See more about Mr. Agarwal here, including his other posts.

FTC Alumni Response to FTC/DOJ RFI on Serial Acquisitions — June 26, 2024

Posted by

Theodore A. Gebhard*

On May 23, 2024, the Department of Justice and the Federal Trade Commission announced that they were jointly launching an inquiry into the potential competitive effects of serial acquisitions and roll-up strategies. The inquiry will assess the potential antitrust liability arising from such acquisitions. The Agencies’ public announcement defines serial acquisitions and roll-up strategies as follows:

“Serial acquisitions and roll-ups are a form of corporate consolidation where a company becomes larger — and potentially dominant — by buying several smaller firms in the same or related sectors or industries.”

The link below is to a letter written by former federal antitrust enforcers to the DOJ and the FTC responding to the Agencies’ request for comments on their inquiry. The former enforcers urge the Agencies to conduct the inquiry in a way that builds confidence in its objectivity and comprehensiveness. I am a signatory to the letter.  I encourage readers of this post to read the letter in its entirety. TAG


FTC Alumni Comments on Serial Acquisitions


[Note: The alumni comments are cross-posted at Truth on the Market.]


* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

FTC Alumni Comments on Proposed Hart-Scott-Rodino Form — September 26, 2023

Posted by

Theodore A. Gebhard

The Biden Federal Trade Commission has proposed revisions to the Hart-Scott-Rodino reporting form required of all parties proposing to merge and/or to acquire the assets of another company, whenever certain threshold metrics are present. In response to this proposal, a number of former FTC officials, of which I am one, submitted comments to the Commission with our views on the proposed revisions.  The former officials suggest several ways by which the FTC could strengthen the evidentiary and legal foundations in support of the proposed revisions.  A link to the submission is below, and I encourage the reader of this post to read it in its entirety. TAG

FTC Alumni Comments on proposed HSR form  

[Note: The submission is also cross-posted at the International Center for Law & Economics and at Liberty & Markets.]

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

FTC Alumni Comments on the Confidentiality of the Agency’s Investigations — April 13, 2023

Posted by

Theodore A. Gebhard*

The link below is to a letter written this month by former FTC officials expressing concern about possible lapses in the Agency’s integrity and fairness in keeping business information confidential during investigations. In several instances, FTC personnel may have leaked to the media confidential information about ongoing investigations, or their analyses of that information. The former officials, of which I am one, urge the Commission to reassure the public, and to remind all agency personnel, that the Agency’s investigations will and must remain confidential. I encourage readers of this post to read the letter in its entirety. TAG

FTC Alumni Comments on Confidentiality and Due process

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

FTC Alumni Comments on the Commission’s Proposed Non-Compete Clause Rule — March 21, 2023

Posted by

Theodore A. Gebhard*

The Federal Trade Commission has begun a rulemaking process with a stated goal of promulgating and implementing a Non-Compete Clause Rule that would prohibit employers from contractually conditioning hiring on an agreement that the employee will not render his or her services to a competing employer, should there come a time when the employee no longer works for the initial employer. In connection with the rulemaking procedure, several former FTC officials, of which I am one, submitted comments to the Commission in which they express a number of concerns with the rulemaking process and with the proposed rule’s potential impact on the FTC’s ability to fulfill its mission. A link to the submission is below, and I encourage readers of this post to read the submission in its entirety. TAG

FTC Alumni Comments on Non-Compete Proposed Rule

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Big Questions About Tech Regulation – By Asheesh Agarwal — January 6, 2023

[Note: This post originally appeared at Law & Liberty on Jan. 4, 2023]

What balance ought we seek in corralling the worst of tech companies while not killing innovation? At a time when everyone has a beef with tech, this question has never been more important. Republicans want tech to allow more speech, Democrats want tech to moderate more content. Competitors want to break up the biggest companies, customers want to pay them less. Tech’s spouse wants it to clean out the garage.

Still, nearly everyone recognizes tech’s value to the economy and to the country. Even as policymakers propose new rules, no one wants to “throw out the baby with the bathwater.” So, what principles should apply to tech regulation? A sixteenth-century German idiom has only limited utility as a touchstone for regulating a quarter of the U.S. economy.

As an alternate framework, policymakers should ask themselves certain key questions as they evaluate new rules. By inquiring about these topics, policymakers could ensure that they tailor any proposals to address genuine gaps in the market and the law and continue to promote innovation and investment.

Are there existing processes in place to corral the perceived troubling behavior?

Before enacting any new law, policymakers should ask themselves whether existing laws and processes already address any troubling behavior. If so, let those processes play out fully and enact new rules only if they fall short, because new rules could have damaging unforeseen and unintended consequences, particularly in such a dynamic and consequential sector of the economy.

When it comes to tech, both market dynamics and current laws are working to corral troubling behavior. In terms of speech, the market is already adjusting to allow more robust discourse online, irrespective of political viewpoint. Elon Musk, a prominent advocate of free speech, has purchased Twitter and released a number of “Twitter Files” revealing how Twitter has amplified or downplayed certain controversial stories. Conservative platforms such as TruthSocial, Gettr, and Parler are widely available and have gained millions of users. Meta itself has acknowledged past mistakes and vowed to improve its transparency.

In terms of competitive concerns, the existing antitrust laws can address any genuine anticompetitive conduct that harms consumers. Every major tech platform—Apple, Amazon, Google, Meta, and Microsoft—is facing significant antitrust litigation, investigation, or both. If a court finds that these companies are harming consumers or reducing innovation, it can impose a range of significant remedies, including treble damages or even partial divestment. Policymakers should enact new laws only if the courts find that these companies are harming competition and that the current laws fail to provide adequate remedies. Otherwise, new rules could cause more harm than good.

Is the proposal good for consumers?

At a very basic level, policymakers should ask themselves whether a given proposal is good for consumers or whether it is mere rent-seeking designed to help some companies at the expense of their competitors and, often, consumers.

For decades, U.S. antitrust law has centered on the venerable, bipartisan “consumer welfare standard,” which evaluates business conduct based on whether the practice helps or harms consumers. Europe, in contrast, uses the “abuse of dominance” standard which, in effect, evaluates business conduct based on whether the practice harms competitors.

U.S. policymakers must maintain the law’s focus on consumers, rather than competitors. This focus encourages innovation and investment by allowing companies to reap the benefits of new products and technologies. It incentivizes them to enter new markets and to out-compete their rivals. 

In sharp contrast, the European standard punishes companies for taking away market share from their competitors. It encourages less successful companies to complain to regulators rather than fight to compete. It lets the government pick winners and losers in the marketplace, irrespective of the merits of a business practice, and often for political reasons.

Unfortunately, the United States is creeping toward adoption of the European standard. Several recent federal and state bills would move American antitrust law closer to the European model. Moreover, the Federal Trade Commission recently adopted policy guidance that expressly embraces the idea that U.S. competition law should protect competitors. American policymakers should resist these proposals at every opportunity.

Does the proposal improve the operation of the marketplace by giving more choice to consumers, or does it threaten the marketplace by giving more power to regulators? 

Virtually every recent policy proposal promises to improve competition and to provide consumers with more choice. Upon closer review, however, most proposals actually would empower government regulators far more than consumers. Many bills would allow, say, the Federal Trade Commission to determine whether, when, where, and how companies can compete; force companies to share sensitive data with competitors; and require them to seek prior approval from regulators for routine business decisions, from when to update their software to whether they can acquire small companies.

In contrast, other proposals would empower consumers by giving them more information and choice. For instance, provisions of state laws in Texas and Florida, and federal bills like the PACT Act, require companies to give their customers information about their content moderation decisions and the right to appeal those decisions internally. Another interesting bill would effectively devolve content moderation decisions to consumers. 

To be sure, each of these proposals may have its own constitutional infirmities given the paramount interests protected by the First Amendment, and the bills ultimately may prove unnecessary as private companies like Meta and Twitter are already adopting some of these concepts on their own. Still, to the extent that policymakers consider any new proposals, they should seek to empower consumers, not regulators.

Would the proposal encourage or discourage investment?

As above, virtually every recent policy proposal promises to enhance innovation—just like every food, beverage, and air freshener promises to help you lose weight and improve your social standing. Buyers should beware. Accordingly, a better, more tailored question is whether a particular proposal would encourage or discourage investment. Innovation is often intangible and amorphous; we can measure investment.

On this score, many recent policy proposals fall short. They expressly prohibit or discourage large companies from entering new markets or from investing in startups. Proponents argue that such limits would help smaller companies, without recognizing that those companies often need venture capital and technical expertise to monetize their technology and to bring new products to market. Throughout history, in both the tech sector and the economy at large, larger companies have helped to spur innovation through such investments.

Instead of limiting investment, policymakers should seek to reduce the cost of capital for newer companies—indeed, ease of financing is a key reason why the U.S. far surpasses Europe in investment and innovation.

How would the proposal affect the ability of the United States to further its interests abroad and to safeguard its national security?

Around the globe, many countries have placed a target on the U.S. tech sector. Europe, the United Kingdom, Turkey, Saudi Arabia, Australia, and many other countries have enacted or are considering new rules that would disadvantage U.S. tech companies. Many proposals stem from protectionist impulses, out of a desire to kickstart domestic tech companies, while others simply seek to transfer wealth from U.S. companies to their foreign counterparts. 

Our domestic policies can influence developments abroad. When Congress and the White House pursue anti-tech policies at home, those arguments undermine U.S. interests overseas. Washington cannot credibly encourage other countries to adopt free market policies even as it seeks to kneecap its own tech companies with aggressive regulation. On the other hand, when Washington embraces traditional principles of antitrust law, and when it allows the market and legal process to play out freely, those actions send powerful messages to other capitals around the world. The same might be said of speech—the more Washington seeks to dictate what private companies must say, the more comfortable other countries may feel in imposing even more significant constraints.

Finally, looming over everything is the specter of China, which wants to use its tech sector to supplant the United States as the world’s leading power. Whatever their complaints, policymakers should ensure that they allow the tech sector the freedom to develop the new technologies that underpin America’s economy and security. At the very least, every tech “reform” bill should undergo a rigorous analysis of its potential impact on national security.

Would the proposal impose unsustainable compliance and litigation costs, especially on smaller companies?

In evaluating tech proposals, at least three truths are self-evident: (a) the law should apply equally to everyone, (b) high litigation and compliance costs discourage innovation, and (c) large companies can absorb regulatory and litigation costs more easily than smaller companies.

Accordingly, policymakers should keep in mind the impact of new proposals on smaller companies and startups. Although seemingly everyone wants to regulate Big Tech, policymakers must consider whether smaller companies can absorb the costs and risks of new proposals, such as changes to Section 230 or forced interoperability among competing platforms. New regulations could strangle startups and smaller companies in their infancy; by definition, these companies have lower revenue and less of a legal and policy infrastructure.

Some recent tech proposals exempted smaller companies through artificial thresholds based on market capitalization or number of users. Whether or not those proposals would have withstood legal scrutiny, they flouted the American legal tradition of equality before the law—if a proposal is good policy, it should apply to everyone.

Bonus question, for tech companies: Are you imposing political policy preferences on the public?

With great power comes great responsibility. 

Much of the concern with tech companies flows from their content moderation decisions, or lack thereof. As policymakers consider various proposals in Washington, it would behoove the companies to ask themselves if they are serving as honest speech brokers or, as Musk’s Twitter Files seem to suggest, if certain companies have used their platforms to tilt the discourse in favor of certain policy preferences. This is particularly true when many of the most controversial speech decisions were made at the behest of government officials who had their own agendas, whether at the FBI, the White House, or the Department of Health and Human Services. To that end, one bill would prohibit government officials from using their authority to influence companies to suppress speech. 

The Supreme Court will soon provide guidance about the scope of legal immunity when tech companies moderate, or fail to moderate, content on their platforms. Regardless of how the Court answers that question, the tech companies should ask themselves whether they are exercising their rights in ways that are consistent with the American tradition of free speech. No doubt the companies have a right to run their businesses as they see fit and a responsibility to maximize shareholder value. Still, although the companies may have the right to suppress certain speech on their platforms, it does not necessarily follow that it is just and proper for them to do so. 

***

As the public, policymakers, and the private sector consider how to balance innovation and regulation, everyone could benefit from a little humility. Policymakers should recognize that sometimes problems can take care of themselves, whether through market forces or existing laws. The private sector should consider that a little transparency and neutrality could go a long way in forestalling the enactment of onerous rules that could hamstring innovation for years to come. 

Should we expect real humility in Silicon Valley and Washington? Who knows—as the Germans might say, man hat schon Pferde kotzen gesehen—“people have seen horses vomit before,” or, “crazier things have happened.”

Asheesh Agarwal is an attorney and an advisor to the American Edge Project and the U.S. Chamber of Commerce.  See more on the Contributors page.

FTC Alumni Open Letter to the Commission — September 20, 2022

Posted by

Theodore A. Gebhard*

The Biden Federal Trade Commission, led by Chairwoman Lina Khan, has taken an unusually expansive position on the scope of the antitrust laws and the Commission’s powers under Section 5 of the FTC Act. The link below is to an open letter written by former FTC officials, of which I am one, to Chairwoman Khan and the other Commissioners. The letter urges the Commission to be cognizant of traditional norms in antitrust enforcement and the limits of its authority, and to be judicious in case selection and in its theories of competitive harm. I encourage the reader of this post to read the letter in its entirety. TAG

FTC Alumni Open Letter to the Commission – final

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Antitrust Scholars and Former Federal Antitrust Enforcers’ Letter to the FTC Expressing Concerns about Altering Enforcement Principles — July 1, 2021

Posted by

Theodore A. Gebhard*

The Biden Federal Trade Commission has proposed modifying, or even possibly rescinding, its Statement of Enforcement Principles On Unfair Methods of Competition under Section 5 of the FTC Act. The link below is to a letter prepared by a number of antitrust scholars and former federal antitrust enforcers commenting on this proposal. The signatories express concern that the Commission is considering a significant shift in enforcement policy and may go so far as to revoke the existing statement, which provides a bipartisan framework laying out widely agreed-upon core principles regarding antitrust law and the Commission’s Section 5 enforcement. These principles include promoting consumer welfare and focusing enforcement on acts or practices that “must cause, or be likely to cause, harm to competition or the competitive process.” The signatories’ concern is that a rescission of the current statement could untether the Commission’s enforcement decisions from a focus on harms to consumers and the competitive process. Ashley Baker of the Alliance on Antitrust was the principal drafter of the letter. I am a signatory, and I encourage readers of this post to read the letter in its entirety. A link is below. It is also cross-posted at the Alliance on Antitrust website and on the Council for Citizens Against Government Waste website.  TAG

Coalition Comments on Rescission of Enforcement Principles

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Positive Legislative Antitrust Agenda for Congress and the New Biden Administration — March 5, 2021

Posted by

Theodore A. Gebhard*

The link below is to a letter prepared by a coalition of former federal antitrust enforcers and antitrust scholars, which was sent to the members of the Senate and House Committees that have jurisdiction over the Department of Justice’s and Federal Trade Commission’s antitrust missions. The letter sets out a Positive Legislative Agenda for the nation’s competition policy, which should also serve as guidance to the new Biden Administration antitrust enforcers. Ashley Baker of the Alliance on Antitrust was the principal drafter of the letter.  I am a signatory.  I encourage the reader of this post to read the letter in its entirety.  TAG

Antitrust-Positive-Agenda

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

Supreme Court Rules in Tennessee Wine & Spirits Retailers Assn. v Thomas [previously, Blair] — June 26, 2019

Posted by

Theodore A. Gebhard*

Today the U.S. Supreme Court handed down its decision in Tennessee Wine and Spirits Retailers Association v. Thomas, Executive Director of the Tennessee Alcohol Beverage Commission et al., 588 U.S. 504-57 (June 26, 2019).  In an earlier post on this site, I discussed this case in the context of the amicus brief (Law & Economics Scholars Brief) that five amici curiae, of which I was one, submitted to the Court in this matter.  Please see the earlier post for appropriate background on the case and the amicus brief.

In an opinion written by Justice Alito, the Court found that “the predominant effect of [Tennessee’s] 2-year residency requirement [that applicants for licenses to sell alcoholic beverages at retail must satisfy] is simply to protect the Association’s members from out-of-state competition.”  588 U.S. at 543.  This finding is wholly consistent with the discussion set out in the Law & Economics Scholars Brief.

Because the predominant effect of the durational residency requirement is to discriminate against out-of-state potential competitors, the Court held that the “provision violates the Commerce Clause and is not saved by [Sec 2 of] the Twenty-first Amendment,” as the scope of state regulatory powers under Sec. 2 does not extend to implementing such anticompetitive impediments.  Id. “Where the predominant effect of a law is protectionism, not the protection of public health or safety, it is not shielded by Sec. 2.”  588 U.S. at 539-40.

Interestingly, Justice Gorsuch, joined by Justice Thomas, dissented, finding that because Sec. 2 of the 21st Amendment provides states with broad powers to regulate the sale of alcohol that they otherwise would not have respecting other goods and services, the explicit language of the Amendment provides sufficient scope to encompass the kind  of regulation at issue in this matter notwithstanding its effect on competition. The adopters of the 21st Amendment “left us with clear instructions that the free-trade rules this Court has devised for ‘cabbages and candlesticks’ should not be applied to alcohol.” 588 U.S. at 557. Justices Gorsuch and Thomas would accordingly find Tennessee’s durational residency requirement constitutional.

Read the entire opinion here.

* Theodore A. Gebhard is an attorney and economist.  See his mini-bio on the Contributors page.

The Enduring Legacy of Henry Manne: A Review of the 2016 Law & Economics Center Conference – By Theodore A. Gebhard — January 26, 2016

On Friday, January 22, I attended the Fifth Annual Henry G. Manne Law & Economics Conference, which was sponsored by the Law & Economics Center at George Mason University.  The Conference was held in conjunction with the Twelfth Annual Symposium of the Journal of Law, Economics, & Policy, a publication of the GMU School of Law.  This year’s Conference was entitled “The Enduring Legacy of Henry G. Manne” and featured three panels of academic experts, all of whose research draws substantially on the work of the late Henry Manne.  Manne was the longtime Dean of the Law School and a trailblazing scholar of corporate governance and corporate finance.

Regrettably, owing to inclement weather, the day’s program had to be truncated, including dropping the scheduled keynote luncheon speech by former Securities and Exchange Commission Commissioner Kathleen Casey.  Even under the shortened time frame, however, the panel discussions were thorough and highly informative.

Panel 1:

The first panel focused on Manne’s seminal 1965 Journal of Political Economy article, “Mergers and the Market for Corporate Control.”  Speakers included GMU business professor Bernard Sharfman and University of Chicago law professor Todd Henderson.

In the JPE article, Manne developed the then-novel insight that when a corporation is afflicted with inefficiencies owing to poor management, an incentive is created for others to take control of the corporation, eliminate the managerial inefficiencies, and be rewarded with an increase in share price.  What this means is that there is a functioning market for corporate control.

Drawing on this insight, Professor Sharfman considered whether activist investors, specifically activist hedge funds, might be able to perform the same function.  One key difference between activist investors and takeover investors is that the former typically are not able to obtain a controlling interest in a corporation.  Although the interest can be significant, it falls short of the authority to dictate managerial changes.  Therefore, when activist investors see managerial inefficiencies, they must rely principally on persuasion to influence corrective action.

Corporate boards, however, often, if not most of the time, resist this activism.  In some instances, the boards might go so far as to sue in court for relief.  When this occurs, the courts are bound by the “business judgment rule,” which provides for deference to the decisions of corporate boards.  Sharfman contends that, although the business judgment rule is based on solid grounds and usually works well as a legal rule, it fails under the circumstances just described, i.e., when there are managerial inefficiencies but activist investors are unable to obtain a controlling interest in the company.  Sharfman concludes, therefore, that it might be time for the courts to carve out, albeit carefully, an exception to the business judgment rule in cases where the evidence points to no plausible business reason to reject the activists’ position.  In this circumstance, a court can find that the board’s resistance likely owes to no more than an attempt to protect an entrenched management.

Building on Manne’s insight of the existence of a “market” for corporate control, Professor Henderson considered the possibility of such diverse hypothetical markets as (1) markets for corporate board services, (2) markets for paternalism and altruism, and (3) markets for trust.  In the first instance, Henderson posited the possibility that shareholders simply contract out board services rather than having a board solely dedicated to one company.   So, for example, persons with requisite expertise could organize into select board-size groups and compete with other such groups to offer board services to the shareholders of any number of separate corporations.

In the second instance, Henderson, noting the growing modern viewpoint that companies have paternalistic obligations toward stakeholders that go beyond shareholder interests, suggested that the emergence of a competitive market to meet such obligations would likely be superior to relying on evolving government mandates.  Competitive markets, for example, would avoid delivering “one size fits all” services and, in so doing, be better able to delineate beneficiary groups on the basis of their specific needs, i.e., needs common within a group but diverse across groups.  Inefficient cross-subsidization could thus be mitigated.  In this same vein, Henderson suggested the possibility of a market for the delivery of altruistic services.  He noted that the public is increasingly demanding that corporations, governments, and non-profits engage in activities deemed to be socially desirable.  As with the provision of paternalism, competitively supplied altruism whereby companies, governments, and non-profits comprise the incumbent players would yield the positive attributes of competition.  These would include the emergence of alternative mixes of altruistic services tailored to the specific needs of beneficiaries and efficient, low cost production and delivery of those services.

In the third instance, Henderson posited the idea of a market for trust.  Here he offered the example of the ride-sharing company, Uber.  Henderson suggested that Uber not only competes with traditional taxis, but, perhaps more importantly, competes with local taxi commissions.  Taxi commissions exist to assure the riding public that it will be safe when hiring a taxi.  Toward that end, taxis are typically required to have a picture of the driver and an identifying number on display, be in a well maintained condition, and have certain other safety features.  All of these things are intended to generate a level of trust that a ride will be safe and uneventful.  According to Henderson, Uber’s challenge is to secure a similar level of trust among its potential customers.  New companies shaking up other traditional service industries face the same challenge.  Henderson concludes, therefore, that these situations open up entrepreneurial opportunities to supply “trust.”  Although Henderson did not use the example of UL certification, that analogy came to mind.  So, for example, there might be a private UL-type entity in the business of certifying that ride sharing (or any other new service company) is trustworthy.

In commenting on Panel 1, Bruce Kobayashi, a GMU law professor and former Justice Department antitrust economist, offered one of the more interesting observations of the day.  Professor Kobayashi reminded the audience that Manne’s concern in his JPE article was principally directed at antitrust enforcement, not corporate law.  In particular, Manne argued that the elimination of managerial inefficiency should rightly be counted as a favorable factor in an antitrust analysis of a merger.  In fact, however, although the DOJ/FTC Horizontal Merger Guidelines allow for cognizable, merger-specific efficiencies to be incorporated into the analysis of net competitive effects, the agencies historically only consider production and distribution cost savings, not the likelihood of gains to be had from jettisoning bad management.  Kobayashi suggested that this gap in the analysis may be due simply to the compartmentalization of economists among individual specialties.  For example, in his experience, he rarely sees industrial organization (antitrust) economists interacting professionally with economists who study corporate governance.  Thus, because Manne’s article, through the years, has come to be classified (incorrectly) as solely a corporate law article, its insights have unjustifiably escaped the attention of antitrust enforcers.

Panel 2:

Panel 2 focused on Henry Manne’s seminal insights about “insider trading.”  Manne was the first to observe that, notwithstanding the instinctive negative reaction of many, if not most, people to insider trading (“It’s just not right!”), the practice actually has beneficial effects in terms of economic efficiency.  By more quickly incorporating new information about a business’s prospects into share price, insider trading can accelerate the movement of that price toward a market clearing level, which signals a truer value of a company and thus enhances allocative efficiency in capital flows.

The two principal speakers on Panel 2 were Kenneth Rosen, a law professor at the University of Alabama, and John Anderson, a professor at the Mississippi College School of Law.  I will limit my comments to Anderson.

In addition to being a lawyer, Professor Anderson is a philosopher by training, holding a Ph.D. in that subject.  He began his presentation by noting that ethical claims are often vague in a way that economic analysis, given its focus on efficiency, is not.  Although in any empirical study there can be problems with finding good data, as well as measurement difficulties, the analytical framework of economics rests on an objective standard: a given behavior either increases efficiency, reduces efficiency, or is benign toward efficiency.  Anderson also observed that ethics merely sets goals, while economic analysis determines the best (i.e., most efficient) means to achieve those goals.

Putting these distinctions into the context of insider trading, Anderson finds that insider trading laws and enforcement of those laws are likely overreaching.  The current statutory scheme rests largely on ethical notions, namely the issue of unfairness (“It’s just not right!”).   As such, it rests on vague standards.  The result is a loss of economic efficiency and costly over-compliance with the laws.  In the end, not only are shareholders hurt (e.g., by costly compliance and litigation), but the larger investing public is also harmed owing to slower price adjustments.  Anderson made the point that, although acts of greed may be bad for individual character, such acts are not necessarily bad for the entire community.

In concluding his presentation, Anderson proposed that some form of licensing of insider trading might best accommodate the competing ethical and efficiency goals.  Under a licensing system, insider trading could be permissible in certain circumstances, but under full transparency.

Panel 3:

Panel 3 considered the effects of required disclosures under federal securities laws.  The two principal speakers were Houman Shadab of New York Law School and Brian Mannix of George Washington University.

Despite being a novice in the subject matter of the day, I felt that I followed the discussions of the first two panels reasonably well.  Panel 3, however, was difficult for me, as the speakers made frequent references to statutory provisions and SEC interpretations of relevant securities law, with which I have no familiarity.  Nonetheless, a portion of the discussion intrigued me.

In particular, Professor Mannix discussed the issue of high-frequency trading (HFT), a hot topic just now because of Michael Lewis’s recent book, Flash Boys.  As I understand it, computer technology makes it possible for trades to take place within microseconds.  With ever more sophisticated algorithms, traders can attempt to beat each other to the punch and capture arbitrage gains from even incredibly small price differences.

Of particular interest to regulators is the likelihood of a tradeoff inherent in HFT that determines its net efficiency effects.  On the one hand, HFT has the potential to enhance efficiency by accelerating the movement of a share price to its equilibrium.  Given that this movement is tiny and occurs within a microsecond, however, the efficiency gains, if any, are likely to be very slight.

On the other hand, a lot of costly effort goes into developing and deploying HFT algorithms.  Yet, much of the payoff may be no more than a rearrangement of the way the arbitrage pie is sliced rather than any enlargement of the pie.  If so, the costs incurred to win a larger slice of the pie, costs without attendant social wealth creation, likely exceed any efficiency gain.  It may make sense then for regulators to create some impediments to HFT.  Toward this end, Mannix offered some possible ways to make HFT less desirable to its practitioners.  The most intriguing was to put into place technology that would randomly disrupt HFT trades.  Key to profiting substantially from HFT is the need to trade in very large share volumes because the price differences over which the arbitrage takes place are so small.  Random disruptions would make such big bets riskier.*
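A purely hypothetical numerical sketch may help show why random disruptions would change the calculus. Suppose a trade earns a tiny per-share margin if it completes, but a disrupted trade leaves the trader holding a large position exposed to ordinary price movements; the margin, disruption probability, and volumes below are invented solely for illustration and are not drawn from the panel.

```python
# Stylized sketch with invented numbers: how random trade disruptions would
# add risk to high-volume HFT arbitrage. Not a model of any actual market.
import random
random.seed(0)

MARGIN = 0.001       # per-share arbitrage profit if the trade completes ($)
PRICE_STDEV = 0.05   # per-share price risk on a stranded position ($)

def profit_stats(volume: int, disrupt_prob: float, trials: int = 100_000):
    """Mean and standard deviation of per-trade profit for a given bet size."""
    outcomes = []
    for _ in range(trials):
        if random.random() < disrupt_prob:
            # Disrupted mid-execution: left exposed to an ordinary price move.
            outcomes.append(volume * random.gauss(0.0, PRICE_STDEV))
        else:
            outcomes.append(volume * MARGIN)
    mean = sum(outcomes) / trials
    std = (sum((x - mean) ** 2 for x in outcomes) / trials) ** 0.5
    return mean, std

if __name__ == "__main__":
    for volume, p in [(1_000_000, 0.0), (1_000_000, 0.05)]:
        mean, std = profit_stats(volume, p)
        print(f"{volume:,} shares, disruption prob {p:.0%}: "
              f"mean ${mean:,.0f}, std dev ${std:,.0f}")
```

With these made-up numbers, a five percent disruption probability barely reduces the expected profit, but it turns an essentially riskless trade into one whose standard deviation is more than ten times the expected gain. Because HFT profits depend on very large volumes, that added risk bears directly on the attractiveness of the strategy.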

Final Thoughts:

All in all, I found each of the panels to be highly informative.  The cast of speakers was well selected, and each presentation was well made.  Significantly, each panel did a very good job of tying its discussion to Henry Manne’s work and influence.  In addition to learning a great deal about current hot issues in corporate law and how economic analysis informs those issues, conference attendees surely left with an even higher appreciation of Henry Manne.

On the logistical front, in light of last minute adjustments necessitated by weather conditions, the organizers pulled off the conference flawlessly.  A high standard was set that will be difficult for the organizers of next year’s offering to surpass.

Finally, on a personal note, Henry Manne was my law school Dean at GMU and also a neighbor for many years in my condominium.  I remember Henry most, however, because of the dramatic impact that his unique curriculum at GMU had on my intellectual development.  That curriculum, which emphasized the application of economic analysis to the law, literally changed the way I think about the world.  For this reason, I was especially pleased that each of the speakers at the Conference took time to comment on the influence that Manne had on them.  Some had never met Manne in person, but nonetheless were deeply influenced by his scholarly work.  Others who knew Manne more intimately related some wonderful anecdotes.  Those alone would have made the day worthwhile.

Notes:

* For a further and more detailed explanation of the tradeoffs inherent in HFT, readers are encouraged to see a blog post by Todd Zywicki, the Executive Director of the LEC, which appeared here.

Theodore A. Gebhard is  a law & economics consultant residing in Arlington, Virginia.  See the Contributors page for more about Mr. Gebhard.  Contact him at theodore.gebhard@aol.com.  For more about Henry Manne, see the several tributes to him upon his passing on David Henderson’s blog, including one by Mr. Gebhard.

Revenue Generation from Law Enforcement—Unintended Consequences? – By Stefan N. Hoffer — January 24, 2016

Should law enforcement be conducted with an objective of maximizing public revenues? Should police departments be allowed to retain revenues they generate through fines and forfeitures?  An argument might be advanced that pursuing an objective of revenue generation will encourage law enforcement to enforce the law more effectively, particularly if revenues generated are returned to law enforcement agencies.  This essay argues that such policies may have surprising and unintended results in many common, everyday situations.

From time to time there are reports in the media of law enforcement seizing property using asset forfeiture laws. The crimes involved are typically serious, and convicted violators are subject to severe penalties, including large fines. These actions occur in situations where it is difficult and expensive to identify offenders correctly, given the large pool of unidentified offenders.  Because of the difficulty and expense involved, pursuing an objective of maximizing revenues and then recycling them back into enforcement activity would create incentives for enhanced enforcement efforts against a seemingly inexhaustible pool of offenders.

Although this result may be true under circumstances where the pool of offenders remains large, it would be too easy to conclude that attempting to maximize revenues will universally result in improved enforcement. Instead, let us consider situations where it is easy to identify and fine most offenders correctly.  In such situations, it may well be that pursuing a revenue objective can lead to routine, systematic under-enforcement of everyday laws—for example, traffic regulations.

To see how this might occur, consider a nobleman who is the owner of a hunting preserve. The nobleman knows that to ensure a perpetual supply of game, he must not over-hunt the preserve.  If he does, the volume of game will decline and in the extreme become non-existent.  The “wise” nobleman will only engage in limited hunting so as to ensure a continuous stream of game in perpetuity.

By analogy, a traffic intersection controlled by signals or a road with high-occupancy vehicle (HOV) lanes can be thought of as a hunting preserve. If law enforcement seeks to eliminate red light running or HOV violators, it can do so by rigorous enforcement—the violators are hunted to extinction.  Once the consideration of revenue generation is introduced, however, a strong incentive arises to under-enforce the law.  The rationale is simple:  if you enforce to the extent that there are few violators, there will be little revenue.

More specifically, if revenue maximization becomes a significant consideration, there will be a level of enforcement short of complete enforcement that will maximize revenue. Enforcing more rigorously, say more days per month, will, other things constant, generate more revenue.  But, other things are not constant.  As the number of enforcement days grows, the number of violators to be caught on any one day will decline—people will learn that if they commit violations they will likely be caught.  At some point, the decline in violators will just offset the gain from enforcing one more day per month.
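To make the logic concrete, here is a small, purely hypothetical sketch in Python. The fine amount, the pool of violators, and the deterrence rate are invented for illustration; the only point is that when each added enforcement day shrinks the pool of violators, fine revenue peaks well short of full enforcement.

```python
# Hypothetical illustration of a revenue-maximizing level of enforcement.
# All numbers are invented for the sketch; nothing here comes from actual data.

FINE = 100.0          # fine per ticket, in dollars
BASE_VIOLATORS = 200  # violators caught per enforcement day if enforcement is rare
DETERRENCE = 0.90     # each added enforcement day per month shrinks violations by 10%

def monthly_revenue(enforcement_days: int) -> float:
    """Fine revenue for a month with the given number of enforcement days.

    More enforcement days mean more days on which tickets are written, but
    deterrence shrinks the number of violators caught on each of those days.
    """
    violators_per_day = BASE_VIOLATORS * DETERRENCE ** enforcement_days
    return enforcement_days * violators_per_day * FINE

if __name__ == "__main__":
    revenues = {d: monthly_revenue(d) for d in range(0, 31)}
    best_day = max(revenues, key=revenues.get)
    print(f"Full enforcement (30 days): ${revenues[30]:,.0f}")
    print(f"Revenue peaks at {best_day} enforcement days: ${revenues[best_day]:,.0f}")
```

Under these made-up parameters, monthly revenue tops out at roughly nine or ten enforcement days and falls by more than half under full, thirty-day enforcement, which is precisely the incentive toward sporadic enforcement described above.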

Casual observation lends support to the notion that revenue maximization can lead to under-enforcement:

Interstate 66 HOV Lanes Inside the Washington, D.C. Beltway.

The segment of I-66 between Washington, D.C. and the beltway around the city has HOV lanes. Vehicles using these lanes during commuting hours are required to have a minimum number of occupants.  Those that do not are subject to a fine.  Because access to I-66 is controlled, enforcement is straightforward.  Police stationed at the exits can easily stop cars that do not have the minimum number of riders and issue tickets—and they do, but not on all days or even many days.  As a consequence, violators are numerous, and road congestion and delay are more extensive than they would be if most violators were eliminated.  This raises the question of why there is not more aggressive enforcement.  A typical governmental response is that limited resources do not allow it.  But police departments, like other government service providers, do not staff for the mean demand for their services, but rather for a certain percentile of the peak.*  This strategy allows the police to meet most, but not all, randomly occurring emergencies.  It also means that most of the time the police department will have excess staff on duty.  So why can these extra staff not be utilized, absent an emergency, for duties such as enforcing HOV regulations?  As suggested, a reason consistent with observed behavior is that more revenue will be generated by enforcing such regulations only sporadically, thus ensuring that there are plenty of violators to be fined when enforcement does take place.

Red Light Cameras in Central Florida.

Several small and medium-sized cities in central Florida have installed “red light cameras” to ticket motorists that run red lights. The cameras are typically provided by a private contractor who installs and operates them in cooperation with law enforcement.  The cities share the revenue that the cameras generate with the private contractor, subject to a minimum guarantee that the cities pay the contractor irrespective of revenues collected.  Initially these cameras made most jurisdictions significant amounts of money.  But, over a period of time, they also drastically reduced the number of red light runners.  Revenues from the cameras precipitously declined, in some cases to the point that some local jurisdictions were losing money on them after meeting their contractual guarantee to the commercial provider.  As a result, some jurisdictions have not renewed their contracts and are removing the cameras, sometimes even after the private contractor has offered to reduce substantially the minimum payment guarantee. Other jurisdictions have accepted the idea that the cameras improve public safety and that this improved safety comes at a cost—what they must pay the private contractor over and above revenues generated so as to meet the payment guarantees.

As outlined above, a revenue-maximizing approach to law enforcement can have unintended and surprising consequences. In situations where it is difficult to identify and catch offenders, it can provide resources and incentives for aggressive enforcement.  In other situations, where potential offenders can be easily and accurately identified and caught, focusing on revenue maximization has the potential to lead to deliberate, significant under-enforcement.

*  The Federal Aviation Administration, for example, uses this model for staffing air traffic controllers.  See the FAA’s document, A Plan for the Future: 10-Year Strategy for the Air Traffic Control Workforce, at page 20, available here.

Stefan N. Hoffer is a transportation economist, formerly with the Federal Aviation Administration. His areas of specialization include benefit-cost analysis and valuation of non-market traded items.  See the Contributors page for more about Mr. Hoffer.  Contact him at snhoffer@aol.com.

Forecasting Trends in Highly Complex Systems: A Case for Humility – By Theodore A. Gebhard — June 20, 2015

One can readily cite examples of gross inaccuracies in government macroeconomic forecasting.  Some of these inaccurate forecasts have been critical to policy formation that ultimately produced unintended and undesirable results.  (See, e.g., Professor Edward Lazear, “Government Forecasters Might as Well Use a Ouija Board,” Wall Street Journal, Oct. 16, 2014)  Likewise, the accuracy of forecasts of long-term global warming is coming under increasing scrutiny, at least among some climate scientists.  Second looks are suggesting that climate science is anything but “settled.” (See, e.g., Dr. Steven Koonin, “Climate Science and Interpreting Very Complex Systems,” Wall Street Journal, Sept. 20, 2014)  Indeed, there are legitimate concerns about the ability to forecast directions in the macro-economy or long-term climate change reliably.  These concerns, in turn, argue for government officials, political leaders, and others to exercise a degree of humility when calling for urgent government action in either of these areas.  Without such humility, there is the risk of jumping into long-term policy commitments that may in the end prove to be substantially more costly than beneficial.

A common factor in macroeconomic and long-term climate forecasting is that both deal with highly complex systems.  When modeling such systems, attempts to capture all of the important variables believed to have a significant explanatory effect on the forecast prove to be incredibly difficult, if not entirely a fool’s errand.  Not only are there many known candidates, there are likely many more unknown candidates.  In addition, specifying functional forms that accurately represent the relationships between the explanatory variables is similarly elusive.  Simple approximations based on theory are probably the best that can be achieved.  Failure to solve these problems (omitting important explanatory variables and specifying incorrect functional forms) will seriously confound the statistical reliability of the estimated coefficients and, hence, any forecasts made from those estimates.
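A toy simulation can illustrate how just one of these problems, an omitted explanatory variable, corrupts the estimated coefficients on which forecasts rest. The data-generating process and all numbers below are invented for the illustration and stand in for no particular economic model.

```python
# Toy illustration of omitted-variable bias: the data are simulated, and the
# numbers are chosen only to make the point, not to model any real economy.
import random
random.seed(42)

N = 10_000
TRUE_B1, TRUE_B2 = 1.0, 2.0

# x2 is correlated with x1 but is omitted from the estimated model.
x1 = [random.gauss(0, 1) for _ in range(N)]
x2 = [0.8 * a + random.gauss(0, 0.6) for a in x1]
y = [TRUE_B1 * a + TRUE_B2 * b + random.gauss(0, 1) for a, b in zip(x1, x2)]

def ols_slope(x, y):
    """Slope from a simple one-variable least-squares regression of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

estimated_b1 = ols_slope(x1, y)
print(f"True coefficient on x1: {TRUE_B1}")
print(f"Estimated coefficient when x2 is omitted: {estimated_b1:.2f}")
```

Because the omitted variable moves together with the included one, the single-variable regression attributes both effects to the included variable; the estimate comes out near 2.6 rather than the true 1.0, and any forecast built on it inherits that distortion.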

Inherent in macroeconomic forecasting is an additional complication.  Unlike models of the physical world where the data are insentient and relationships among variables are fixed in nature, computer models of the economy depend on data samples generated by motivated human action and relationships among variables that are anything but fixed over time.  Human beings have preferences, consumption patterns, and levels of risk acceptance that regularly change.  This constant change makes coefficient estimates derived from historical data prone to being highly unsound bases on which to forecast the future.  Moreover, there is little hope for improved reliability over time so long as human beings remain sentient actors.

By contrast, models of the physical world, such as climate science models, rely on unmotivated data and relationships among variables that are fixed in nature.  Unlike human beings, carbon dioxide molecules do not have changing tastes or preferences.  At least in principle, as climate science advances over time with better data quality, better identification of explanatory variables, and better understanding of the relationships among those variables, the forecasting accuracy of climate change models should improve.   Notwithstanding this promise, however, long-term climate forecasts remain problematic at present.  (See Koonin article linked above.)

Given the difficulty of modeling highly complex systems, it would seem that recent statements by some of our political, economic, and even religious leaders are overwrought.  President Obama and Pope Francis, for example, have claimed that climate change is among mankind’s most pressing problems.  (See here and here.)  They arrived at their views by dint of forecasts that predict significant climate change owing to human activity.  Each has urged that developed nations take dramatic steps to alter their energy mixes.  Similarly, the world’s central bankers, such as those at the Federal Reserve, the European Central Bank, the Bank of Japan, and the International Monetary Fund, regularly claim that their historically aggressive policies in the aftermath of the 2008 financial crisis are well grounded in what their elaborate computer models generate and, hence, are necessary and proper for the times.  Therefore, they maintain, any attempts to modify the independence of these institutions to pursue those policies should be resisted, notwithstanding that the final outcome of these historic and unprecedented policies is yet unknown.

It is simply not possible, however, to have much confidence in any of these claims.  The macroeconomic and climate systems are too complex to be captured well in any computer model, and forecasts derived from such models therefore are highly suspect.  At the least, a prudent level of humility and a considerable degree of caution are in order among government planners, certainly before they pursue policies that risk irreversible, unintended, and potentially very costly consequences.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

Is Economics a Science? – By Theodore A. Gebhard — May 15, 2015

The great 20th Century philosopher of science, Karl Popper, famously defined a scientific question as one that can be framed as a falsifiable hypothesis.  Economics cannot satisfy that criterion.  No matter the mathematical rigor and internal logic of any theoretical proposition in economics, empirically testing it by means of econometrics necessarily requires that the regression equations contain stochastic elements to account for the complexity that characterizes the real world economy.  Specifically, the stochastic component accounts for all of the innumerable unknown and unmeasurable factors that cannot be precisely identified but nonetheless influence the economic variable being studied or forecasted.

What this means is that economists need never concede that a theory is wrong when their predictions fail to materialize.  There is always the ready excuse that the erroneous predictions were the fault of “noise” in the data, i.e., the stochastic component, not the theory itself.  It is hardly surprising then that economic theories almost never die and, even if they lie dormant for a while, find new life whenever proponents see opportunities to resurrect their pet views.  Since the 2008 financial crisis, even Nobel Prize winners can be seen dueling over macroeconomic policy while drawing on theories long thought to be buried.

A further consequence of the inability to falsify an economic theory is that economics orthodoxy is likely to survive indefinitely irrespective of its inability to generate reliable predictions on a consistent basis.  As Thomas Kuhn, another notable 20th Century philosopher of science, observed, scientific orthodoxy periodically undergoes revolutionary change whenever a critical mass of real world phenomena can no longer be explained by that orthodoxy.  The old orthodoxy must give way, and a new orthodoxy emerges.  Physics, for example, has undergone several such periodic revolutions.

It is clear, however, that, because economists never have to admit error in their pet theories, economics is not subject to a Kuhnian revolution.  Although there is much reason to believe that such a revolution is well overdue in economics, graduate student training in core neoclassical theory persists and is likely to persist for the foreseeable future, notwithstanding its failure to predict the events of 2008.  There are simply too few internal pressures to change the established paradigm.

All of this is of little consequence if mainstream economists simply talk to one another or publish their econometric estimates in academic journals merely as a means to obtain promotion and tenure.  The problem, however, is that the cachet of a Nobel Prize in Economic Science and the illusion of scientific method permit practitioners to market their pet ideological values as the product of science and to insert themselves into policy-making as expert advisors.  Significantly in this regard, econometric modeling is no longer chiefly confined to generating macroeconomic forecasts.  Increasingly, econometric forecasts are used as inputs into microeconomic policy-making affecting specific markets or groups and even are introduced as evidence in courtrooms where specific individual litigants have much at stake.  However, most policy-makers — let alone judges, lawyers, and other lay consumers of those forecasts — are not well-equipped to evaluate their reliability or to assign appropriate weight to them.  This situation creates the risk that value-laden theories and unreliable econometric predictions play a larger role in microeconomic policy-making, just as in macroeconomic policy-making, than can be justified by purported “scientific” foundation.

To be sure, economic theories can be immensely valuable in focusing one’s thinking about the economic world.  As Friedrich Hayek taught us, however, although good economics can say a lot about tendencies among economic variables (an important achievement), economics cannot do much more.  As such, the naive pursuit of precision by means of econometric modeling —  especially as applied to public policy — is fraught with danger and can only deepen well-deserved public skepticism about economists and economics.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.