Vienna Woods Law & Economics

Blog focused on issues in law, economics, and public policy.

Using Stereotypes in Decision Making, by Stefan N. Hoffer — November 5, 2018

A stereotype has been defined as a “fixed general image or set of characteristics that a lot of people believe represent a particular type of person or thing” (https://www.collinsdictionary.com/english/stereotype), or as “something conforming to a fixed or general pattern; especially : a standardized mental picture that is held in common by members of a group and that represents an oversimplified opinion, prejudiced attitude, or uncritical judgment” (https://www.merriam-webster.com/dictionary/stereotype). The implication is that the use of stereotypes in making decisions concerning populations or individual members of those populations is improper and should be avoided. [See, for example, https://www.reference.com/world-view/stereotypes-harmful-dadca8d95b0fc67c or https://www.aauw.org/2014/08/13/why-stereotypes-are-bad] Yet stereotypes have long been, and continue to be, used to make decisions.

Focusing primarily on a simple summary measure, the following examines whether stereotypes can be used to make rational decisions and if so, under what circumstances. The essay considers both populations and individuals, taking into account that decision making requires information and that developing information is not costless. It is assumed that the information itself is accurate.

Stereotypes are summary measures. They may be very complex and consider an array of many characteristics. Examples would include an index of several factors such as a credit score—essentially a weighted average—or even a proprietary brand name and the quality it represents. Or stereotypes can be very simple and consider only one characteristic.

A very simple, perhaps the simplest, summary measure or stereotype for any particular population is its population mean with respect to one particular characteristic. A population mean provides a measure of the central location of the distribution of population members with respect to this characteristic. The variance of this population provides a measure of how dispersed the individual members of the population are around its mean. Both measures provide significant information about the population. But distributions with large variances may provide only limited information about individual population members, creating a “fog of uncertainty.”
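
To make the “fog of uncertainty” concrete, here is a minimal sketch using invented numbers: two populations share the same mean but differ sharply in variance, so the mean describes both populations equally well yet predicts individual members of the high-variance population far less reliably.

```python
# Two hypothetical populations with the same mean but different variances.
import statistics

tight_population = [48, 49, 50, 50, 51, 52]      # low dispersion around the mean
spread_population = [10, 30, 45, 55, 70, 90]     # high dispersion around the same mean

for name, pop in (("tight", tight_population), ("spread", spread_population)):
    mean = statistics.mean(pop)
    variance = statistics.pvariance(pop)             # population variance
    worst_miss = max(abs(x - mean) for x in pop)     # worst error from predicting the mean
    print(f"{name}: mean={mean}, variance={variance:.1f}, worst individual miss={worst_miss}")
```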

When making decisions about all members of a population taken as a whole, knowledge of the mean alone may be enough to make rational decisions irrespective of the variance. For example, consider auto insurance pricing where all or virtually all population members are required to carry insurance.  By setting a premium equal to the average, or expected, claim value per population member (plus administrative costs), insurance companies can provide risk sharing to all members of a population without taking into account variation among population members. This works because a characteristic of the mean is that the sum of all deviations from it, both positive and negative, nets to zero. [Harold W. Guthrie, Statistical Methods in Economics, Richard D. Irwin, Inc., Homewood, Illinois, 1966, p. 34.] That is, those with less-than-average claims exactly offset those with greater-than-average claims.
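
As a rough illustration of that netting-out property, the following sketch uses made-up claim values; the premium, administrative costs aside, is simply the mean claim, and the positive and negative deviations cancel.

```python
# Hypothetical annual claims, one per insured member of the population.
import statistics

claims = [0, 0, 250, 400, 1200, 3150]
premium = statistics.mean(claims)            # expected claim per member (admin costs ignored)

total_collected = premium * len(claims)
total_paid = sum(claims)
net_deviation = sum(c - premium for c in claims)

print(f"premium per member: {premium:.2f}")
print(f"collected {total_collected:.2f} vs. paid out {total_paid:.2f}")
print(f"sum of deviations from the mean: {net_deviation:.2f}")   # zero, up to rounding
```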

But what about making decisions that concern specific members of a population, not the entire population? Will relying on summary measures yield accurate decisions? Here the dispersion of the individuals around the mean, measured by the variance, comes into play. If the variance is small, a population summary measure can be expected to yield a relatively close approximation to each individual’s characteristic value. As the variance increases, however, the ability of a summary measure to predict the characteristic value of any particular individual declines, as large variances have the potential to yield numerous, large errors with their associated costs. Can rational decisions continue to be made in such situations?

Unlike the auto insurance example above where coverage is nearly universal and specific companies have large numbers of policy holders, most decisionmakers do not make decisions concerning most members of a population. Rather, their focus is on a subset of its members. For example, potential employers may hire a few members from a population or they may hire many members from it, but they do not hire everyone. In effect, each decisionmaker draws a random sample of a given size from the population when he or she makes new hires.

The distribution of the means of all the possible samples—subsets—of this size around the population mean (our summary measure) is known as the sampling distribution of the means. Its dispersion is measured by the population standard deviation divided by the square root of the sample size, known as the standard error. The standard error is larger when there is greater variation in the population, but it shrinks as the sample size increases. As a result, the accuracy of using any particular mean summary measure with any given level of variation depends on the frequency with which decisions are made: the more decisions made, the greater the sample size.
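
For readers who want the arithmetic spelled out, this small sketch simply evaluates the standard-error formula, the population standard deviation divided by the square root of the sample size, for an assumed level of dispersion; the figures are hypothetical.

```python
# Standard error of the sample mean: population SD divided by sqrt(sample size).
import math

population_sd = 20.0                      # assumed dispersion of the characteristic
for n in (1, 4, 25, 100):                 # sample size = number of decisions made
    standard_error = population_sd / math.sqrt(n)
    print(f"n = {n:>3}: standard error of the sample mean = {standard_error:.1f}")
```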

Some entities make decisions only infrequently. As a result, basing decisions on a summary measure such as a mean where there is significant variation in the population may not result in good outcomes for the infrequent decisionmaker. This situation arises because there are not enough decisions to make it likely that good and bad outcomes will offset each other.  With higher population variances, the infrequent decisionmaker must choose among continuing to accept the risks and costs of a potentially bad outcome, abstaining from making decisions based on the summary measure altogether, or adopting a different approach. Unlike the frequent decisionmakers discussed below, infrequent decisionmakers have limited incentives to incur the costs of developing improved measures because they cannot spread those costs across enough decisions to make it worthwhile.

As an example, the skill and ability levels of recent college graduates vary significantly. The summary measure of having earned a degree is only a limited predictor of quality. Employers who hire infrequently have the option of using this measure and risking a poor new hire, avoiding hiring altogether, or adopting a different approach such as recruiting only from premier, top-tier schools. The selection and training processes at these schools can be expected to produce graduates with higher and more uniform abilities. But hiring recruits with this pedigree quality measure may be more costly.

The situation can be expected to be different for frequent decisionmakers. Because they make numerous decisions—draw a larger sample—they can rely on summary measures even with larger population variances and expect average outcomes to be close to the population mean much of the time, as bad choices are offset by good ones. (The standard error for the larger sample size is smaller than for infrequent decisionmakers.)  In the example above, recruiters can hire college graduates with the anticipation that the ability levels of their new hires as a group will very likely approximate the mean of all graduates. They do not need to be preoccupied with the occasional bad apple because overall group performance is predictable and stable.
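
A quick simulation makes the same point; the ability scores below are randomly generated and purely illustrative, but they show how the averages of large hiring batches cluster near the population mean while the averages of small batches swing widely.

```python
# Simulated ability scores for a large graduate population (mean ~100, SD ~20).
import random
import statistics

random.seed(1)
population = [random.gauss(100, 20) for _ in range(10_000)]

def spread_of_batch_means(batch_size, trials=1_000):
    """Standard deviation of the average ability across many random hiring batches."""
    means = [statistics.mean(random.sample(population, batch_size)) for _ in range(trials)]
    return statistics.pstdev(means)

for size in (2, 10, 50):
    print(f"batches of {size:>2} hires: spread of batch averages ~ {spread_of_batch_means(size):.1f}")
```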

Although frequent decisionmakers can expect to make decisions with relatively stable overall outcomes based on summary measures, economic incentives may encourage them to improve their decision making by developing improved measures. Not only will improved measures allow the decisionmakers to select better members of the population, the costs of doing so can be spread across many decisions. An improved measure will give any particular decisionmaker a competitive advantage, at least until competitors are able to imitate the better measure or develop their own better measures.

Returning to the college graduate example, because some graduates of non-top-tier schools are of similar quality to graduates of top-tier schools, an incentive exists for potential employers to develop measures to identify these better graduates. If this can be accomplished at an aggregate cost that is less than the salary premium that may have to be paid to top-tier-school graduates, firms can be expected to make the investment required to develop the more precise measure. In so doing, these firms can reduce their labor costs and may enjoy a competitive advantage. As noted, frequent decisionmakers will have a greater incentive to undertake such investments because the development costs of a superior measure can be spread over a large number of new hires.
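
The investment calculus can be sketched in a few lines; every figure below is invented, and the point is only that spreading a fixed development cost over more hires is what makes the better measure pay.

```python
# Break-even check: is a better screening measure cheaper per hire than the pedigree premium?
development_cost = 300_000      # hypothetical one-time cost of building the improved measure
salary_premium = 15_000         # hypothetical extra salary demanded by top-tier graduates

def worth_developing(hires_per_year, years=5):
    """True if the measure's cost per hire undercuts the salary premium it avoids."""
    cost_per_hire = development_cost / (hires_per_year * years)
    return cost_per_hire < salary_premium

for hires in (2, 10, 50):
    print(f"{hires:>2} hires per year: invest in the improved measure? {worth_developing(hires)}")
```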

It should be acknowledged that making decisions based on a summary measure where there is significant population variation may not be fair to the individuals being evaluated. In the college graduate example, individuals whose characteristics are better than the mean will not be recognized as such, while those with worse characteristics will be protected by the fog of uncertainty. To the extent that large population variances cause infrequent decisionmakers to abstain from making decisions altogether, all individuals are potentially harmed.

If the alternative approach of hiring only from top-tier schools is adopted, it may reduce the risk of poor hires but will also disadvantage higher-quality candidates from second-tier schools. Should frequent decisionmakers adopt improved measures to identify better graduates from second-tier schools, these graduates likely will initially be underpaid. But their salaries can be expected to increase as other employers imitate and adopt these improved measures. Concurrently, less capable graduates of second-tier schools will no longer be shielded and can be expected to experience a relative decline in salaries.

Although infrequent decisionmakers have limited incentives to develop better measures, third parties may nonetheless do so. Especially in regard to high variance populations, the creation of improved summary measures, although too costly to be developed by individual decisionmakers, may still be undertaken by third parties if they can sell the measures to a sufficient number of consumers.

In the case of hiring, employment agencies often serve this function for infrequent decisionmakers. The agencies incur the costs of identifying the better members of a large-variance population and spread those costs across numerous placements with smaller firms. Agencies can be compensated by employers, by job seekers, or by both.

With respect to consumer products, consider the marketing of used cars. The summary measure—stereotype—is that used cars are inferior. In fact, some are of very high quality. Some car dealers have identified higher-quality used cars and market them as “certified” merchandise guaranteed to meet certain standards. And the chain used-car dealership CarMax has developed a reputation for offering better used vehicles with limited warranties.

In summary, decisionmakers can make rational decisions based on stereotypes under a wide variety of circumstances. Exceptions may occur when decisions are made infrequently or when the decisionmaker would incur extensive losses should a poor outcome occur. In such cases decisions are either not made or a substitute, typically higher-cost, approach is adopted.

Decisions based on stereotypes also may not be considered fair to those individuals affected by the decisions. However, natural economic incentives exist to encourage development and adoption of better decision-making criteria. Indeed, a significant business opportunity exists for anyone who can develop improved selection measures where population variances are large. Although adopting better measures will in general improve decision outcomes for decisionmakers and for better than average members of stereotyped groups, inferior members of these groups can be expected to fare worse as they are no longer shielded by the fog of uncertainty.

Mr. Hoffer is a transportation economist, formerly with the Federal Aviation Administration.  Contact him at snhoffer@aol.com.  See the Contributors page for more about Mr. Hoffer.

Innovators and Regulators Meet to Discuss Blockchain in DC, by Cynthia M. Gayton, Esq. — August 11, 2016

On August 2, 2016, I volunteered to help register attendees at a blockchain conference in Washington, DC sponsored in part by Fintech Worldwide, Ltd.  I was looking forward to learning about engaging blockchain product solutions, but this being a DC conference, there were only a few commercial and non-profit early adopters sprinkled among the lawyers and regulatory agencies. The event was very well attended, and the speakers were sincere and considerate. In between my volunteering obligations, I was able to listen to some of the panel discussions, and what follows is the result of my cursory note taking and impressions.

The first panel discussion to which I was able to pay full attention was called “Smart Contracts and Decentralized Autonomous Organizations (DAO).” The moderator was Luis Carranza, a founder of the Blockchain Conference. The panelists were Michael Abramowicz (George Washington University School of Law), Drew Hinkes (Berger Singerman), Jenny Cieplak (Crowell & Moring) and Jacob Dienelt (Brian Kelly Capital Management). The panel description stated that the purpose of the panel was to “look at the implications of [sic] some of the brightest minds in this emerging industry are doing and dreaming up.” In addition to the general fear-tinged commentary about technology and its potential illegal uses, such as money laundering, the discussion included the failure of The DAO (the business entity) to prevent several million dollars from being withdrawn from owners’ accounts, even when the organization became aware of the problem. The immutable/unchangeable nature of Ethereum smart contract code was at fault. [Author’s note: The code was written in the Solidity programming language by a company called Slock.it. There was a flaw in the program that allowed a known Ethereum user to move money from other Ethereum accounts to its own.] Jacob Dienelt appeared to be the only panel member who felt that the result of the software error should have been left to stand and that the financial losers be left to lick their wounds because “the code is the law” in this particular software development environment.

[Author’s note: The code is not the only law. In many circumstances, laws apply even when they are not invoked or are intentionally waived. If the parties had agreed to permit this sort of transaction, however, who is harmed? Specifically, was there any fraud? In the alternative, depending on where you live, the Uniform Commercial Code or its international equivalent, the rules set forth under the United Nations Convention on Contracts for the International Sale of Goods, may apply if, as many have suggested, cryptocurrency is not currency but a good. Is this distinction really a marketing/trade name problem? Would any of the cryptocurrency controversies have happened if, for example, the commodity was called “Purple” and an exchange was created for aficionados of baseball playing cards? How is this commodity different from, as another example, buying antique Pennsylvania currency on eBay?]

Drew Hinkes was asked to enlighten everyone about a recent Florida case where a bitcoin exchanger had money laundering charges against him dismissed. [Author’s note: Michael Espinoza was accused of selling Bitcoin for cash when the sale was conducted with undercover police officers. Florida has had its share of bitcoin related criminal activity, which suggests that the state has a general crime problem and not just a bitcoin currency problem.]

The next panel discussion, “Blockchain Use Cases in Commerce,” moderated by Mr. Carranza with panelists Conan French (International Institute of Finance), Douglas Pearce (World Bank) and Tori Adams (Booz Allen), brought some social policy as well as possible applications to the discussion. Mr. Pearce pointed out that the technology could be used to bring money access to those who are “unbanked” and initiate a financial inclusion opportunity. According to Mr. Pearce, a significant problem for those with limited resources is the lack of proof of collateral for purposes of credit assessment, and he is hopeful that blockchain technology will help remedy this problem. [Author’s note: Ms. Adams talked about The Smart City – which to this privacy and private property advocate brought to mind a parade of horribles where individuals could sell personally identifiable information in exchange for services. How a person could, on the fly, make sure that the information would not be resold and thereby immediately diminish the value of that information to the individual was not mentioned. In addition, what guarantee would that person have that the information would not be used subsequently for identity theft? Needless to say, I am not a fan.]

The following three panels were populated significantly with attorneys and government agencies, including the U.S. Commodity Futures Trading Commission, the Federal Trade Commission, and the National Institute of Standards and Technology, as well as a few businesses. One standout from this bunch was Natalee Binkholder from the office of Representative Mick Mulvaney, who recommended that there be regulatory restraint, such as “safe harbors,” for this new technology. Unsurprisingly, the agencies were concerned about cryptocurrency users entering into fraudulent transactions.

The keynote speaker was Stephen Taylor, DC Commissioner, Department of Insurance, Securities, and Banking. He envisioned a blockchain movement that would support economic inclusion for the unbanked, as well as an opportunity to enable microinsurance offerings.

The last panel was moderated by Joe Colangelo (Consumers’ Research) with panelists Yorke Rhodes III (Microsoft), Meeta Yadav (IBM), and Brian Hoffman (Open Bazaar). The panel talked about connecting legacy computer systems to the blockchain and how this could be accomplished. They also talked about how blockchain technology could reduce fraud because everyone along the blockchain would have an identical copy of the transaction terms and conditions. All of the panelists expressed their dedication to blockchain’s future and appeared genuinely excited. [Author’s note: If any combination of legacy systems and future technology innovators could devise a worldwide infrastructure enabling the best each could offer, these folks could. Indeed, Mr. Rhodes is dedicating eight years to the blockchain project.] The optimism and excitement on the panel created a perfect end to the conference.

Further reading:

The Blockchain Revolution by Don Tapscott and Alex Tapscott, 2016

The Age of Cryptocurrency: How Bitcoin and Digital Money Are Challenging the Global Economic Order by Paul Vigna and Michael J. Casey, 2015

Cynthia M. Gayton is an attorney, educator, and speaker.  Contact her at cynthia.gayton@gayton-law.com.  See the Contributors page for more about Ms. Gayton.  Nothing in this post is purported to be legal advice.

The Enduring Legacy of Henry Manne: A Review of the 2016 Law & Economics Center Conference by Theodore A. Gebhard — January 26, 2016

On Friday, January 22, I attended the Fifth Annual Henry G. Manne Law & Economics Conference, which was sponsored by the Law & Economics Center at George Mason University.  The Conference was held in conjunction with the Twelfth Annual Symposium of the Journal of Law, Economics, & Policy, a publication of the GMU School of Law.  This year’s Conference was entitled “The Enduring Legacy of Henry G. Manne” and featured three panels of academic experts, all of whose research draws substantially on the work of the late Henry Manne.  Manne was the longtime Dean of the Law School and a trailblazing scholar of corporate governance and corporate finance.

Regrettably, owing to inclement weather, the day’s program had to be truncated, including dropping the scheduled keynote luncheon speech by former Securities and Exchange Commission commissioner Kathleen Casey.  Even under the shortened time frame, however, the panel discussions were thorough and highly informative.

Panel 1:

The first panel focused on Manne’s seminal 1965 Journal of Political Economy article, “Mergers and the Market for Corporate Control.”  Speakers included GMU business professor Bernard Sharfman and University of Chicago law professor Todd Henderson.

In the JPE article, Manne developed the then-novel insight that when a corporation is afflicted with inefficiencies owing to poor management, an incentive is created for others to take control of the corporation, eliminate the managerial inefficiencies, and be rewarded with an increase in share price.  What this means is that there is a functioning market for corporate control.

Drawing on this insight, Professor Sharfman considered whether activist investors might be able to perform the same function, specifically activist hedge funds.  One key difference between activist investors and take-over investors is that the former typically are not able to obtain a controlling interest in a corporation.  Although the interest can be significant, it falls short of the authority to dictate managerial changes.  Therefore, when activist investors see managerial inefficiencies, they must rely principally on persuasion to influence corrective action.

Corporate boards, however, often, if not most of the time, resist this activism.  In some instances, the boards might go so far as to sue in court for relief.  When this occurs, the courts are bound by the “business judgment rule,” which provides for deference to the decisions of corporate boards.  Sharfman contends that, although the business judgment rule is based on solid grounds and usually works well as a legal rule, it fails under the circumstances just described, i.e., when there are managerial inefficiencies but activist investors are unable to obtain a controlling interest in the company.  Sharfman concludes, therefore, that it might be time for the courts to carve out, albeit carefully, an exception to the business judgment rule in cases where the evidence points to no plausible business reason to reject the activists’ position.  In this circumstance, a court can find that the board’s resistance likely owes to no more than an attempt to protect an entrenched management.

Building on Manne’s insight of the existence of a “market” for corporate control, Professor Henderson considered the possibility of such diverse hypothetical markets as (1) markets for corporate board services, (2) markets for paternalism and altruism, and (3) markets for trust.  In the first instance, Henderson posited the possibility that shareholders simply contract out board services rather than having a board solely dedicated to one company.   So, for example, persons with requisite expertise could organize into select board-size groups and compete with other such groups to offer board services to the shareholders of any number of separate corporations.

In the second instance, Henderson, noting the growing modern viewpoint that companies have paternalistic obligations toward stakeholders that go beyond shareholder interests, suggested that the emergence of a competitive market to meet such obligations would likely be superior to relying on evolving government mandates.  Competitive markets, for example, would avoid delivering “one size fits all” services and, in so doing, be better able to delineate beneficiary groups on the basis of their specific needs, i.e., needs common within a group but diverse across groups.  Inefficient cross-subsidization could thus be mitigated.  In this same vein, Henderson suggested the possibility of a market for the delivery of altruistic services.  He noted that the public is increasingly demanding that corporations, governments, and non-profits engage in activities deemed to be socially desirable.  As with the provision of paternalism, competitively supplied altruism whereby companies, governments, and non-profits comprise the incumbent players would yield the positive attributes of competition.  These would include the emergence of alternative mixes of altruistic services tailored to the specific needs of beneficiaries and efficient, low cost production and delivery of those services.

In the third instance, Henderson posited the idea of a market for trust.  Here he offered the example of the ride-sharing company, Uber.  Henderson suggested that Uber not only competes with traditional taxis, but, perhaps more importantly, competes with local taxi commissions.  Taxi commissions exist to assure the riding public that it will be safe when hiring a taxi.  Toward that end, taxis are typically required to have a picture of the driver and an identifying number on display, be in a well maintained condition, and have certain other safety features.  All of these things are intended to generate a level of trust that a ride will be safe and uneventful.  According to Henderson, Uber’s challenge is to secure a similar level of trust among its potential customers.  New companies shaking up other traditional service industries face the same challenge.  Henderson concludes, therefore, that these situations open up entrepreneurial opportunities to supply “trust.”  Although Henderson did not use the example of UL certification, that analogy came to mind.  So, for example, there might be a private UL-type entity in the business of certifying that ride sharing (or any other new service company) is trustworthy.

In commenting on Panel 1, Bruce Kobayashi, a GMU law professor and former Justice Department antitrust economist, offered one of the more interesting observations of the day.  Professor Kobayashi reminded the audience that Manne’s concern in his JPE article was principally directed at antitrust enforcement, not corporate law.  In particular, Manne argued that the elimination of managerial inefficiency should rightly be counted as a favorable factor in an antitrust analysis of a merger.  In fact, however, although the DOJ/FTC Horizontal Merger Guidelines allow for cognizable, merger-specific efficiencies to be incorporated into the analysis of net competitive effects, the agencies historically only consider production and distribution cost savings, not the likelihood of gains to be had from jettisoning bad management.  Kobayashi suggested that this gap in the analysis may be due simply to the compartmentalization of economists among individual specialties.  For example, in his experience, he rarely sees industrial organization (antitrust) economists interacting professionally with economists who study corporate governance.  Thus, because Manne’s article, through the years, has come to be classified (incorrectly) as solely a corporate law article, its insights have unjustifiably escaped the attention of antitrust enforcers.

Panel 2:

Panel 2 focused on Henry Manne’s seminal insights about “insider trading.”  Manne was the first to observe that, notwithstanding the instinctive negative reaction of many, if not most, people to insider trading (“It’s just not right!”), the practice actually has beneficial effects in terms of economic efficiency.  By more quickly incorporating new information about a business’s prospects into share price, insider trading can accelerate the movement of that price toward a market clearing level, which signals a truer value of a company and thus enhances allocative efficiency in capital flows.

The two principal speakers on Panel 2 were Kenneth Rosen, a law professor at the University of Alabama, and John Anderson, a professor at the Mississippi College School of Law.  I will limit my comments to Anderson.

In addition to being a lawyer, Professor Anderson is a philosopher by training, holding a Ph.D. in that subject.  He began his presentation by noting that ethical claims are often vague in a way that economic analysis, given its focus on efficiency, is not.  Although in any empirical study, there can be problems with finding good data and there can be measurement difficulties, the analytical framework of economics rests on an objective standard.  Either a given behavior increases efficiency, reduces efficiency, or is benign toward efficiency.  Anderson also observed that ethics merely sets goals, while economic analysis determines the best (i.e., most efficient) means to achieve those goals.

Putting these distinctions into the context of insider trading, Anderson finds that insider trading laws and enforcement of those laws are likely overreaching.  The current statutory scheme rests largely on ethical notions, namely the issue of unfairness (“It’s just not right!”).   As such, it rests on vague standards.  The result is a loss of economic efficiency and costly over-compliance with the laws.  In the end, not only are shareholders hurt (e.g., by costly compliance and litigation), but the larger investing public is also harmed owing to slower price adjustments.  Anderson made the point that, although acts of greed may be bad for individual character, such acts are not necessarily bad for the entire community.

In concluding his presentation, Anderson proposed that some form of licensing of insider trading might best accommodate the competing ethical and efficiency goals.  Under a licensing system, insider trading could be permissible in certain circumstances, but under full transparency.

Panel 3:

Panel 3 considered the effects of required disclosures under federal securities laws.  The two principal speakers were Houman Shadab of New York Law School and Brian Mannix of George Washington University.

Despite being a novice in the subject matter of the day, I felt that I followed the discussions of the first two panels reasonably well.  Panel 3, however, was difficult for me, as the speakers made frequent references to statutory provisions and SEC interpretations of relevant securities law, with which I have no familiarity.  Nonetheless, a portion of the discussion intrigued me.

In particular, Professor Mannix discussed the issue of high frequency trading (HFT), a hot topic just now because of Michael Lewis’s recent book, Flash Boys.  As I understand it, computer technology makes it possible for trades to take place within microseconds.  With ever more sophisticated algorithms, traders can attempt to beat each other to the punch and capture arbitrage gains even from incredibly small price differences.

Of particular interest to regulators is the likelihood of a tradeoff inherent in HFT that determines net efficiency effects.  On the one hand, HFT has the potential to enhance efficiency by accelerating the movement of a share price to its equilibrium.  Given that this movement is tiny and occurs within a microsecond, however, any such efficiency gain is likely to be very slight.

On the other hand, a lot of costly effort goes into developing and deploying HFT algorithms.  Yet, much of the payoff may be no more than a rearrangement of the way the arbitrage pie is sliced rather than any enlargement of the pie.  If so, the costs incurred to win a larger slice of the pie, costs without attendant social wealth creation, likely exceed any efficiency gain.  It may make sense then for regulators to create some impediments to HFT.  Toward this end, Mannix offered some possible ways to make HFT less desirable to its practitioners.  The most intriguing was to put into place technology that would randomly disrupt HFT trades.  Key to profiting substantially from HFT is the need to trade in very large share volumes because the price differences over which the arbitrage takes place are so small.  Random disruptions would make such big bets riskier.*

Final Thoughts:

All in all, I found each of the panels to be highly informative.  The cast of speakers was well selected, and each presentation was well made.  Significantly, each panel did a very good job of tying its discussion to Henry Manne’s work and influence.  In addition to learning a great deal about current hot issues in corporate law and how economic analysis informs those issues, conference attendees surely left with an even higher appreciation of Henry Manne.

On the logistical front, in light of last minute adjustments necessitated by weather conditions, the organizers pulled off the conference flawlessly.  A high standard was set that will be difficult for the organizers of next year’s offering to surpass.

Finally, on a personal note, Henry Manne was my law school Dean at GMU and also a neighbor for many years in my condominium.  I remember Henry most, however, because of the dramatic impact that his unique curriculum at GMU had on my intellectual development.  That curriculum, which emphasized the application of economic analysis to the law, literally changed the way I think about the world.  For this reason, I was especially pleased that each of the speakers at the Conference took time to comment on the influence that Manne had on them.  Some had never met Manne in person, but nonetheless were deeply influenced by his scholarly work.  Others who knew Manne more intimately related some wonderful anecdotes.  Those alone would have made the day worthwhile.

Notes:

* For a further and more detailed explanation of the tradeoffs inherent in HFT, readers are encouraged to see a blog post by Todd Zywicki, the Executive Director of the LEC, which appeared here.

Theodore A. Gebhard is a law & economics consultant residing in Arlington, Virginia.  See the Contributors page for more about Mr. Gebhard.  Contact him at theodore.gebhard@aol.com.  For more about Henry Manne, see the several tributes to him upon his passing on David Henderson’s blog, including one by Mr. Gebhard.

Revenue Generation from Law Enforcement—Unintended Consequences? by Stefan N. Hoffer — January 24, 2016

Should law enforcement be conducted with an objective of maximizing public revenues? Should police departments be allowed to retain revenues they generate through fines and forfeitures?  An argument might be advanced that pursuing an objective of revenue generation will encourage law enforcement to enforce the law more effectively, particularly if revenues generated are returned to law enforcement agencies.  This essay argues that such policies may have surprising and unintended results in many common, everyday situations.

From time to time there are reports in the media of law enforcement seizing property using asset forfeiture laws. The crimes involved are typically serious, and convicted violators are subject to severe penalties including large fines. These actions occur in situations where it is difficult and expensive to identify offenders correctly given the large pool of unidentified offenders.  Because of the difficulty and expense involved, pursuing an objective of maximizing revenues and then recycling them back into enforcement activity would create incentives for enhanced enforcement efforts against a seemingly inexhaustible pool of offenders.

Although this result may hold under circumstances where the pool of offenders remains large, it would be too easy to conclude that attempting to maximize revenues will universally result in improved enforcement. Instead, let us consider situations where it is easy to identify and fine most offenders correctly.  In such situations, it may well be that pursuing a revenue objective can lead to routine, systematic under-enforcement of everyday laws—for example, traffic regulations.

To see how this might occur, consider a nobleman who is the owner of a hunting preserve. The nobleman knows that to ensure a perpetual supply of game, he must not over-hunt the preserve.  If he does, the volume of game will decline and in the extreme become non-existent.  The “wise” nobleman will only engage in limited hunting so as to ensure a continuous stream of game in perpetuity.

By analogy, a traffic intersection controlled by signals or a road with high occupancy vehicle (HOV) lanes can be thought of as a hunting preserve. If law enforcement seeks to eliminate red light running or HOV violators, it can do so by rigorous enforcement—the violators are hunted to extinction.  Once the consideration of revenue generation is introduced, however, a strong incentive arises to under-enforce the law.  The rationale is simple:  if you enforce to the extent that there are few violators, there will be little revenue.

More specifically, if revenue maximization becomes a significant consideration, there will be a level of enforcement short of complete enforcement that will maximize revenue. Enforcing more rigorously, say more days per month, will, other things constant, generate more revenue.  But other things are not constant.  As the number of enforcement days grows, the number of violators to be caught on any one day will decline—people will learn that if they commit violations they will likely be caught.  At some point, the decline in violators will just offset the gain from enforcing one more day per month.
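
A stylized calculation, with an entirely invented deterrence response, illustrates the interior maximum described above: revenue peaks at partial enforcement and falls to zero when enforcement is thorough enough to eliminate violators.

```python
# Monthly revenue = enforcement days x violators ticketed per day x fine per ticket.
def violators_per_day(days_per_month):
    """Assumed deterrence response: more enforcement days, fewer violators caught per day."""
    return max(0, 100 - 4 * days_per_month)

FINE = 150  # hypothetical dollars per ticket

def monthly_revenue(days):
    return days * violators_per_day(days) * FINE

best = max(range(31), key=monthly_revenue)   # revenue-maximizing number of enforcement days
for days in (5, best, 30):
    print(f"{days:>2} enforcement days/month -> revenue ${monthly_revenue(days):,}")
```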

Casual observation lends support to the notion that revenue maximization can lead to under-enforcement:

Interstate 66 HOV Lanes Inside the Washington, D.C. Beltway.

The segment of I-66 between Washington, D.C. and the beltway around the city has HOV lanes. Vehicles using these lanes during commuting hours are required to have a minimum number of occupants.  Those that do not are subject to a fine.  Because access to I-66 is controlled, enforcement is straightforward.  Police stationed at the exits can easily stop cars that do not have the minimum number of riders and issue tickets—and they do, but not on all days or even many days.  As a consequence, violators are numerous and road congestion and delay more extensive than they would be if most violators were eliminated.  This raises the question of why there is not more aggressive enforcement.  A typical governmental response is that limited resources do not allow it.  But police departments, like other government service providers, do not staff for the mean demand for their services, but rather for a certain percentile of the peak.*  This strategy allows the police to meet most, but not all, randomly occurring emergencies.  It also means that most of the time the police department will have excess staff on duty.  So why can these extra staff not be utilized, absent an emergency, for duties such as enforcing HOV regulations?  As suggested, a reason consistent with observed behavior is that more revenue will be generated by enforcing such regulations only sporadically, thus ensuring that there are plenty of violators to be fined when enforcement does take place.

Red Light Cameras in Central Florida.

Several small and medium-sized cities in central Florida have installed “red light cameras” to ticket motorists that run red lights. The cameras are typically provided by a private contractor who installs and operates them in cooperation with law enforcement.  The cities share the revenue that the cameras generate with the private contractor, subject to a minimum guarantee that the cities pay the contractor irrespective of revenues collected.  Initially these cameras made most jurisdictions significant amounts of money.  But, over a period of time, they also drastically reduced the number of red light runners.  Revenues from the cameras precipitously declined, in some cases to the point that some local jurisdictions were losing money on them after meeting their contractual guarantee to the commercial provider.  As a result, some jurisdictions have not renewed their contracts and are removing the cameras, sometimes even after the private contractor has offered to reduce substantially the minimum payment guarantee. Other jurisdictions have accepted the idea that the cameras improve public safety and that this improved safety comes at a cost—what they must pay the private contractor over and above revenues generated so as to meet the payment guarantees.

As outlined above, a revenue-maximizing approach to law enforcement can have unintended and surprising consequences. In situations where it is difficult to identify and catch offenders, it can provide resources and incentives for aggressive enforcement.  In other situations, where potential offenders can be easily and accurately identified and caught, focusing on revenue maximization has the potential to lead to deliberate, significant under-enforcement.

*  The Federal Aviation Administration, for example, uses this model for staffing air traffic controllers.  See the FAA’s document, A Plan for the Future: 10-Year Strategy for the Air Traffic Control Workforce, at page 20, available here.

Stefan N. Hoffer is a transportation economist, formerly with the Federal Aviation Administration. His areas of specialization include benefit-cost analysis and valuation of non-market traded items.  See the Contributors page for more about Mr. Hoffer.  Contact him at snhoffer@aol.com.

 

Kelo v. City of New London after Ten Years — Book Review by Joel C. Mandelman — November 6, 2015

The Grasping Hand: Kelo v. City of New London & the Limits of Eminent Domain by Ilya Somin, University of Chicago Press – 2015

In his book, The Grasping Hand: Kelo v. City of New London & the Limits of Eminent Domain, Ilya Somin, a law professor at George Mason University Law School, traces the history of eminent domain and 5th Amendment takings from colonial times to the 2005 Supreme Court decision in Kelo v. City of New London.  This is not a minor intellectual topic. The protection of the rights of private property owners is a cornerstone of any free country and of any free market economy.  In the United States, the 5th Amendment to the Constitution provides that “private property shall [not] be taken for public use without payment of just compensation” (emphasis added).  Notwithstanding these limiting words, the U.S. Supreme Court has sanctioned the government (primarily local and state governments) taking private property and transferring it to other private parties whose use of it may be of little benefit to the general public.

The most recent example of this disfigurement of the Constitution was found in New London, Connecticut, where, in 2005, the Court sanctioned the seizure of several private homes so that New London could transfer the property to the Pfizer pharmaceutical corporation in order to build a new corporate headquarters, a luxury hotel, and a corporate conference center.  The principal authority for this seizure was the 1954 Supreme Court ruling in Berman v. Parker, in which the Court held that a seizure need not be for a traditional public use, e.g., a school, a hospital, or a highway, but merely that it have a “public purpose,” such as producing greater tax revenues by ending “urban blight.”  (Whether the seized property was, in fact, blighted is left to the judicially unreviewed discretion of the government agency seizing the property.)  The saddest irony of the Kelo case was that, after all of the litigation and the loss of scores of homes, Pfizer changed its mind and the planned corporate park and hotel were never built.

The vast majority of seizures — since the Berman decision more than 80 percent of all private property condemnations — have been for non-public uses such as building sports arenas, corporate office space, and general urban renewal, and not for traditional public uses such as building schools or highways.  The Supreme Court’s rationalization has always been that since “everyone benefits” from a neighborhood being generally improved, there is an inherent “public purpose” to the seizure. Thus, even if the seizure was not for a traditional public use, it is sanctioned by the 5th Amendment.  That this is not what the plain language of the 5th Amendment states seems to have escaped the Court’s notice.

As Somin discusses, courts have been reluctant to second guess what a state legislature or a city council thinks is “blight” requiring government action to eliminate it.  This judicial reluctance is a moral and constitutional cop-out. There has been no reluctance on the part of federal or state judges to second guess the legitimacy of search warrants, confessions, the providing of a fair trial (i.e. due process of law), the scope of government limitations on freedom of speech or the press, the scope of the right to bear arms or the meaning of the 14th Amendment’s equal protection and due process clauses; so why the reluctance to judge the validity of takings under the 5th Amendment? No explanation is offered, nor have judges ever tried to explain their sudden judicial restraint in this particular area of constitutional law.

Somin traces the history of 5th Amendment takings dating back to post-colonial times when many takings were made for the purpose of building privately owned dams that had a generalized public benefit by creating water power or the construction of privately owned turnpikes.  Many of the more controversial public benefit takings did not start until the New Deal, or thereafter, when urban renewal became all the rage. The trend became the seizure of private property – which typically had housing already on it – in order to use it for the building of massive public housing projects. That many of those public housing projects later became worse slums than the smaller, privately owned “slums” that they “renewed” is discussed at some length in the book. Sadly, no government official, or agency, has ever been held accountable for these widespread, often well publicized, failures. As Somin discusses, much of the impetus for these renewal projects came not from elected legislators or elected executive branch officials but rather from real estate and construction industry developers who were also major sources of campaign funds.

Perhaps the most outrageous example of crony-capitalism seizures of private property was the infamous Poletown case, in which the City of Detroit seized thousands of private homes so that General Motors could build an automobile factory.  The Michigan Supreme Court sanctioned this theft on the grounds that the promised creation of 5,000 new jobs was a public use justifying the seizure of thousands of private homes and businesses. The cruelest irony was that fewer than half of the promised jobs were ever created and, several years later, a newly constituted Michigan Supreme Court partially reversed its Poletown decision in County of Wayne v. Hathcock, allowing takings in “blighted” areas but prohibiting takings for economic development.

The public reaction to the U.S. Supreme Court’s 2005 Kelo decision was swift and harsh. It was widely denounced as a threat to all private property rights. After all, if Kelo were followed to its logical conclusion, there would be nothing to stop the government from seizing 100 private homes so that a private developer could construct a high-rise apartment building that would pay more in local property taxes.

The problem is that many of the legislative attempts to prevent another Kelo-like case from ever happening again were half-hearted and possibly made in bad faith.  Although state laws were changed to bar 5th Amendment takings for economic development, a gaping – and likely intentional – loophole was left in those statutes. There was no prohibition of takings to end undefined, judicially unreviewable, allegations of economic or social “blight.”  Almost any taking barred on economic development grounds could still be “justified” on the grounds that the affected property was “blighted.”

Somin carefully traces and analyzes both the history of the takings clause and the development of the “public purpose versus public use” expansion of its scope to the point where no private property is truly safe from any government bureaucrat or private developer with enough political clout to get its way.  Many of the state laws passed in response to the Kelo decision need to be substantially strengthened and federal law needs to be rewritten to bar illegitimate seizures of private property for other private, or quasi-private uses that primarily benefit the political party controlling the local government and its crony-capitalist allies.

This is an important book that, because of its arcane Constitutional premise, has not received the widespread publicity that it deserves. The Kelo decision and the weak responses to it by many state legislatures have left many citizens with a false sense of security.  Eternal vigilance truly is the price of liberty and greater vigilance, and efforts, are required to prevent Kelo from rearing its ugly, if somewhat shrunken, head again.

Joel C. Mandelman is an attorney practicing in Arlington, Virginia.  He has filed amicus briefs with the U.S. Supreme Court on behalf of Abigail Fisher in her challenge to the University of Texas’ racially preferential admissions policies and on behalf of the State of Michigan in defense of its state constitutional amendment barring all racial preferences in college admissions, government hiring and government contracting.  See the Contributors page for more about Mr. Mandelman.  Email him at joelcm1947@gmail.com.

(Correction, Nov. 8:  An earlier version of this post misstated the holding in Hathcock as fully reversing the Poletown case. Ed.)

(Correction, Nov. 20:  In an email to this site’s administrator, Professor Somin points out that “Pfizer was not going to be the new owner or developer of the condemned property.  As explained in the book, they lobbied for the project and hoped to benefit from it, but were not going to own or develop the land themselves.” Ed.)

“Even in a Knowledge-Driven Economy — Things Are Still Kings” by Cynthia M. Gayton, Esq. — October 12, 2015

From September 30 – October 1 of this year, I attended a conference entitled “The IP Platform: Supporting Inspiration and Innovation” sponsored by George Mason University School of Law’s Center for the Protection of Intellectual Property.  The extensive and impressive roster of speakers included keynote speaker David Kappos, law professors from around the country, and innovators of all stripes.

The several panel discussions addressed intellectual property protection in general, and patent protection in particular. There was no dissent amongst the lawyers, professors, or creators regarding the economic engine that is intellectual property.  Having attended similar conferences over the years, I nodded absentmindedly to most of the comments about IP enforcement, economic incentives, piracy, and general bad behavior.

Upon reflection, two things stood out.

One, Morgan Reed, Executive Director of ACT The App Association, talked about the commodification of intellectual property.  He pointed out that once someone has bought a smartphone, any additional purchases related to that smartphone/mobile market were apps.  The device itself was an app delivery system.  For small innovative companies, he stressed the importance of a) creating a trademark/brand, b) services and support related to the brand, and c) employment agreements.  These practical business development pillars had not been articulated so clearly in other presentations.

Two, it appears that in order to make significant returns on any investment in IP or an IP-driven company, a physical manifestation of the IP is crucial.  To illustrate the point, I want to talk about two speakers who were also creators – Marc Beeson, a staff songwriter for Downtown Music Publishing, and Garrett Brown, the Oscar-winning inventor of the Steadicam® camera stabilizer.  Both are in the entertainment business. Both created the products that generate the income on which they live. One, Marc Beeson, is experiencing a severe decline in the returns on investment in the music industry, while Garrett Brown continues to receive royalties derived from the original patent for his camera stabilizer.

Professor Sean O’Connor of the University of Washington School of Law discussed and demonstrated the patent fueled history of the electric guitar, starting from George Beauchamp’s patent and the story of Charlie Christian.

[Image: Guitar]

Musicians, composers, publishers and affiliated industries lament the decline in music sales. Music streaming has increased, but returns to the composers and musicians have been abysmal.  According to Rolling Stone magazine, only 257 million albums were sold in any format in 2014, a drop of 11% from 2013.

The format that has been bucking the trend is vinyl records. In 2014, vinyl sold 9.2 million units.  By way of comparison, according to MBW analysis of Nielsen SoundScan data, in 2007 only 205,000 vinyl albums were sold.

The Kappos keynote contained a clue about what might explain the trend toward the physicality component of IP – its ecosystem.  Mr. Kappos quoted Brad Smith, the CEO of Intuit, and said that functionality is not enough – emotion has to be built into the product.

According to an article entitled, “Here’s Why Music Lovers are Turning to Vinyl and Dropping Digital” by Meg Gibson on Time.com, what might be pushing this phenomenon is consumerism – people like others to know what they have and displaying ownership in public is a way of signaling identity.  According to Nik Pollinger who was quoted in the article “Making our taste in music visible has historically played an important role in such signalling for many people.”  This wasn’t discussed during the conference, but is worth considering as more and more arguments are being made to convert analog entertainment of all formats into digital equivalents.

Does the music delivery system, i.e., digital streaming, inhibit the emotions related to the music product?  Specifically, what do vinyl albums do that streaming does not?  Putting the digital genie back into an analog bottle may not be the answer, but in order for people to continue to invest in what appears to be an ephemeral industry, embedding the IP in a currently unknown or untested physical format may save the industry.

Stephen Haber, a professor at Stanford University and a member of the final panel, Innovation in IP Markets, addressed the positive relationship between a nation’s GDP and the strength of its IP rights. Ben Sheffner, VP of Legal Affairs at the Motion Picture Association of America, said that any successful market economy must have 1) private property rights; 2) freedom of contract; and 3) the rule of law to enforce these rights.

As many of us know, a good conference is one you leave with more questions than answers.  I think the attendees will have more than enough to think about until 2016.

Cynthia M. Gayton is an attorney practicing in Arlington, Virginia.  She is also an adjunct professor of engineering law at The George Washington University School of Engineering and Applied Sciences.  Her Linked-In page is here, and information about her law firm can be found here.  Ms. Gayton can be reached at cynthia_gayton@gayton-law.com.

“What Does King v. Burwell Have to Do with the Antitrust Rule of Reason? A Lot” by Theodore A. Gebhard — July 15, 2015

The first Justice John Marshall Harlan is probably best remembered for being the sole dissenter in Plessy v. Ferguson, the notorious 1896 Supreme Court decision that found Louisiana’s policy of “separate but equal” accommodations for blacks and whites to satisfy the equal protection requirements of the 14th Amendment.  Harlan, a strict textualist, saw no color distinctions in the plain language of the 14th Amendment or anywhere else in what he described as a color-blind Constitution.  Harlan’s textualism did not end there, however.  It was also evident fifteen years later in one of the most famous and impactful antitrust cases in Supreme Court history, Standard Oil Co. of New Jersey v. U.S. The majority opinion in that case, in important respects, mirrored Chief Justice John Roberts’ reasoning in King v. Burwell.  Like King, the majority opinion in Standard Oil was written by the Chief Justice, Edward White in this instance, and in both cases, the majority reasoned that Congress did not actually mean what the clear and plain words of the statute at issue said.  Although concurring in the narrow holding of liability, Justice Harlan in Standard Oil, as Justice Antonin Scalia in his dissent in King, criticized forcefully what he believed to be the majority’s rank display of judicial legislation and usurpation of Congress’s function to fix statutes that may otherwise have harsh policy consequences.  Indeed, Standard Oil demonstrates that both Chief Justice Roberts and Justice Scalia had ample precedent in Supreme Court history.

The Standard Oil case was about whether John D. Rockefeller's corporate empire violated the Sherman Antitrust Act, enacted 21 years earlier, in 1890, which prohibited monopolization, attempted monopolization, and "every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce."  Harlan believed that under the facts of the case, liability could be found within the plain language of the statute.  The majority likewise found that Standard Oil violated the Act, but did so by dint of construing the Act in a way that the Court had previously rejected on several occasions.  Specifically, Chief Justice White used the opportunity to read into the Sherman Act the common law principle of "reasonableness," such that only "unreasonable" restraints of trade would be illegal.  That is, the Court rewrote the statute to say, in effect, "every unreasonable contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade" is prohibited.  In so doing, the Court, by judicial fiat, discarded the plain language of the statute and injected the so-called "rule of reason" into antitrust doctrine.

Notwithstanding that the Court had previously found otherwise, Chief Justice White concluded that the 51st Congress must have had in mind the common law focus on unreasonable restraints of trade when it drafted the Sherman Act.  Otherwise, he believed, the operation of the statute could give discordant results.  The fact that Congress did not make this qualification explicit was of no matter; White's clairvoyance was sufficient to discern Congress's true intent and correct the textual oversight.  Harlan, however, saw this as unwarranted judicial activism and a harmful appropriation of the "constitutional functions of the legislative branches of government."  Echoing today's concerns about judicial overreach, Harlan worried that this constitutionally unauthorized usurpation of legislative power "may well cause some alarm for the integrity of our institutions."

Moreover, in his long and detailed concurrence, Harlan forcefully argued that it is not the Court’s function to change the plain meaning of statutes, whether or not that meaning reflects actual legislative intent.  That is, a judge’s role is to look only at the four corners of a statute, and no more.  It is up to the legislature to fix a statute, if necessary, not the judge.  This principle was even more applicable in the case at hand.  Here, Harlan believed, the plain language of the Act did in fact reflect the actual legislative intent.  Thus, the majority’s contrary position was even more egregious.  That is, the majority simply substituted its preferred reading of the statute 21 years after the fact, notwithstanding contrary contemporaneous evidence.

In this regard, Harlan pointed out that in 1890 the Congress was especially alarmed about growing concentrations of wealth, aggregation of capital among a few individuals, and economic power, all arising from the rapid industrialization that the United States had been experiencing over the previous decades.  Congress, in keeping with the spirit of the age, saw this changing economic climate as requiring bold new law focused on checking the power of trusts.  Specifically, the new climate “must be met firmly and by such statutory regulations as would adequately protect the people against oppression and wrong.”  For this reason, the 1890 Congress, in drafting the Sherman Act, intentionally abandoned common law principles as being too weak to deal with the economic circumstances of the day.  In addition, the Congress wrote criminal sanctions and third-party rights of action into the Act, none of which were a part of the common law.

Finally, Harlan pointedly explained that the Court had itself previously found, in the well-known 1897 decision U.S. v. Trans-Missouri Freight Assn., and reaffirmed in several later decisions, that the Act's prohibitions were not limited only to unreasonable restraints of trade, as that term is understood in the common law.  The first of these decisions, moreover, came far closer in time to the passage of the Act than the current case; if Congress thought the Court to be wrong, it had some 14 years to correct the Court on this issue, and its failure to do so indicated that it approved of the Court's construction.  Harlan thus saw White's reversal of these holdings as no more than "an invasion by the judiciary of the constitutional domain of Congress — an attempt by interpretation to soften or modify what some regard as a harsh public policy."

The activism of Chief Justice White in Standard Oil and nearly all of Justice Harlan's concerns re-emerge in King v. Burwell.  In King, the principal issue was whether, under the Patient Protection and Affordable Care Act (ACA), an "Exchange" (an insurance marketplace) established by the federal government through the Secretary of Health and Human Services should be treated as an "Exchange" established by a state.  The question is important because under the ACA, an insurance exchange must be established in each state.  The statute provides, however, that if a state fails to establish such an exchange, the Secretary of H.H.S. will step in and establish a federally run exchange in that state.  The statute further provides that premium assistance will be available to lower-income individuals to subsidize their purchase of health insurance when such insurance is purchased through an "Exchange established by the State."  The Act defines "State" to mean each of the 50 United States plus the District of Columbia.  The plain language of the statute therefore precludes premium assistance to individuals purchasing health insurance on a federally run exchange.

Notwithstanding the plain language of the Act, however, Chief Justice Roberts, writing for the majority, held that premium assistance is available irrespective of whether the relevant exchange was established by a state or the Secretary.  In effect, the Chief Justice rewrote the pertinent clause, “Exchange established by the State,” to read instead “Exchange established by the State or the Federal Government.”

Much like Chief Justice White more than a century earlier, Chief Justice Roberts reasoned that the Congress could not have actually meant what the plain text of the Act said and that, if this drafting oversight were not corrected by the Court, serious discordant consequences would result.  Also like his predecessor, Chief Justice Roberts came to this conclusion despite evidence suggesting that the plain language is exactly what Congress intended.  According to the now-public remarks of Jonathan Gruber, a chief architect of the Act, by limiting premium assistance only to purchases made on state-established exchanges, the Congress intended to create an incentive for each state to establish an exchange.  Even so, the Chief Justice discerned otherwise (perhaps because in hindsight the incentive did not work and, as a result, the consequences for the operation of the Act would be severe) and held that Congress must have intended "Exchange" for purposes of premium assistance to encompass both state- and federally established exchanges.  That is, just as Chief Justice White found, 21 years after its passage, that the plain text of the Sherman Act did not contain the full intended meaning of the words in the Act, Chief Justice Roberts similarly found the plain text of the ACA to fall short of its true meaning, notwithstanding that Congress had done nothing to change the text since its 2010 enactment.

The parallel between the two cases does not stop with the majority opinions.  In King, Justice Scalia, a textualist like Justice Harlan, echoed the same concerns that Harlan had in Standard Oil.  In his dissent, Scalia states, for example, that "[t]he Court's decision reflects the philosophy that judges should endure whatever interpretive distortions it takes in order to correct a supposed flaw in the statutory machinery.  That philosophy ignores the American people's decision to give Congress all legislative Powers enumerated in the Constitution. … We lack the prerogative to repair laws that do not work out in practice. … 'If Congress enacted into law something different from what it intended, then it should amend the statute to conform to its intent.'"  That is, it is not up to the Court to usurp the legislative functions of Congress in order to fix the unintended consequences of a statute.  Scalia goes on, "'this Court has no roving license to disregard clear language simply on the view that Congress must have intended something broader.'"  Scalia concludes by suggesting that, to the detriment of "honest jurisprudence," the majority "is prepared to do whatever it takes to uphold and assist [the laws it favors]."

So we can only conclude that the controversy surrounding Chief Justice Roberts's reasoning in King is anything but new.  Textualists have been sounding alarms about judicial overreach for decades.  Whether or not one believes that Chief Justice Roberts assumed a proper judicial role, it is undeniable that he had precedent for doing what he did.  Similarly, it is undeniable that Justice Scalia's concerns are well grounded in Court history.  One other certainty is that, just as the judicial creation of the "rule of reason" has had a significant impact on the administration of antitrust law over the last 100-plus years, Chief Justice Roberts's rewrite of the ACA will have a lasting impact, not only on the U.S. health insurance system but also in sustaining the self-authorized prerogatives of judges.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Forecasting Trends in Highly Complex Systems: A Case for Humility” by Theodore A. Gebhard — June 20, 2015

“Forecasting Trends in Highly Complex Systems: A Case for Humility” by Theodore A. Gebhard

One can readily cite examples of gross inaccuracies in government macroeconomic forecasting.  Some of these inaccurate forecasts have been critical to policy formation that ultimately produced unintended and undesirable results.  (See, e.g., Professor Edward Lazear, "Government Forecasters Might as Well Use a Ouija Board," Wall Street Journal, Oct. 16, 2014)  Likewise, the accuracy of forecasts of long-term global warming is coming under increasing scrutiny, at least among some climate scientists.  Second looks are suggesting that climate science is anything but "settled."  (See, e.g., Dr. Steven Koonin, "Climate Science and Interpreting Very Complex Systems," Wall Street Journal, Sept. 20, 2014)  Indeed, there are legitimate concerns about the ability to forecast directions in the macroeconomy or long-term climate change reliably.  These concerns, in turn, argue for government officials, political leaders, and others to exercise a degree of humility when calling for urgent government action in either of these areas.  Without such humility, there is the risk of jumping into long-term policy commitments that may in the end prove to be substantially more costly than beneficial.

A common factor in macroeconomic and long-term climate forecasting is that both deal with highly complex systems.  When modeling such systems, attempts to capture all of the important variables believed to have a significant explanatory effect on the forecast prove to be incredibly difficult, if not entirely a fool's errand.  Not only are there many known candidates, there are likely many more unknown ones.  In addition, specifying functional forms that accurately represent the relationships among the explanatory variables is similarly elusive.  Simple approximations based on theory are probably the best that can be achieved.  Failure to solve these problems (omitting important explanatory variables and misspecifying functional forms) will seriously confound the statistical reliability of the estimated coefficients and, hence, of any forecasts made from those estimates.
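To make the omitted-variable point concrete, here is a minimal simulation sketch; it is not drawn from the article, and the variable names and coefficients are hypothetical.  It fits an ordinary least squares regression twice, once with both explanatory variables and once with the second variable left out, and shows how the estimate on the remaining variable absorbs the missing effect:

```python
# A minimal sketch (not from the article) of omitted-variable bias.
# Hypothetical data: y truly depends on x1 and x2, but the second
# regression leaves x2 out, as a mis-specified forecasting model would.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)              # x2 is correlated with x1
y = 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)    # true relationship

# Correctly specified model: regress y on a constant, x1, and x2
X_full = np.column_stack([np.ones(n), x1, x2])
beta_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Mis-specified model: x2 omitted
X_short = np.column_stack([np.ones(n), x1])
beta_short, *_ = np.linalg.lstsq(X_short, y, rcond=None)

print("x1 coefficient, full model:  ", round(beta_full[1], 2))   # about 2.0
print("x1 coefficient, omitting x2: ", round(beta_short[1], 2))  # about 4.4 -- biased
```

Any forecast built from the second set of estimates inherits that distortion, which is the sense in which omitted variables and misspecified functional forms confound the reliability of the estimated coefficients.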

Inherent in macroeconomic forecasting is an additional complication.  Unlike models of the physical world, where the data are insentient and relationships among variables are fixed in nature, computer models of the economy depend on data samples generated by motivated human action and on relationships among variables that are anything but fixed over time.  Human beings have preferences, consumption patterns, and levels of risk acceptance that regularly change.  This constant change makes coefficient estimates derived from historical data an unsound basis for forecasting the future.  Moreover, there is little hope for improved reliability over time so long as human beings remain sentient actors.

By contrast, models of the physical world, such as climate science models, rely on unmotivated data and relationships among variables that are fixed in nature.  Unlike human beings, carbon dioxide molecules do not have changing tastes or preferences.  At least in principle, as climate science advances over time with better data quality, better identification of explanatory variables, and better understanding of the relationships among those variables, the forecasting accuracy of climate change models should improve.   Notwithstanding this promise, however, long-term climate forecasts remain problematic at present.  (See Koonin article linked above.)

Given the difficulty of modeling highly complex systems, it would seem that recent statements by some of our political, economic, and even religious leaders are overwrought.  President Obama and Pope Francis, for example, have claimed that climate change is among mankind's most pressing problems.  (See here and here.)  They arrived at their views by dint of forecasts that predict significant climate change owing to human activity.  Each has urged that developed nations take dramatic steps to alter their energy mixes.  Similarly, the world's central bankers, such as those at the Federal Reserve, the European Central Bank, the Bank of Japan, and the International Monetary Fund, regularly claim that their historically aggressive policies in the aftermath of the 2008 financial crisis are well grounded in what their elaborate computer models generate and, hence, are necessary and proper for the times.  On this view, any attempts to curb the independence of these institutions to pursue those policies should be resisted, notwithstanding that the final outcome of these historic and unprecedented policies is yet unknown.

It is simply not possible, however, to have much confidence in any of these claims.  The macroeconomic and climate systems are too complex to be captured well in any computer model, and forecasts derived from such models are therefore highly suspect.  At the least, a prudent level of humility and a considerable degree of caution are in order among government planners, certainly before they pursue policies that risk irreversible, unintended, and potentially very costly consequences.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Is Economics a Science?” by Theodore A. Gebhard — May 15, 2015

“Is Economics a Science?” by Theodore A. Gebhard

The great 20th Century philosopher of science, Karl Popper, famously defined a scientific question as one that can be framed as a falsifiable hypothesis.  Economics cannot satisfy that criterion.  No matter the mathematical rigor and internal logic of any theoretical proposition in economics, empirically testing it by means of econometrics necessarily requires that the regression equations contain stochastic elements to account for the complexity that characterizes the real world economy.  Specifically, the stochastic component accounts for all of the innumerable unknown and unmeasurable factors that cannot be precisely identified but nonetheless influence the economic variable being studied or forecasted.
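In symbols (the notation here is mine, not the author's), a typical empirical test estimates a relation of the form sketched below, where the stochastic term stands in for everything the model cannot identify or measure:

```latex
% A stylized regression equation: theory supplies the deterministic part,
% while the stochastic term \varepsilon_t absorbs all omitted influences.
y_t = \beta_0 + \beta_1 x_{1,t} + \beta_2 x_{2,t} + \varepsilon_t,
\qquad \varepsilon_t \sim (0, \sigma^2)
```

Because a failed prediction can always be attributed to a large draw of the error term rather than to the deterministic part of the equation, no single outcome can decisively refute the theory, which is the falsifiability problem described above.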

What this means is that economists need never concede that a theory is wrong when their predictions fail to materialize.  There is always the ready excuse that the erroneous predictions were the fault of "noise" in the data, i.e., the stochastic component, not the theory itself.  It is hardly surprising, then, that economic theories almost never die and, even if they lie dormant for a while, find new life whenever proponents see opportunities to resurrect their pet views.  Since the 2008 financial crisis, even Nobel Prize winners can be seen dueling over macroeconomic policy while drawing on theories long thought to be buried.

A further consequence of the inability to falsify an economic theory is that economics orthodoxy is likely to survive indefinitely irrespective of its inability to generate reliable predictions on a consistent basis.  As Thomas Kuhn, another notable 20th Century philosopher of science, observed, scientific orthodoxy periodically undergoes revolutionary change whenever a critical mass of real world phenomena can no longer be explained by that orthodoxy.  The old orthodoxy must give way, and a new orthodoxy emerges.  Physics, for example, has undergone several such periodic revolutions.

It is clear, however, that, because economists never have to admit error in their pet theories, economics is not subject to a Kuhnian revolution.  Although there is much reason to believe that such a revolution is well overdue in economics, graduate student training in core neoclassical theory persists and is likely to persist for the foreseeable future, notwithstanding its failure to predict the events of 2008.  There are simply too few internal pressures to change the established paradigm.

All of this is of little consequence if mainstream economists simply talk to one another or publish their econometric estimates in academic journals merely as a means to obtain promotion and tenure.  The problem, however, is that the cachet of a Nobel Prize in Economic Science and the illusion of scientific method permit practitioners to market their pet ideological values as the product of science and to insert themselves into policy-making as expert advisors.  Significantly in this regard, econometric modeling is no longer chiefly confined to generating macroeconomic forecasts.  Increasingly, econometric forecasts are used as inputs into microeconomic policy-making affecting specific markets or groups and even are introduced as evidence in courtrooms where specific individual litigants have much at stake.  However, most policy-makers — let alone judges, lawyers, and other lay consumers of those forecasts — are not well-equipped to evaluate their reliability or to assign appropriate weight to them.  This situation creates the risk that value-laden theories and unreliable econometric predictions play a larger role in microeconomic policy-making, just as in macroeconomic policy-making, than can be justified by purported “scientific” foundation.

To be sure, economic theories can be immensely valuable in focusing one's thinking about the economic world.  As Friedrich Hayek taught us, however, although good economics can say a lot about tendencies among economic variables (an important achievement), economics cannot do much more.  As such, the naive pursuit of precision by means of econometric modeling, especially as applied to public policy, is fraught with danger and can only deepen well-deserved public skepticism about economists and economics.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.

“Economics and Transparency in Antitrust Policy” By Theodore A. Gebhard — April 28, 2015

“Economics and Transparency in Antitrust Policy” By Theodore A. Gebhard

A significant turning point in antitrust thinking began in the mid-1970s with the formal integration of microeconomic analysis into both antitrust policy and antitrust litigation.  At that time, the Department of Justice and the Federal Trade Commission dramatically expanded their in-house economics staffs and ever since have increasingly relied on those staffs for strategic advice as well as technical analysis in policy and litigation.

For the most part, this integration of economics into antitrust thinking has been highly positive.  It has been instrumental in ensuring that the antitrust laws focus on what they are intended to do: promote consumer welfare.  Forty years later, however, economics has gone beyond its role as the intellectual undergirding of antitrust policy.  Today, no litigant tries an antitrust case without utilizing one or more economists as expert witnesses, and economic analysis has become the dominant evidence in antitrust enforcement.  In this regard, the pendulum may have swung too far.

Prior to the mid-1970s, economists, though creating a sizable academic literature, were largely absent in setting antitrust policy and rarely participated in litigation.  The result was that, for much of the history of antitrust, the enforcement agencies and the courts often condemned business practices that intuitively looked bad, but without much further consideration.  Good economics, however, is sometimes counter-intuitive.  Many of these older decisions did more to protect competitors from legitimate competition than protect competition itself.  Integrating sound economic thinking into enforcement policy was thus an important corrective.

Economic thinking has been most impactful on antitrust policy in two areas: unilateral business conduct and horizontal mergers.  Older antitrust thinking often conflated protecting competitors with protecting competition.  The most devastating critique of this confusion came from the so-called “Chicago School” of economics, and manifested itself to the larger antitrust legal community through Robert Bork’s seminal 1978 book, The Antitrust Paradox.  It is hard to exaggerate the impact that this book had on enforcement policy and on the courts.  Today, it is rare that unilateral conduct is challenged successfully, the courts having placed a de facto presumption of legality on such conduct and a heavy burden on plaintiffs to show otherwise.

Horizontal merger policy likewise had a checkered history prior to the mid-1970s.  Basically, any merger that increased market concentration, even if only slightly, was considered bad.  The courts by and large rubber-stamped this view.  This rigid thinking began to change, however, with the expanded roles of the economists at the DOJ and FTC.  The economists pointed out that, although a change in market concentration is important, it is not dispositive in assessing whether a merger is anticompetitive.  Other factors must be considered, such as the incentives for outside firms to divert existing capacity into the relevant market, the degree to which there are barriers to the entry of new capacity, the potential for the merger to create efficiencies, and the ability of post-merger firms to coordinate pricing.  Consideration of each of these economic factors was eventually formalized in merger guidelines issued in 1982 by the Reagan Administration's DOJ.  The FTC joined these guidelines ten years later, when they were amended to reach mergers that might be anticompetitive regardless of firms' ability to coordinate prices.

Each of these developments led to far more sensible antitrust policy over the past four decades.  Today, however, economic thinking no longer merely provides broad policy guidance but, in the form of highly sophisticated statistical modeling, increasingly serves as the principal evidence in specific cases.  Here, policy-making may now be exceeding the limits of economic science.  Friedrich Hayek famously described the difference between science and scientism, noting the pretentiousness of believing that economics can generate the kind of precision that the natural sciences can.  Yet the enforcement agencies are approaching a point where their econometric analysis of market data in certain instances may be considered sufficiently "scientific" to determine enforcement decisions without needing to know much else about the businesses or products at issue.

Much of this is driven by advancements in cheap computing coincident with the widespread adoption of electronic data storage by businesses.  These developments have yielded a rich set of market data that can be readily obtained by subpoena, coupled with the ability to use that data as input into econometric estimation that can be done cheaply on a desktop.  So, for example, if it is possible to estimate the competitive effects of a merger directly, why bother with more traditional (and tedious) methodology that includes defining relevant markets and calculating concentration indexes?  In principle, even traditional documentary and testimonial evidence might be dispensed with, being unnecessary when there is hard “scientific” evidence available.
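For readers unfamiliar with the "concentration indexes" mentioned here, the standard measure in merger review is the Herfindahl-Hirschman Index (HHI), the sum of squared market shares, compared before and after a proposed merger.  A minimal sketch follows; the market shares are hypothetical and purely illustrative:

```python
# Minimal sketch of the Herfindahl-Hirschman Index (HHI) screen used in
# merger review: the sum of squared market shares (shares in percent).
# The market shares below are hypothetical, for illustration only.

def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

pre_merger = [30, 25, 20, 15, 10]        # five firms in the relevant market
post_merger = [30, 25, 20, 15 + 10]      # the two smallest firms merge

print(f"pre-merger HHI:  {hhi(pre_merger)}")                    # 2250
print(f"post-merger HHI: {hhi(post_merger)}")                   # 2550
print(f"increase:        {hhi(post_merger) - hhi(pre_merger)}") # 300
```

The level of and change in the HHI serve only as a screen; the other factors discussed above, such as entry conditions, efficiencies, and the scope for coordination, still bear on whether a transaction is ultimately challenged.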

This view is worrisome for two reasons:  The first is the already stated Hayekian concern about the pretense of economic precision.  Any good statistician will tell you that econometrics is as much art as science.  Apart from this concern, however, an equally important worry is that antitrust enforcement policy is becoming too arcane in its attempt to be ever more economically sophisticated.  This means that it is increasingly difficult for businesspersons and their counsel to evaluate whether some specific conduct or transaction could be challenged, thus making even lawful business strategies riskier.  A basic principle of the rule of law is that the law must be understandable to those subject to it.

Regrettably, the Obama Administration has exacerbated this problem.  For example, some officials have indicated sympathy for so-called “Post-Chicago Economics,” whose proponents have set out highly stylized models that purport to show the possibility of anticompetitive harm from conduct that has not yet been reached by antitrust law.  Administration officials also rescinded a Bush Administration 2008 report that attempted to lay out clearer guidelines regarding when unilateral conduct might be challenged.  Although these developments have been mostly talk and not much action in the way of bringing novel cases, even mere talk increases legal uncertainty.

The Administration's merger policy actions are more concrete.  The DOJ and FTC issued new guidelines in 2010 that, in an effort to be even more comprehensive, multiplied the number of variables that can be considered in merger analysis.  In some instances, these variables will be resistant to reliable measurement and relative weighting.  The consequence is that the new guidelines largely defeat the purpose of having guidelines: helping firms assess whether a prospective merger will be challenged.  Thus, firms considering a merger must often do so in the face of substantially more legal uncertainty and must also expend substantial funds on attorneys and consultants to navigate the maze of the guidelines.  These factors likely deter at least some procompetitive mergers, thus forgoing potential social gains.

Antitrust policy certainly must remain grounded in good economics, and economic analysis is certainly probative evidence in individual cases.  But it is nonetheless appropriate to keep in mind that no legal regime can achieve perfection, and the marginal benefits from efforts to obtain ever greater economic sophistication must be weighed against the marginal costs of doing so.  When litigation devolves into simply a battle of expert witnesses whose testimony is based on arcane modeling that neither judges nor business litigants grasp well, something is wrong.

It is time to consider a modest return to simpler and more transparent enforcement policy that relies less on black-box economics that pretends to be more scientific than it really is.  To be sure, clearer enforcement rules would not be without enforcement risk.  Some anticompetitive transactions could escape challenge.  But procompetitive transactions that otherwise might have been deterred would now proceed, a social gain.  Moreover, substantial social cost savings can be expected when business decisions are made under greater legal clarity, when antitrust enforcement is administered more efficiently, and when litigation costs are substantially lower.  The goal of antitrust policy should not be perfection, but to maintain an acceptable level of workable competition within markets while minimizing the costs of doing so.  Simpler, clearer rules are the route to this end.

Theodore A. Gebhard is a law & economics consultant.  He advises attorneys on the effective use and rebuttal of economic and econometric evidence in advocacy proceedings.  He is a former Justice Department economist, Federal Trade Commission attorney, private practitioner, and economics professor.  He holds an economics Ph.D. as well as a J.D.  Nothing in this article is purported to be legal advice.  You can contact the author via email at theodore.gebhard@aol.com.