Turning Back the Clock: Structural Presumptions in Merger Analyses and Revised Merger Guidelines
Since 1950, when Congress closed a loophole in Section 7 of the Clayton Act, the federal antitrust agencies have investigated actively, and prosecuted diligently, mergers the government believed could be anti-competitive. In 1976, the Clayton Act was amended to require notification of many mergers to the agencies before consummation, allowing the government to sue to stop these mergers before they occur. Throughout the decades, merger review has become an elaborate, expensive process consuming vast resources; involving the merging parties, their attorneys, various experts, and those in the government; and rarely ending in judicial proceedings. The large majority of mergers the government opposed were either abandoned or settled with agreements requiring asset divestitures before consummation.
Prospective merger screening at the federal antitrust agencies has evolved, using advances in theoretical and empirical economics, to deemphasize structural tests in favor of an effects-based analysis. The agencies’ merger guidelines have changed with this evolution in economic knowledge and agency practice. The goal of guideline changes has been to increase the predictability and accuracy of the agencies’ merger screening, thereby decreasing the social costs of merger enforcement.
Strident critics of modern antitrust law, including merger policy, hold each key competition job in the administration of Pres. Joseph R. Biden Jr., including heads of both the Federal Trade Commission (FTC) and the Antitrust Division of the U.S. Department of Justice. President Biden recently decried modern antitrust law and policy, declaring that the 40-year “experiment failed.” To correct these “mistakes,” the antitrust agencies plan to replace the 2010 Horizontal Merger Guidelines and the 2020 Vertical Merger Guidelines (already withdrawn by the FTC) with a new enforcement approach.
Periodic revisions to the merger guidelines ensure that they reflect current agency practice, recent legal developments, and sound antitrust policy. Given the current administration’s desire to alter significantly how the agencies analyze mergers, changes to the guidelines are necessary to ensure that they accurately describe the new agency practice. It is less clear, however, whether the planned changes to antitrust enforcement and guidelines will reflect current law or sound antitrust policy. Although the precise nature, including the operational details, of the new guidelines is unknown at this writing, the agencies not only have made their disdain for the guidelines of the past 40 years known, but also have expressed their affinity for the pre-1980 merger law that modern guidelines have repudiated. Both their request for comment on the guidelines, one year ago, and a recent speech from FTC Chair Lina Khan show this affinity. The request relied almost entirely on pre-1980 law; Chair Khan’s speech was even more explicit.
In September 2022 at Fordham Law School, FTC Chair Khan discussed her work on revising the merger guidelines and stressed “fidelity to the law” as a guiding principle. She claims that, starting in the 1980s, the antitrust agencies “began straying” by sidestepping “controlling precedent and the statutory text, including the 1950 amendments” through “administrative fiat.” The law to which Chair Khan refers relied on strict structural presumptions to proscribe mergers. As shown here, merger law then did much more, reflecting a populist animus against mergers. The result was an era when the only consistency in the cases, as Justice Potter Stewart famously remarked, was that “the Government always wins.” The case law was incoherent, illogical, and, most important, anti-consumer, condemning bigness for its own sake, even when the mergers were not especially large or in concentrated markets.
In her speech, Chair Khan also notes that a post–World War II FTC study showing growing industrial concentration was “cited extensively by Congress as evidence of the danger to the American economy in unchecked corporate expansions through mergers” and was a “major driver in the passage of the 1950 amendment.” David Cicilline, then Chairman of the House Judiciary Committee’s Antitrust, Commercial, and Administrative Law Subcommittee and a leading critic of recent antitrust enforcement, also cites the same historical evidence. Yet, the FTC study showing growing concentration as a result of merger activity was methodologically flawed and wrong on the facts. Scholars convincingly demonstrated the flaws in the study and its conclusions, and shortly thereafter the authors of this FTC study on concentration even conceded it was wrong. Concentration was in fact not growing, from mergers or otherwise, and may actually have been decreasing. The problems with this study were known shortly before Congress passed the 1950 amendments. The courts obviously were wrong to rely on this discredited study in the 1960s, and it is more puzzling still that the current administration finds it useful to cite approvingly a flawed and discredited study today.
Critics of antitrust enforcement since 1980 also cite newer studies that show increasing industry concentration and claim that this increase is associated with increases in aggregate markups and decreased competition. This evidence has the same flaws as the discredited evidence used to support the structural approach to merger control from the 1960s that the Biden administration admires. Industrial organization economists have repeatedly shown that reliable inferences about the competitive dynamics in antitrust markets cannot be derived from measures of concentration or correlations between concentration and aggregate markups. These advances in theoretical and empirical economics undermined the economic core of the structural approach and eventually caused the agencies under both political parties to abandon that approach to merger control. Surely, new evidence with the same flaws cannot support a return to structural antitrust.
To provide background on the issues and show the fallacy in returning to reliance on strong and simple structural presumptions, section II begins with a brief description of the economic evolution of the effects-based approach contained in the 2010 U.S. Horizontal Merger Guidelines and 2020 U.S. Vertical Merger Guidelines. Section III then examines the flawed economic evidence cited to support returning to strong structural presumptions. Section IV next analyzes why the statutory text of the 1950 amendment and the post-1950 merger law do not support turning the clock back to structural presumptions. Section V concludes.
A Brief History of the Evolution of Merger Guidelines in the United States
Merger guidelines describe how the agencies evaluate mergers and make enforcement decisions, serving two major functions. First, guidelines provide greater transparency, predictability, and consistency to actual and potential merging parties. Well-functioning guidelines increase the accuracy and reduce the cost of the government’s merger review process. Second, this description promotes sound antitrust policy by exposing the analytical framework that the agencies use to evaluate mergers to the antitrust community, especially practitioners and scholars. This exposure facilitates critical theoretical and empirical analyses that help refine and improve merger enforcement and policy. Periodic revisions ensure that merger guidelines do not become outdated and that they evolve to incorporate both changes in agency practice and theoretical and empirical advances in economic analysis.
The history of merger guidelines in the United States reflects this evolutionary process. Part A describes this evolution, while part B discusses the importance of continued analysis and evaluation of the government’s merger enforcement program.
- The Evolution of Merger Guidelines
The first guidelines, published by the U.S. Department of Justice (DOJ) alone in 1968, viewed horizontal mergers that increased market concentration as “inherently likely to lessen competition,” using structural thresholds that reflected a “low tolerance for mergers” consistent with Supreme Court decisions at that time. The 1968 Guidelines focused on market structure, that is, concentration measured by market shares within defined markets subject to substantial entry barriers. This structural analysis was to be conclusive in all but “exceptional circumstances,” generally rejecting efficiencies as irrelevant. In “highly concentrated” markets (defined as markets where the four largest firms had a combined share of at least 75 percent), mergers between firms each with at least a 4 percent share (8 percent combined) would ordinarily be challenged. In markets not “highly concentrated,” mergers between firms each with a 5 percent share (10 percent combined) would ordinarily be challenged.
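The mechanical character of the 1968 screen can be illustrated in a few lines of code. This is a simplified sketch using only the thresholds quoted above (a 75 percent four-firm ratio and the 4/4 and 5/5 party-share triggers); the function names and reduced logic are ours, not the Guidelines', and the actual 1968 document contained a fuller schedule of thresholds.

```python
# Hedged sketch of the 1968 Guidelines' structural screen, as described above.
# Shares are percentage points. Names and simplified logic are illustrative.

def four_firm_ratio(shares):
    """Combined share of the four largest firms in the market."""
    return sum(sorted(shares, reverse=True)[:4])

def likely_1968_challenge(market_shares, share_a, share_b):
    """Apply the simplified 1968 screen to a merger of two firms."""
    if four_firm_ratio(market_shares) >= 75:      # "highly concentrated"
        return share_a >= 4 and share_b >= 4      # 4%/4% trigger
    return share_a >= 5 and share_b >= 5          # 5%/5% trigger

# Example: CR4 = 30+25+15+10 = 80, so a merger of two 4% firms
# would ordinarily be challenged.
print(likely_1968_challenge([30, 25, 15, 10, 4, 4], 4, 4))
```

The point of the sketch is how little information the screen consumes: market shares alone drive the outcome, with no role for entry, efficiencies, or demand conditions.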
At least for concentrated markets, this structural approach to merger enforcement was consistent with the results of the dominant form of economic research studying industry performance in the 1960s, the structure-conduct-performance (SCP) paradigm. Numerous economic studies used cross-sectional analyses and reduced-form regressions to study the relationship between inter-industry differences in concentration (usually measured by concentration ratios) and measures of industry performance (e.g., profits, margins, or prices). Many such studies found a positive association between indices of concentration and accounting measures of markups. As with the 1968 Guidelines, the levels of concentration that raised concerns were well below those reflected in later guidelines.
Crucially, the economic core of the simple structural approach crumbled because of theoretical and empirical advances in economics. Critics of the SCP literature noted that these observed associations between concentration and measures of performance do not identify the economic mechanism generating the relationship. Harold Demsetz and others demonstrated the failure of this approach to discern between competitive and anticompetitive outcomes. Others identified the lack of a coherent cross-industry theory of markets that would produce hypotheses that the SCP regression analyses could test. Empirical challenges to the SCP paradigm exposed the lack of robustness of their main empirical findings and demonstrated that the SCP paradigm’s empirical methodology failed to identify the causal effect of industrial concentration on market performance. Moreover, these works did not measure concentration in relevant antitrust markets. Rather, these studies used standard industrial classifications (SICs) that were much broader than relevant antitrust markets to calculate market shares and other concentration metrics. Still other studies questioned the use of accounting data in SCP papers and found that the traditional positive relationship between concentration and profits from the SCP regressions did not appear robust when better data were used to analyze this relationship.
The influence of the SCP paradigm evaporated when its empirical support evaporated. Perhaps the most effective empirical study came from Harold Demsetz, who sought to test the SCP paradigm on its own terms, accepting the premises of the SCP doctrine and the empirical studies that supported it. Demsetz examined whether the higher rates of return for large firms stemmed from less competition or greater efficiency. On the one hand, he argued that if the large firms have higher rates of return because of market power, then smaller firms in concentrated industries should also earn higher rates of return than smaller firms in unconcentrated industries because they would benefit from the lack of competition. On the other hand, he asked whether the large firms in concentrated industries were more profitable because they were more efficient, a term Demsetz defined broadly. In studying the profitability of smaller firms explicitly, Demsetz argued that if the explanation for the statistical finding of higher profits for the big firms was efficiency, then the smaller competitors, not as efficient as their larger brethren, would not have profits higher than those of smaller firms in unconcentrated industries. This simple but extraordinarily powerful test showed that the evidence supported efficiency, not market power.
The fall of the SCP paradigm undermined the economic core of antitrust merger policy based on simple market structure. Further, workable and credible solutions to the problems the critics identified with the SCP empirical studies proved elusive. Because of the inability to revive SCP, economists interested in studying market performance and mergers abandoned the SCP’s approach of using cross-sectional industry studies in favor of analyses of identifiable events in specific product markets. This literature has incorporated the credibility revolution in empirical economics that uses quasi-experimental empirical research designs, including the use of natural experiments, to identify a merger’s causal effects. Merger analysis at the agencies changed with this literature as the agencies deemphasized structure. The vast majority of antitrust reviews were resolved at the agency level rather than in the courts, and actual merger reviews were far more sophisticated than a simple structural approach based on market share screens. Although concentration still mattered, the agencies cleared or challenged mergers on the basis of a much richer evidentiary inquiry, including the ease of entry, the closeness of competitive substitutes, efficiencies, the reaction of customers of the merging parties to the proposed merger, and other relevant market facts.
The Horizontal Merger Guidelines evolved in parallel to reflect these changes in agency practice. Compared to the 1968 Guidelines, the 1982 Guidelines reduced the role of market concentration by expanding discussion of competitive effects and by stating an effects-based theme for merger enforcement—“mergers should not be permitted to create or enhance ‘market power’ or to facilitate its exercise.” The 1982 Guidelines also introduced more tolerant structural screens based on the Herfindahl-Hirschman Index (HHI). Reflecting their economic-centric orientation, the 1982 Guidelines further introduced several economic innovations that are still used in the current horizontal merger guidelines, including the articulation of economic principles for coordinated effects cases based on George Stigler’s A Theory of Oligopoly and the use of the Hypothetical Monopolist Test (HMT) to define antitrust markets.
The HMT was an especially important advance. The Supreme Court had previously offered only broad, qualitative guidance to define relevant antitrust markets. As discussed in section IV of this article, this guidance failed to produce a consistent and coherent framework of market definition and did not prevent the Court from defining indefensible and artificial markets. The HMT represented a breakthrough to determine the “playing field” to assess the specific competitive conduct at issue—whether it be mergers or monopolistic conduct. Simply stated, the HMT takes the smallest group of products and determines whether a hypothetical monopolist could impose a small but significant and non-transitory increase in price (SSNIP). If so, then the market is defined. Otherwise, the candidate market must be expanded, and the test is repeated until it is satisfied.
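The iterative structure of the HMT described above can be sketched as a simple loop. This is a conceptual illustration only: the function names are ours, and the profitability judgment (`ssnip_profitable`) is a placeholder for the evidentiary analysis the agencies actually perform.

```python
# Minimal sketch of the iterative Hypothetical Monopolist Test described above.
# `ssnip_profitable` stands in for the real-world profitability assessment.

def hypothetical_monopolist_test(candidate, further_substitutes, ssnip_profitable):
    """Expand the candidate product group until a hypothetical monopolist
    over it could profitably impose a SSNIP; return the defined market."""
    market = list(candidate)
    remaining = list(further_substitutes)
    while not ssnip_profitable(market):
        if not remaining:
            raise ValueError("no relevant market found among the candidates")
        market.append(remaining.pop(0))  # add the next-closest substitute
    return market

# Illustrative (assumed) facts: a SSNIP over {A} alone is defeated by
# substitution to B, but a SSNIP over {A, B} is profitable.
profitable_sets = {("A", "B"), ("A", "B", "C")}
is_profitable = lambda m: tuple(sorted(m)) in profitable_sets
print(hypothetical_monopolist_test(["A"], ["B", "C"], is_profitable))
```

The loop terminates at the smallest group of products satisfying the test, which is exactly the sense in which the HMT defines the competitive "playing field."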
Although the HMT represented an important conceptual step forward, the question of how to determine whether a hypothetical monopolist would find an SSNIP to be profitable still remained. In 1989, Harris and Simons introduced critical loss analysis to answer just this question. Their critical loss analysis represented a systematic and intuitive way to implement the HMT using data relatively easy to obtain. Their critical insight was that when a hypothetical monopolist raises price, there are two effects. First, profits increase from higher margins on sales that are still made. Second, profits decrease from lost sales (and lost margins on those sales). Implementing the test, which compared the two effects, required simply the size of the pre-merger margins and the chosen level for the SSNIP (typically 5 or 10 percent). With this information, one can calculate the “critical loss” that a hypothetical monopolist can lose (expressed as a percentage of sales lost) before the SSNIP becomes unprofitable. For example, for a 5 percent SSNIP and a 20 percent pre-merger margin, the critical loss would equal 0.05/(0.05 + 0.20) = 0.20. The analysis then requires that evidence of actual loss be generated to compare to the critical loss. The actual loss must be less than the critical loss (20 percent in the example) for an SSNIP to be profitable for the hypothetical monopolist and for a market to be defined.
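The critical loss arithmetic above reduces to a one-line formula, CL = s / (s + m), where s is the SSNIP and m the pre-merger margin (both as fractions of price). The short sketch below reproduces the article's own example; only the function name is ours.

```python
# Critical loss formula from the two-effect logic described above:
# the gain on retained sales (higher margin) versus the loss on lost sales.

def critical_loss(ssnip, margin):
    """Fraction of sales a hypothetical monopolist can lose before a price
    increase of `ssnip` becomes unprofitable, given pre-merger `margin`."""
    return ssnip / (ssnip + margin)

# The article's example: a 5% SSNIP with a 20% pre-merger margin.
cl = critical_loss(0.05, 0.20)
print(round(cl, 4))  # 0.2 -> SSNIP profitable only if actual loss < 20%
```

If the estimated actual loss falls below this 20 percent threshold, the SSNIP is profitable and the candidate market is defined; otherwise the group must be expanded and the test repeated.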
The 1992 Guidelines, the first jointly produced by both the DOJ and the FTC, added consideration of the unilateral effects of horizontal mergers in differentiated products markets to the 1982 Guidelines’ focus on coordinated effects. The 1992 Guidelines also revised the agencies’ treatment of both entry and efficiencies. For those working on mergers, the effect was to deemphasize concentration further and to strengthen the expectation, at least within the agencies, that the government would understand the likely anti-competitive consequences of a merger before acting.
The economic developments underlying unilateral effects analysis also inspired further change in implementing the HMT. Specifically, the 1992 version allowed for the SSNIP to be imposed on one, some, or all of the products controlled by the hypothetical monopolist. A firm-level approach to critical loss analysis also was developed to model critical loss explicitly as a firm-level optimization based on assumed functional forms for firm demand. This approach reformulated critical loss to make the actual loss endogenous and consistent with the differentiated products analysis used to analyze unilateral effects. This reformulation allowed the generation of bounds on the actual loss from a merger based on information the agencies normally use during an investigation (e.g., data on margins and diversion ratios). There is little doubt that the HMT and the use of critical loss continue to improve the predictability and usefulness of market definition. The agencies and the courts have widely adopted both to define relevant antitrust markets.
The 2010 Horizontal Merger Guidelines further extended the unilateral effects analysis that began in the 1992 Guidelines. These latest guidelines not only accurately describe two decades of agency practice, but also have been favorably cited by various courts and thus have influenced the development of the case law, including judicial acceptance of theories of harm based on unilateral effects. Decades after the first merger guidelines, the latest iteration of the Guidelines recognizes concentration as only a starting point for analysis, suggesting a broader modeling exercise to account for substitutes, entry, competitive interactions, and the nature of consumer demand. Moreover, the 2020 Vertical Merger Guidelines eschew structural screens and extend the unilateral effects analysis to vertical transactions.
- Evaluating the Efficacy of the Government’s Merger Enforcement Program under the Guidelines
A crucial question is whether the move toward an effects-based analysis actually increases the accuracy and predictability of antitrust enforcement. An effects-based antitrust analysis combined with broad antitrust standards and imperfect enforcement will still generate both type I (false prosecution) and type II (false non-prosecution) errors and may be more costly to administer. Indeed, many legal and economic commentators support a structural approach to merger litigation under section 7 of the Clayton Act; the Areeda and Hovenkamp treatise, for example, argues for simplification. Ultimately, it is an empirical question whether the unilateral and coordinated effects analysis in the modern merger guidelines performs better than a simple structural approach as in the 1968 Guidelines in identifying those mergers that produce anti-competitive outcomes. Moreover, even if the use of standards illuminated by complex models and data can theoretically outperform structural rules through lower error costs, the higher costs of administration could make a standards-based system more costly overall in practice.
Empirical methods that produce credible causal estimates of the competitive effects of mergers have become a critical and necessary input of any rational analysis of this policy. A common form of such an analysis uses event studies of consummated mergers involving competition in local or regional geographic markets (so that any price effects can be compared to a credible control). Indeed, the demonstration of large and anti-competitive price increases post-merger from such studies was an important part of the FTC’s response to a string of court losses challenging hospital mergers in the 1990s. One of the authors of this paper, Muris, then FTC chairman, formed the Merger Litigation Task Force in 2002 and tasked the agency staff members, especially the economists, with determining the exact effect in the relevant markets. The initiative led to several retrospectives of consummated hospital mergers, identifying two key issues. The first issue was the improper methodology many courts used to find large geographic markets that supported approval of the merger. The second issue was the not-for-profit status of the hospitals, because not-for-profit hospitals argued they would not raise prices. Merger retrospectives showed that these consummated hospital mergers involving both for-profit and not-for-profit hospitals resulted in large price increases in cases the government had challenged and lost. The initiative completely reversed how the agencies and the courts assessed hospital mergers. Government defeats were replaced with victories.
The FTC’s Bureau of Economics has also led efforts in conducting other retrospective analysis of transactions to examine the efficacy of merger enforcement and economic models used to analyze mergers. These analyses can help improve subsequent merger guidelines, providing courts and agencies with an increasingly reliable and predictable framework for studying the causal effects of mergers. The FTC’s Bureau of Economics and the DOJ’s Economic Analysis Group have produced more than 30 retrospective studies of mergers, with former staff of the FTC and the DOJ contributing many more.
These studies are the primary way that economists gauge the efficacy of enforcement. For example, Hosken, Olson, and Smith estimate the price effects of supermarket mergers in 14 U.S. markets in 2007 and 2008. The study uses variation in the levels of concentration in these markets to evaluate the structural thresholds contained in the 1992 Horizontal Merger Guidelines. Eight mergers are examined in highly concentrated markets (HHI above 2,500) and six in moderately concentrated markets (HHI between 1,500 and 2,500). They find that prices increased, relative to control markets, in 5 of the 14 markets, and 4 of the 5 occurred in highly concentrated markets. Prices in five markets decreased, with only one of the five occurring in highly concentrated markets. Prices seldom increased in unconcentrated markets following mergers.
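The HHI bands quoted above are simple arithmetic: the index is the sum of squared percentage market shares. The sketch below applies the 2,500 and 1,500 cutoffs mentioned in the study; the function names and the band labels as written are ours.

```python
# Illustrative HHI arithmetic for the concentration bands quoted above.
# Shares are percentage points; names and labels are ours.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares."""
    return sum(s * s for s in shares)

def concentration_band(shares):
    h = hhi(shares)
    if h > 2500:
        return "highly concentrated"
    if h >= 1500:
        return "moderately concentrated"
    return "unconcentrated"

# Four equal 25% firms: HHI = 4 * 625 = 2500.
shares = [25, 25, 25, 25]
print(hhi(shares), concentration_band(shares))
```

A two-firm market split 60/40, by contrast, yields an HHI of 5,200 and falls in the highly concentrated band, while ten equal firms yield 1,000 and fall in the unconcentrated band.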
This type of analysis is also important to gauge the accuracy of predictive methods used in an effects-based merger analysis. Retrospective analyses have examined unilateral effects models and merger simulation models and their predictions. For example, Garmon compares the predictions generated using pre-merger data to the actual post-merger price increases from retrospective analyses of numerous hospital mergers. He also compares the new screening tools agency economists use to evaluate hospital mergers to the HHI-based structural thresholds contained in the merger guidelines, concluding that the new models outperform structural approaches to merger enforcement.
Although there has been considerable progress in analysis, it is also clear that the screening tools are imperfect, suggesting that more is to be done. For example, Miller and Weinberg’s study of the MillerCoors joint venture, cleared by the DOJ without conditions in 2008, finds that the actual post-joint-venture prices of Miller and Coors products significantly exceeded the net prices predicted by the unilateral effects model. Moreover, they find significant post-joint-venture price increases for a product (Budweiser) produced by a close competitor. They conclude that the joint venture generated significant price increases through coordinated effects, a finding that led them to pursue further research updating the analysis of coordinated effects.
- The Economic Evidence Used to Support Antitrust Reform
Section II provides examples of evidence commonly used to evaluate current merger control as well as to guide reform. By contrast, critics of the effects-based regime ignore this evidence in favor of three types of evidence evaluated here. The first type, discussed in part A of this section, is the historical evidence of rising concentration that was a major driver of the 1950 amendments to the Clayton Act. Part B discusses recent studies showing rising concentration or rising profit margins, both used to infer an association between rising concentration and increased monopoly power. Part C evaluates summary evidence of individual retrospective studies of consummated mergers. This article shows that this evidence, evaluated either individually or collectively, does not support a return to a structural approach to merger control. Each type of evidence in parts A–C contains major shortcomings that prevent causal inferences about the effect of mergers and merger policy on economic performance. As a result, they provide neither empirically informed inferences about the effects of concentration on economic performance nor a basis for the agencies to expand use of the structural presumptions. Part D then examines the available retrospective evidence on vertical mergers, finding it consistent with the approach taken in the 2020 Vertical Merger Guidelines and inconsistent with their subsequent withdrawal by the FTC.
- Historical Evidence of Rising Concentration
As discussed in this article’s introduction, proponents of a return to structural antitrust cite historical evidence used to support the 1950 amendment to the Clayton Act. Advocates for the 1950 amendment claimed that concentration in the United States was very high and, more troubling, rising, as a contemporaneous wave of mergers caused small businesses to disappear into larger firms.
These factual claims relied principally on a 1948 FTC study that claimed the importance of mergers in “promoting concentration has never been more clearly revealed than in the acquisition movement that is taking place at the present time” and that the “outstanding characteristic of the current merger movement has been the absorption of smaller, independent enterprises by larger concerns.” From this asserted factual premise, the FTC gloomily foretold that collectivism awaited America unless the Clayton Act was amended to prevent large corporations from merging their way to economic domination. Further, as FTC Chair Khan noted, the FTC study was a “major driver in the passage of the 1950 amendment.”
The 1948 FTC study was wrong. Lintner and Butters examined the data from the FTC study and found that overall merger activity during the 1940s was small relative to the growth in the economy and the internal growth of firms. Thus, a careful analysis of the data used did not show that merger activity was increasing overall concentration. Moreover, they found that “mergers were a much less important source of growth for large companies than for small companies,” that merger activity among large firms was negligible, and that merger activity among smaller firms reduced the share of the largest firms. The authors also found multiple methodological flaws. The FTC study measured concentration by the share of assets held by large corporations instead of traditional antitrust metrics such as quantity of sales, revenues, or capacity. In addition, the FTC study’s measure of assets contained numerous accounting problems and accounted for only one-third of the assets in its data. Lintner and Butters conclude that their reexamination produced results that “essentially reverse[d]” the FTC study’s overall conclusion.
The Lintner and Butters study was published just before final passage of the Clayton Act amendments in 1950. Proponents of the legislation alluded to Lintner and Butters only briefly during the debate, dismissing their study. Tellingly, after the amendment passed, the FTC’s economists acknowledged, buried in a footnote, that the 1948 study was wrong, stating that “if the Commission had made any general statement on this point, it would probably have concluded, based on its data, that the recent mergers have not substantially increased concentration in manufacturing as a whole.” Clearly, Congress should not have relied on this discredited study in 1950, and this judgment applies a fortiori to anyone who chooses to rely now on this 75-year-old study to support changes to current merger policy. As shown later in this section and in section IV, this history appears relevant to the current policy debate, but it does not support structural presumptions; instead, the proponents of change again rely on flawed analysis.
Two final points about historical concentration. First, the measure the FTC and Congress used in the 1940s, aggregate concentration, is of little utility even if calculated correctly. That measure asks what share of the total economy large corporations control. Proper antitrust analysis uses the economic specifics of the relevant market at issue and the effects of a merger or another practice on competition within that market, not generalized misconceptions about the role of large firms in the economy. Thus, basing aggressive antitrust enforcement on aggregate concentration trends would be misguided, whether or not such data showed increases.
Second, beyond the question of whether mergers were increasing concentration, there was no real evidence that concentration itself was increasing at all during this time. Lintner and Butters found that the increase in aggregate industrial concentration in the data was minuscule, less than a 1 percent increase in total over eight years. Areeda and Hovenkamp concluded that “to the extent Congress believed that the post–World War II American economy had experienced a rising tide of concentration, it was probably wrong. While firms had grown in size, the markets that they served grew as well.” Professor Morris Adelman, a well-known industrial organization economist from the Massachusetts Institute of Technology, reached similar conclusions in 1951 when he reviewed the evidence to analyze changes in manufacturing concentration since 1901. In assessing whether any increase in aggregate concentration had occurred and recognizing the uncertainty given the limits in the data, he concluded: (a) the “odds are better than even that there has actually been some decline in concentration,” (b) it was “a good bet that there has been no actual increase,” and (c) the “odds do seem high against any substantial increase.” Thus, there was a lack of evidence for increased concentration in the 1940s, and the same general conclusion could also be reached for the first half of the century, after the substantial activity of the trusts decades before. In 1960, Professor Derek Bok concluded that the contemporary economic literature from the 1940s and 1950s found no trend toward increased concentration or harmful effects from mergers.
Subsequent research and analysis have largely confirmed that the postwar merger movement was rather harmless. Concentration likely did not increase significantly from 1940 to 1947 in more than a few industries, and there is a clear consensus that overall industrial concentration rose by no more than a point or two during this period, if that much. Indeed, very serious doubts exist whether concentration had increased at all since the formation of United States Steel Corporation at the turn of the century. Various writers have also concluded that the latter-day mergers were more innocuous than their predecessors. Though it is generally conceded that the great corporate combinations at the turn of the century often desired to control markets through a variety of means, anti-competitive motives became increasingly rare in later years, replaced by various tax, managerial, and commercial considerations of mostly neutral value for antitrust.
- Newer Studies of Increasing Concentration
Critics of antitrust enforcement since 1980, including those who now control Biden administration antitrust policy, also cite newer studies that show increasing industry concentration as well as evidence of a concurrent increase in aggregate markups. Yet, these newer studies suffer the same problems as the now-abandoned SCP paradigm discussed in section II of this article. By itself, an association between a measure of concentration and measures of firm profitability is consistent with either a reduction in competition from the exercise of market power or an increase in competition from more competitive firms enjoying greater success. Further, the crucial Demsetz study discussed in section II showed that such an association, even ignoring arguendo the flaws of the accounting data used, is consistent with efficiency rather than market power. The same holds for the newer literature: today’s evidence on the source of growing firm markups is consistent with efficiency, not market power.
Tellingly, the recent studies also suffer from the same basic flaws as the 1948 FTC historical concentration study discussed in part A: it is not clear that concentration, properly measured, has been rising. In particular, the modern studies use measures of national concentration based on industrial sectors that are much broader than relevant antitrust markets. When local geographic and product-based measures are used, concentration decreases. Moreover, despite the decades of economic learning discussed in section II of this article that warn against making causal inferences about changes in competition from associations between aggregate measures of concentration and markups, today’s call for enhanced concentration-based presumptions depends on a weak, unreliable form of such evidence—increasing national concentration and increasing average markups in broad industry sectors divorced from the actual product and geographic markets used in antitrust analyses. Because this newer evidence has the same problems as the discredited evidence used to support the old structural approach to merger control, one has difficulty seeing how such evidence supports a return to structural antitrust and the use of strong concentration-based presumptions. In fact, the newer literature is worse methodologically. Whereas the older SCP literature used a flawed approach that applied reduced form regression analysis to study the relationship between structure and economic performance, much of the current literature does even less. Instead, separate studies measure general trends in concentration and general trends in markups. Causality is inferred improperly from the fact that these trends are occurring roughly at the same time.
Supporters of a return to a structure-based law frequently cite three studies that show measures of concentration increased during parts of the past 40 years: Furman and Orszag (2018), a study published in The Economist in 2016, and an article by Autor et al. (2020). These studies show increases in their measures of national industrial concentration during various times throughout the final part of the 20th century and the start of the 21st century. Furman and Orszag use firm revenues from census data to calculate a 50-firm concentration ratio for two-digit North American Industry Classification System (NAICS) sectors from 1997 to 2007. The Economist’s study uses census data to calculate four-firm concentration ratios for four-digit NAICS sectors from 1997 to 2012. Autor et al. report changes in 20-firm and four-firm concentration ratios and Herfindahl–Hirschman Indexes (HHIs) for firms grouped in four-digit Standard Industrial Classification (SIC) codes from 1982 to 2012.
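For reference, the concentration measures these studies report can be computed directly from market shares. The following is a minimal sketch with hypothetical shares; the function names and numbers are ours for illustration, not taken from any of the studies.

```python
def concentration_ratio(shares, n):
    """n-firm concentration ratio: summed shares (in percent) of the n largest firms."""
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares (0 to 10,000)."""
    return sum(s ** 2 for s in shares)

# Hypothetical market shares, in percent, for illustration only.
shares = [30, 25, 20, 15, 5, 5]
print(concentration_ratio(shares, 4))  # CR4 = 30 + 25 + 20 + 15 = 90
print(hhi(shares))  # 900 + 625 + 400 + 225 + 25 + 25 = 2200
```

Note that both measures are meaningful only relative to the set of firms treated as "the market"; that definitional choice is exactly what the aggregation critique below turns on.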
One cannot make valid economic inferences about the effect of mergers or merger policy from these studies. Antitrust enforcement measures concentration based on properly defined antitrust markets, and the three studies define neither proper product nor geographic markets. Groups of firms in two- or four-digit national NAICS sectors or SIC codes are not the same as firms in modern antitrust markets defined by applying the Hypothetical Monopolist Test. The sector codes group firms according to broad industrial classifications (e.g., transportation and warehousing, retail trade); antitrust markets tend to be narrower (e.g., nonstop local-passenger airline service between Detroit and Philadelphia, consumable office supplies sold through office superstores). Werden and Froeb demonstrate empirically how two- or four-digit national NAICS sectors aggregate relevant antitrust markets, showing that concentration measures based on the aggregation of many potential antitrust markets within even the least aggregated census data sectors can be “over a hundred times too aggregated.” They conclude that concentration measures based on aggregated industrial sectors “are apt to mask any actual changes in the concentration of markets, which can remain the same or decline despite increasing concentration for broad aggregations of economic activity” and that the observation of increasing concentration under these measures thus does not indicate whether antitrust reform is needed.
National-level census data used to calculate concentration metrics also generate similar problematic aggregation issues when the relevant antitrust geographic market is local. Studies of concentration metrics based on local geographic markets show the problem. Rossi-Hansberg et al. (2020) use National Establishment Time Series data between 1990 and 2014 to document national and local concentration based on firms within an eight-digit SIC code level. While national levels of concentration in eight-digit sectors increased, concentration in these sectors, when measured locally, decreased between 1990 and 2014. Specifically, they find that “the more geographically disaggregated the measure of concentration, the more pronounced its downward trend over the last two and a half decades.” Although the use of sectors, even at the eight-digit level, precludes any strong inference regarding the nature of competition in antitrust markets from this study, it does illustrate the presence of serious geographic market aggregation problems in the national studies.
Similarly, Benkard et al. (2021) document problematic aggregation in broad sectors to compute concentration. They use brand purchase data from the MRI-Simmons Survey of the American Consumer from 1994 to 2019 to define product markets, finding that concentration declines both nationally and locally. The authors then aggregate these product markets into broad sectors, consistent with NAICS sectors, and find that concentration levels rise, both nationally and locally. Based on these results, the authors theorize that the decline in local concentration coupled with the observed increase in sector concentration could reflect increased competition in local geographic and product markets—that is, firms becoming more efficient and expanding into markets in which they previously did not sell. For example, with glue products, they note that Gorilla Glue entered the market in 1999 and increased its market share to above 30 percent in 2019, gaining a large fraction of the share lost by the dominant brands Elmer’s and Krazy Glue. They also observe that Gorilla Glue’s parent company had entered other product markets, such as skin care, by 2019. Similarly, they note that new entry by consumer goods giant Procter & Gamble decreased concentration in rubber gloves by taking a share from market leader Playtex.
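The aggregation mechanism that Benkard et al. and Rossi-Hansberg et al. describe can be illustrated with a stylized numeric sketch. All firms, revenues, and periods below are invented for illustration: an efficient firm expanding into a second product market lowers concentration within each market even as pooled "sector" concentration rises.

```python
def market_hhi(revenues):
    """HHI for one market, from a dict of firm -> revenue (shares computed internally)."""
    total = sum(revenues.values())
    return sum((100 * r / total) ** 2 for r in revenues.values())

def pool(markets):
    """Pool several markets into one broad 'sector', as census-based studies do."""
    sector = {}
    for m in markets:
        for firm, rev in m.items():
            sector[firm] = sector.get(firm, 0) + rev
    return sector

# Period 1: two separate product markets, each a 70/30 duopoly.
a1, b1 = {"F1": 70, "F2": 30}, {"F3": 70, "F4": 30}
# Period 2: F1 expands into market B; concentration falls WITHIN each market.
a2, b2 = {"F1": 60, "F2": 40}, {"F3": 50, "F1": 30, "F4": 20}

print(market_hhi(a1), market_hhi(b1))  # 5800.0 5800.0
print(market_hhi(a2), market_hhi(b2))  # 5200.0 3800.0 (both local HHIs fall)
print(market_hhi(pool([a1, b1])))      # 2900.0
print(market_hhi(pool([a2, b2])))      # 3150.0 (pooled 'sector' HHI rises)
```

The sector-level series thus moves in the opposite direction from every underlying market, which is why national sector trends cannot support inferences about competition in antitrust markets.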
Besides studies of concentration trends, proponents of structural tests commonly cite studies that show increases in firm markups. The frequently cited 2020 study by De Loecker et al. uses firm-level accounting data for publicly traded firms from 1950 to 2014 to show that markups for these firms increased since 1980 and to argue that this increase reveals an economy-wide increase in market power. As Demsetz found, however, increased markups can also be consistent with increased competition. Basic economic theory teaches that an increase in markups can be explained either by price increases or by marginal cost reductions, and distinguishing the two effects is essential for inferences about market performance and antitrust policy. If the latter effect drives the increases, then increased margins do not signal systemic increases in market power.
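The point can be made concrete with hypothetical numbers, taking the markup as the ratio of price to marginal cost (the convention in this literature). The figures below are invented for illustration only.

```python
def markup(price, marginal_cost):
    """Markup as the ratio of price to marginal cost."""
    return price / marginal_cost

# Two hypothetical paths to the same observed markup increase (1.25 -> 1.5):
# (a) market power: price rises while marginal cost stays flat;
# (b) efficiency: cost falls faster than price, and consumers pay LESS.
power_path = (markup(10, 8), markup(12, 8))      # (1.25, 1.5), price 10 -> 12
efficiency_path = (markup(10, 8), markup(9, 6))  # (1.25, 1.5), price 10 -> 9
print(power_path, efficiency_path)
```

The two paths are indistinguishable in markup data alone; separating them requires estimates of prices and marginal costs, which is what the study discussed next provides.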
Döpper et al. (2021) use Nielsen scanner data to estimate marginal costs, prices, and markups between 2006 and 2019 for hundreds of consumer product categories (e.g., beer) to measure whether higher prices or lower marginal costs were driving increases in markups. Their data show, consistent with De Loecker et al., that markups have increased over time. Nevertheless, they directly challenge De Loecker et al.’s conclusion that a systematic increase in market power caused the higher markups. Rather, their data show that marginal cost reductions largely drove the increases in aggregate markups. Far from showing increasing market power, Döpper et al. show that firms became more efficient, a result consistent with more competition, not less.
- Kwoka’s Merger Retrospective Summary
As discussed in section II of this article, merger retrospectives are commonly used as valuable evidence to test merger policy. Care is necessary, however, both in conducting merger retrospectives and in making broad inferences about merger policy from individual studies. As Carlton states, two basic requirements exist for a retrospective study to provide relevant information about agency merger policy and antitrust reform: (a) data on the relevant market both pre-merger and post-merger and (b) the specific predictions the government made about the post-merger outcome. A credible and causal retrospective design also requires reliable pre-merger and post-merger data on appropriate control firms. Consequently, “retrospective studies that ask whether prices went up post-merger are surprisingly poor guides for analyzing merger policy.” As the calls to reform antitrust increase—as does the belief that antitrust agencies were unable or unwilling to bring cases—this caution is more relevant than ever.
Some commentators have attempted to aggregate the results of retrospective analyses to argue that antitrust enforcement has been lax during the past 40 years. In particular, Kwoka performs a meta-analysis of existing merger retrospective analyses. He finds that the unweighted average price effect from the retrospective studies is a positive 4.1 percent. From this, he concludes that merger policy has been too permissive.
There are many reasons for caution about such inferences. As discussed in section II of this article, although merger retrospectives are useful to diagnose specific issues in court cases and can also validate the predictions generated by enforcement tools such as merger simulations, such studies are possible only in particular circumstances. The lack of data or a credible control group will prevent a credible study, and such studies can be done only for consummated mergers. Consequently, the set of analyses will not represent a random selection of mergers, and most mergers are not studied. Thus, any inference from a group of studies must carefully consider these selection issues and temper the policy inferences accordingly.
In particular, mergers that are successfully blocked or abandoned cannot be studied because post-merger data do not exist. Thus, Kwoka’s methodology cannot observe the rate and costs of type I errors (efficient mergers that were blocked under the agency’s merger standard). Because his analysis of retrospective studies can observe only one of the two types of error from any imperfect merger policy (type II errors—when anti-competitive mergers are allowed), one cannot answer the question of whether the standard for challenging a merger is too lax. Answering this question would require knowledge of how the sum of both type I and type II errors would change as the agency’s standard for blocking a merger changed. Indeed, while the unweighted average price increase is 4.1 percent, the median price increase is only 0.8 percent. Thus, the median merger has a price effect close to zero (whether that effect is statistically significant is unclear, because Kwoka does not report the standard errors needed to make that determination). Given the uncertain nature of enforcement, an optimal merger standard that balanced the costs of type I and type II errors might well have generated the distribution of price effects and observed rate of type II errors for consummated mergers that Kwoka finds.
Moreover, other issues with the study are apparent. Meta-analyses that combine a diverse set of studies generally weight the results of an individual study using the inverse of the standard error of the estimate. This method gives more weight to studies that generate precise estimates of the effect of the merger and underweights results from studies that generate imprecise estimates. As Vita and Osinski note, Kwoka fails to do this weighting. In addition, the study does not consider how the evolution of merger policy, reflected in the current 2010 Horizontal Merger Guidelines, has addressed the problems examined in merger retrospectives of transactions that occurred before merger policy was updated. Consider, for example, the retrospective studies conducted as part of the 2002 Merger Litigation Task Force discussed in section II of this article. Those studies were based on hospital mergers that the federal antitrust agencies challenged unsuccessfully. Thus, the resulting large price increases observed in these studies were not the result of lax agency enforcement. Moreover, as also described earlier, the 2002 Merger Litigation Task Force improved how the agencies and courts evaluated hospital mergers, reversing the agencies’ losing streak. Today, these mergers would likely be challenged and successfully blocked under current merger policy and merger law, and many of these transactions likely would not even be attempted under current antitrust policy. One has difficulty seeing how the observation of large price increases from pre-2000 mergers implies anything about the need to change hospital merger policy, the 2010 Guidelines, or merger enforcement that exists today.
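Both the mean-versus-median gap and the weighting issue Vita and Osinski raise can be shown with a small invented sample of study estimates (price effects in percent; one very imprecise outlier study). The numbers below are hypothetical and are not Kwoka's data; inverse-variance (1/SE²) weighting is the standard fixed-effect meta-analytic convention.

```python
from statistics import median

def inverse_variance_mean(effects, std_errors):
    """Fixed-effect meta-analytic mean: weight each estimate by 1 / SE^2,
    so precisely estimated studies count for more."""
    weights = [1 / se ** 2 for se in std_errors]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Hypothetical price effects (%) and standard errors; the 18% estimate
# is very imprecise (SE = 6) and should carry little weight.
effects = [0.5, 0.8, 1.0, 18.0]
ses = [0.5, 0.4, 0.5, 6.0]

print(sum(effects) / len(effects))                    # unweighted mean: 5.075
print(median(effects))                                # median: 0.9
print(round(inverse_variance_mean(effects, ses), 2))  # precision-weighted: 0.81
```

A single imprecise outlier dominates the unweighted mean while barely moving the median or the precision-weighted mean, which is the pattern that makes unweighted averaging a questionable basis for policy inferences.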
- Retrospective Evidence and Presumptions for Vertical Mergers
Although the primary focus of this article is horizontal mergers and guidelines, we discuss vertical mergers briefly here. The message is clear: no empirical or theoretical support exists for applying structural presumptions to vertical mergers. The price effects of vertical mergers are theoretically ambiguous, even before consideration of efficiencies, and are not predictably related to the structure of either the upstream or the downstream markets. In contrast to horizontal mergers that combine substitutes and generate upward pricing pressure absent efficiencies or competitive responses, vertical mergers that combine complements can generate downward pricing pressure through the elimination of double marginalization (EDM), even without efficiencies. Specifically, when two firms in a vertical chain set prices independently, both price above marginal cost. The result is higher consumer prices and lower output and joint profits than in a vertically integrated firm that coordinates the pricing decisions to eliminate the inefficient double margin. With vertical integration, including vertical mergers, one of the fundamental theorems of microeconomics, and also a first principle of antitrust law’s differential treatment of vertical as opposed to horizontal conduct, is that generally, “[c]ombining substitutes is bad, and combining complements is good, unless demonstrated otherwise.”
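The double-marginalization mechanism can be made concrete with the textbook linear-demand case. This is a stylized sketch under standard assumptions (linear demand, constant marginal cost, no other downstream costs); the parameter values are illustrative and not drawn from any case.

```python
def vertical_chain(a, c):
    """Linear demand q = a - p; upstream marginal cost c; no other downstream cost.
    Independent pricing: downstream maximizes (p - w)(a - p), so p = (a + w)/2;
    the upstream firm then faces q(w) = (a - w)/2 and sets w = (a + c)/2.
    Integration: a single firm maximizes (p - c)(a - p), so p = (a + c)/2."""
    w = (a + c) / 2
    p_indep = (a + w) / 2
    q_indep = a - p_indep
    profit_indep = (w - c) * q_indep + (p_indep - w) * q_indep  # two stacked margins
    p_int = (a + c) / 2
    q_int = a - p_int
    profit_int = (p_int - c) * q_int  # single margin
    return p_indep, p_int, profit_indep, profit_int

# With a = 10, c = 2: integration drops the retail price from 8 to 6,
# doubles output from 2 to 4, and raises joint profit from 12 to 16.
print(vertical_chain(10, 2))  # (8.0, 6.0, 12.0, 16.0)
```

Both the consumer price and joint profit move in the same pro-competitive direction, which is why EDM generates downward pricing pressure without any claimed efficiency.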
EDM is not the only pricing incentive of a vertical merger. Antitrust economics also recognizes that vertical integration, including vertical mergers, can generate incentives to foreclose rivals from inputs or customers to increase power over price through diversion of the harmed rivals’ sales, reducing consumer welfare. As the 2020 Vertical Merger Guidelines highlight, “[a] vertical merger may diminish competition by allowing the merged firm to profitably use its control of the related product to weaken or remove the competitive constraint from one or more of its actual or potential rivals in the relevant market.”
Notably, both economic theory and evidence show that the incentives from vertical mergers to engage in anti-competitive foreclosure or actions raising rivals’ costs (RRC) and the pro-competitive elimination of double marginalization create opposing, and inherently intertwined, effects on prices. Consistent with theory, the empirical literature estimating the unilateral price effects of vertical mergers using causal retrospective designs finds evidence of both EDM and RRC, with estimated effects on consumer welfare ranging from pro-competitive to mixed.
For example, Hosken and Taylor examine the recent staggered state-by-state exit of Exxon-Mobil from retailing. They measure the price effects, relative to control, from voluntary disintegration by this gasoline refiner in two states, finding that exiting retailing eliminated downward price pressure from EDM by about 1.2 cents per gallon, while eliminating upward pricing pressure from RRC by a similar magnitude, leaving the retail price of gasoline effectively unchanged. Given these results, the choice to exit retailing likely was not based on incentives from unilateral price effects, but on the considerations identified in the transactions cost literature.
Other studies show net gains to downstream consumers even when downstream competitors of the vertically integrated firm face higher input prices. For example, a recent analysis of the effects of vertical integration between cable distributors and regional sports networks concluded that “on average across 26 [regional sports networks], we find that there would be a statistically significant positive effect on consumer welfare from vertical integration, despite the incentives for foreclosure that it would create.”
Moreover, one survey of the recent empirical literature using credible causal retrospective studies to evaluate the effect of vertical mergers finds that nearly all studies that identified foreclosure effects show no corresponding decline in consumer welfare—evidence strongly consistent with the concurrent existence of pro-competitive EDM. Another survey of this literature finds the evidence more mixed, but the authors still conclude that “the economic literature demonstrates a variety of effects of vertical integration—including foreclosure and efficiencies—that justify examining vertical transactions on their merits rather than making general assumptions about their competitive effects.” Lafontaine and Slade similarly conclude “that while some vertical mergers may raise concerns, the evidence at this point does not provide sufficient guidance to develop presumptions that are related to strictly vertical issues.”
Given these results, the 2020 Vertical Merger Guidelines (VMGs) reflect then-current agency practice as well as the latest economic knowledge, with no structural screens. They treat any downward pricing pressure from EDM not as an efficiency but rather as a unilateral price effect to be assessed together with any upward pricing pressure from RRC. Predictions of net pricing pressure are not simply to be assumed. Rather, such predictions must be derived from real-world data, such as observed margins and diversions, and analyses that account for market realities, such as contracts, and the use of non-linear pricing that would affect these incentives. This requirement applies to pre-merger predictions about EDM, RRC, and the combined net effect on prices.
Nevertheless, the FTC unilaterally withdrew the 2020 VMGs. Despite the absence of empirical or theoretical support for applying structural presumptions to vertical mergers, the FTC majority’s withdrawal statement discusses plans to use “market structure–based presumptions for non-horizontal mergers.” They also objected to the 2020 VMGs’ treatment of EDM as “likely to exist.” The FTC majority’s explanation of their withdrawal asserted:
The VMGs’ emphasis on a non-statutory efficiency defense leads to their most significant flaw—their treatment of the elimination of double marginalization (EDM)…. The VMGs’ reliance on EDM is theoretically and factually misplaced. It is theoretically flawed because the economic model predicting EDM is limited to very specific factual scenarios: mergers that involve one single-product monopoly buying another single-product monopoly in the same supply chain, where both charge monopoly prices pre-merger and the product from one firm is used as an input by the other in a fixed-proportion production process. Yet outside this limited context, economic theory does not predict that EDM will create downward pricing pressure. [footnotes omitted]
This statement is puzzling. As just discussed, under the 2020 VMGs, EDM is not a statutory efficiency defense, but one of the unilateral pricing effects that must be considered. Moreover, the statement contains numerous economic errors. As Shapiro and Hovenkamp explain, these assertions are “flatly incorrect as a matter of microeconomic theory”:
EDM applies (a) to multi-product firms, (b) regardless of whether the firms at either level have monopoly power or charge monopoly prices, and (c) regardless of whether the downstream production process involves fixed proportions. All of this has been included in economics textbooks for decades … None of the conditions cited by the majority are required for EDM to apply, although they are clearly relevant when one is measuring EDM in a specific vertical merger. While EDM does not save every vertical merger, it should be part of any vertical merger inquiry and is not nearly as limited as the majority’s statement suggests. … [Moreover]…[i]n drafting its statement, the majority appears not to have consulted with the FTC’s own Bureau of Economics. As a result, we have the spectacle of a federal agency basing its policies on a demonstrably false claim that ignores relevant expertise.
- Warren Court Merger Law
As the introduction to this article notes, in a September 2022 speech, FTC Chair Khan stressed “fidelity to the law” as a guiding principle for merger guidelines. She argues that during the past 40 years, the agencies “sidestepped controlling precedent and the statutory text” by “administrative fiat.” Chair Khan’s arguments are without merit. This section considers first the statute itself and then the Warren Court’s merger decisions.
- The Statutory Text
Consider the statute first. Section 7 prohibits a merger where the “effect of such acquisition may be substantially to lessen competition, or to tend to create a monopoly.” Thus, the statute protects competition, but it does not define what counts as an effect on competition.
Modern guidelines analyze a merger in its totality to determine whether there is a likely anti-competitive effect. Most important, concentration is no longer sufficient; even a merger among leading firms in a concentrated industry may still pass muster for various reasons, including the merger’s justifications and the likely absence of anti-competitive effects. Concentration can matter, but it is not conclusive in evaluating competition. The statutory language supports this modern approach. Until the Biden administration, modern enforcers argued that this approach was the best reading of the statutory requirement of a likely lessening of competition. Competition, after all, is inextricably linked with the effect of business actions on those in the marketplace, especially the ultimate beneficiaries of competition—consumers.
Chair Khan also relies on precedent, criticizing her predecessors for lack of fealty to “controlling” case law. She quotes Judge Richard Posner, who observed, in upholding an FTC challenge to a hospital merger, that the agency did not rely on the Supreme Court’s section 7 decisions from the 1960s, except for United States v. Philadelphia National Bank, 374 U.S. 321 (1963), although none had been overruled.
Yet two paragraphs later, Judge Posner explained that “it was prudent for the Commission, rather than resting on the very strict merger decisions of the 1960s, to inquire into the probability of harm to consumers.” Judge Posner continued:
The most important developments that cast doubt on the continued vitality of such cases as [Brown Shoe Co. v. United States, 370 U.S. 294 (1962)] and [United States v. Von’s Grocery Co., 384 U.S. 270 (1966)] are found in other cases, where the Supreme Court, echoed by the lower courts, has said repeatedly that the economic concept of competition, rather than any desire to preserve rivals as such, is the lodestar that shall guide the contemporary application of the antitrust laws, not excluding the Clayton Act…. Applied to cases brought under section 7, this principle requires the district court (in this case, the Commission) to make a judgment whether the challenged acquisition is likely to hurt consumers, as by making it easier for the firms in the market to collude, expressly or tacitly, and thereby force price above or farther above the competitive level. 
Four years later, then-Judge Clarence Thomas, in an opinion joined by then-Judge Ruth Bader Ginsburg before both judges joined the Supreme Court, quoted Judge Posner approvingly in rejecting a DOJ challenge to a merger.
One of the authors of this article, Muris, was an FTC official in the 1980s who rejected reliance on previous anti-consumer merger law. The FTC then wanted to avoid citing the decisions that increasingly were disfavored and to show that enforcement was in fact changing. In any event, change was necessary because the judiciary had begun uniformly to reject the theories that had been accepted in the 1960s, into the 1970s. Indeed, the courts were forcing change on the government by the early 1980s. As discussed in sections II and III in this article, academics led the rejection of both populist merger policy and the economic theories then popular in the 1960s in industrial organization that attacked even modest levels of concentration. The courts next changed, dramatically so, as the FTC’s judicial record attests. Through 1976, the agency rarely lost, whether in mergers or elsewhere, except perhaps in Robinson–Patman Act cases. In cases decided in the next six years, however, the FTC won just 13 of 35 substantive decisions, only 8 of the 22 involving mergers. The agency was winning only 36 percent of its merger cases, a shocking record by historical standards, whether before or since.
Today, in the 30-plus years since Judges Posner and Thomas spoke, joined by Judge Ginsburg, the courts, led by the Supreme Court, have recast antitrust law. That change, across a broad spectrum of cases, has repudiated the populist, competitor-protection underpinnings of the merger law that Chair Khan praises in favor of standards protecting consumers. Although the Supreme Court has not spoken on a substantive merger analysis in a contested case in decades, there is every reason to expect that it will continue its long-standing promotion of consumer welfare when it next considers a merger.
Those who favor change today often rely on claims of Congressional intent supporting more aggressive merger policy. Contrary to statutory interpretations that were more prevalent in the 1960s, judges today rely much less on divining intent from legislative history and other nontextual sources and focus instead on the statute’s text. The key statutory passage, quoted earlier, rests on the meaning of “competition.” If Congress wanted courts to ignore economics and instead give primacy to whether a merger would harm competitors, then today’s courts are likely to insist on legislative text mandating that reading. Such language does not exist in section 7 of the Clayton Act.
- The Warren Court Merger Decisions
The major cases reveal on their own terms why they were repudiated across the antitrust spectrum until the arrival of the Biden enforcers. As one of the authors of this article, Muris, has documented in more detail, the 1960s in particular produced case law that was incoherent, illogical, and, most important, anti-consumer, condemning bigness for its own sake—even when the mergers were not particularly large and the industries were not concentrated.
The first case the Supreme Court decided under the 1950 amendments involved a merger between Brown Shoe and Kinney, challenged both horizontally and vertically. The Court found the vertical aspects illegal despite the lack of evidence of any potential harm to consumers; indeed, the potential foreclosure from the vertical merger was trivial. Perhaps recognizing that the opinion protected small businesses, not consumers, Chief Justice Earl Warren’s opinion for the Court tried to explain, in an infamous passage:
Of course, some of the results of large integrated or chain operations are beneficial to consumers. Their expansion is not rendered unlawful by the mere fact that small independent stores may be adversely affected. It is competition, not competitors, which the Act protects. But we cannot fail to recognize Congress’ desire to promote competition through the protection of viable, small, locally owned business. Congress appreciated that occasional higher costs and prices might result from the maintenance of fragmented industries and markets. It resolved these competing considerations in favor of decentralization. We must give effect to that decision.
In other words, the Court “protected” competition by protecting inefficient competitors, thereby harming consumers. In effect, the Court implied that a merger that would lower costs and prices through vertical integration would still be condemned if it offended the populist goal of decentralization. Such thinking would permeate subsequent litigation for years. At the FTC, the argument that vertical integration lowers prices and therefore harms rivals, and thus should lead to illegality, undergirded numerous cases through the 1970s.
While the Court’s vertical analysis most directly reflected populism, the horizontal discussion would also contribute to the economic incoherence in future cases. Brown was the third-largest retailer of shoes nationally, Kinney was the eighth largest, and the combined national market share was only 4.5 percent. The business was very unconcentrated; the 24 largest shoe retailers accounted for only 35 percent of sales. The DOJ did not appeal the trial court’s decision that the merger was a violation in a national manufacturing market. Instead, the horizontal issue before the Court was whether the merger reduced local retail competition illegally. The record suggests a handful of local markets in which the combined share could raise issues under modern standards. The modern approach to such local overlaps would be to divest one of the parties’ stores in the overlapping location to a third-party buyer, not to stop the entire merger.
But the Court’s discussion of horizontal retail overlaps ignored this approach. Instead, the Court listed 118 separate cities where the combined shares exceeded 5 percent for men’s, women’s, or children’s shoes and 47 cities where the share exceeded 5 percent in all three products. A combined retail share of 5 percent could not raise meaningful competitive issues, yet the Court declared it could not approve such a merger. That declaration would form the starting point and standard for the many merger cases to come.
The next Supreme Court decision, Philadelphia National Bank, used a significantly different approach than Brown Shoe, and it was the one opinion that Judge Posner said the Reagan Administration continued to cite. It created the legal framework for merger review that remains relevant today, especially in litigation. Compared to Brown Shoe, Philadelphia National Bank involved firms with significantly higher market shares, at least 30 percent, with an opinion that relied on market share presumptions then favored in antitrust economics. Under the decision, a merger is presumptively illegal if it “produces a firm controlling an undue percentage share of the relevant market, and results in a significant increase in the concentration of firms in that market” because such a merger “is so inherently likely to lessen competition substantially.” Such a presumptively illegal merger “must be enjoined in the absence of evidence clearly showing that the merger is not likely to have such anticompetitive effects.” As the Court concluded, this framework for litigation dispenses with “elaborate proof of market structure, market behavior, or probable anticompetitive effects.” This simple approach was justified in significant part on the basis of the false populist factual predicate in the Congressional history of a trend toward concentration discussed in section III of this article.
In practice, the Philadelphia National Bank framework has meant that once a market is defined and market shares calculated, the merger will be found presumptively illegal if the shares meet certain thresholds. During the past 60 years, once the presumption was triggered, only rarely have other competitive factors saved a merger in court.
As discussed previously, the vast majority of antitrust reviews have been resolved at the agency level, where actual merger analyses were far more sophisticated than the simple Philadelphia National Bank framework. Settlements through consent decrees were far more common than litigation. Nonetheless, if an agency challenged a merger and a remedy could not be agreed on, litigation over the merger usually devolved to the Philadelphia National Bank framework. In court, the FTC and the DOJ typically downplayed the sophisticated, effects-focused analysis used internally and argued that simple merger screens create a presumption of illegality under Philadelphia National Bank. Because those screens can be outcome determinative, merging parties often defended on issues such as market definition, which the government must establish before the Philadelphia National Bank presumption can be invoked to control the case.
Simplifying merger review in the courts can preclude a detailed analysis of overall competitive effects. To minimize error, the framework depends greatly on properly defined markets and on setting appropriately the market share thresholds for applying the presumption of illegality. Otherwise, the simplified framework can greatly overdeter procompetitive and efficient mergers. The Supreme Court exacerbated the problem in the 1960s and early 1970s by ignoring economics and consumer welfare to pursue an aggressive populist agenda of “big is bad.” This agenda was effectuated both by lowering the market share thresholds and through arbitrary market definitions so that the calculated market shares satisfied the thresholds.
The Court’s decisions in United States v. Von’s Grocery Co. and United States v. Pabst Brewing Co. illustrate the first effect. In these two cases, the Court lowered the threshold for finding a merger presumptively illegal under Philadelphia National Bank to the 5 percent range that Brown Shoe had treated as sufficient to find a substantial lessening of competition. Von’s Grocery, decided in 1966, became the most notorious. There, the Court blocked a merger between two grocery chains, with a combined share of 7.5 percent, in the highly competitive, and rapidly growing, Los Angeles area. It was Von’s Grocery that prompted Justice Potter Stewart’s famous quip that the “sole consistency that I can find is that, in litigation under § 7, the Government always wins.” While lower courts would read Von’s Grocery as consistent with lowering the share thresholds under Philadelphia National Bank to single digits, and the later Pabst Brewing decision would apply the presumption to combined shares of 4.5 percent, the majority opinion in Von’s Grocery did not rely on the presumption. Instead, the opinion was pure populism, emphasizing the decline of individual stores in Los Angeles and the simultaneous rise of grocery chains and supermarkets. To the Court, “powerful business combinations” were driving out of business “‘small dealers and worthy men.’” These trends in the Los Angeles grocery store business increased the “concentration of economic power in the hands of a few,” contrary to a desire to “preserve competition among a large number of sellers.” Such “fear of the evils which flow from monopoly,” the Court argued, was why Congress amended Section 7.
In fact, the trends on which the Court focused simply reflected technological change and lower costs, not monopoly and concentration of economic power. The majority ignored that operation of larger chains was more efficient and that consumers preferred them. As Justice Stewart also noted in his dissent, the Court’s decision was simply a populist attempt to turn back the clock:
Section 7 was never intended by Congress for use by the Court as a charter to roll back the supermarket revolution. Yet the Court’s opinion is hardly more than a requiem for the so-called “Mom and Pop” grocery stores—the bakery and butcher shops, the vegetable and fish markets—that are now economically and technologically obsolete in many parts of the country. No action by this Court can resurrect the old single-line Los Angeles food stores that have been run over by the automobile or obliterated by the freeway. The transformation of American society since the Second World War has not completely shelved these specialty stores, but it has relegated them to a much less central role in our food economy. Today’s dominant enterprise in food retailing is the supermarket. Accessible to the housewife’s automobile from a wide radius, it houses under a single roof the entire food requirements of the family. Only through the sort of reactionary philosophy that this Court long ago rejected in the Due Process Clause era can the Court read into the legislative history of § 7 its attempt to make the automobile stand still, to mold the food economy of today into the market pattern of another era.
Lowering the share threshold for applying the presumption stripped it of any claim to economic coherence. As Areeda and Hovenkamp summarize:
Beginning with Philadelphia National Bank, later decisions made “undue” or “substantial” market shares presumptive proof of illegality, apparently rebuttable only by proof that the acquired firm were a “failing company.” Moreover, the threshold of “substantial” aggregate shares fell quickly from the 30 percent figure in Philadelphia National Bank to 7 or 8 percent in Von’s Grocery and to 4.5 percent for a majority of the Court in Pabst Brewing. Thus, the Department of Justice in its 1968 Merger Guidelines could confidently state that absent a “failing company” defense, it ordinarily would challenge a merger involving two firms each with 4 percent or more of a highly concentrated market, or 5 percent or more in any market.
No economic basis existed to believe that an industry with this many sellers could have noncompetitive behavior. As the Areeda and Hovenkamp treatise notes: “Reducing the numbers of sellers in a market from 1,000 to 100 or even to 50 is not an economically meaningful increase in concentration. Fifty firms are far too many for recognized interdependence or for other than the most overt and readily detectable collusive price fixing.”
Besides lowering the share thresholds, the Warren Court’s cases manipulated market definitions. In the first two Supreme Court cases on Section 7 decided after Philadelphia National Bank—United States v. Aluminum Co. (Rome Cable), and United States v. Continental Can Co.—the Court defined implausible, gerrymandered markets. Even under the Brown Shoe standard that a very low market share was sufficient to find illegality, these cases lacked the necessary share under a proper market definition, and the government had lost both at trial when the courts rejected its proposed definitions. Because the cases involved large companies before a Court inclined to oppose bigness for its own sake regardless of whether the merger harmed consumers, the Court gerrymandered the market definitions.
United States v. Continental Can Co. illustrates the sleight of hand. Continental Can, with about a 33 percent share of metal container sales, acquired Hazel-Atlas Glass Company, the third-largest provider of glass containers at about a 10 percent share. Because Continental Can sold no glass containers, while Hazel-Atlas sold no metal ones, a market defined as glass or metal containers separately would lack competitive overlap. Sellers of both metal and glass containers, however, competed not only with each other, but also with sellers of other types of containers, especially plastic.
At trial, the government’s theory was that it could prove its case simply by showing that there was substantial competition between metal and glass containers, that the metal and glass industries were separately highly concentrated, and that the merging parties were each “dominant” in their respective industries. The trial court concluded that the government had not shown a reasonable probability of substantial anti-competitive effects. Tellingly, among other problems, there was little evidence that the merging parties overlapped significantly in competition between metal and glass containers for various end-use customers.
On appeal, the Supreme Court rejected the government’s theory, but reversed nevertheless. It used Philadelphia National Bank by constructing an artificial market definition consisting of the sales of only metal and glass containers. In this market, Continental Can had a 21.9 percent share and Hazel-Atlas had a 3.1 percent share, sufficient to find the merger illegal under Philadelphia National Bank.
Justice John Marshall Harlan dissented, stating that the government had not even argued this as a proper market definition nor suggested it seriously on appeal. The majority’s approach, he argued, was arbitrary in including metal and glass but excluding other competitive packaging such as plastic, because “‘glass and metal containers’ form a distinct line of commerce only in the mind of this Court.” Justice Harlan called the majority’s market definition one in which it “chooses instead to invent a line of commerce the existence of which no one, not even the Government, has imagined; for which businessmen and economists will look in vain; a line of commerce which sprang into existence only when the merger took place, and will cease to exist when the merger is undone.”
The Court’s reasoning shows how the Philadelphia National Bank framework could be used to achieve populist ends. Where the merging firms did overlap, setting market share thresholds very low achieved the populist objection to bigness, as happened in Brown Shoe and in subsequent case law. Yet, because the Court objected to bigness even in cases where the merging firms were large but did not compete significantly, it conjured market definitions to generate the market shares needed to block mergers involving large firms. Justice Harlan concluded that the “Court’s spurious market share analysis should not obscure the fact that the Court is, in effect, laying down a ‘per se’ rule that mergers between two large companies in related industries are presumptively unlawful under § 7.”
Another problematic aspect of the Warren Court decisions was hostility to efficiency, as in Brown Shoe. Into the 1970s, being more competitive was often considered bad, prompting lawyers for the merging parties to deny that the merger had any such pro-consumer benefits. Thus, one study of FTC administrative litigation in the 1970s found eight of eighteen cases in which the presence of efficiencies was used as a basis for illegality or their absence as a basis for legality. No case had considered efficiencies as supporting legality.
It is this aberrant case law, long condemned across all parts of the antitrust community, that President Biden’s antitrust leadership used for significant support as they began to rewrite the merger guidelines. Thus, the January 2022 FTC-DOJ request for information on merger enforcement contains 15 references to merger decisions, 12 of which are from before the 1980s, five from Brown Shoe alone. And two of the three more recent cases are cited to support propositions more consistent with the older era of merger enforcement than with modern law. Ten of the 12 older cites are to cases in the 1960s, the height of agency and judicial deviation from applying modern economics to mergers. Although there are non-controversial aspects of that law, that document’s disregard of intervening case law, especially from the disfavored 40 years, appeared neither accidental nor promising for the future well-being of American consumers. Chair Khan’s September 2022 speech is even more explicit in expressing “fidelity” to the “controlling precedents” discussed in this section.
Of course, one does not know what the actual merger guidelines the Biden appointees produce will say. One does know their condemnation of modern guidelines and their fealty to the law of another era. Rather than write such law into their guidelines, they may follow the example of the FTC in its recently released statement on unfair methods of competition. There, despite case law that the new leadership claims provides ample precedent for using Section 5 of the Federal Trade Commission Act beyond the Sherman and Clayton Acts, the new FTC statement serves mainly to give the agency discretion over what practices it can attack.
When elected, Joe Biden could have turned to veterans of the Obama and Clinton administrations for the major competition jobs. Because he was vice president in the former, this course would have been unsurprising. Instead, he condemned both of those prior Democratic administrations as part of the 40-year “experiment failed.” His two agency leaders and White House Competition Council adviser call themselves Neo-Brandeisians and stoutly reject the bipartisan consensus that had developed in antitrust law in prior decades.
Affinity for the famous Louis Brandeis reveals an important part of their philosophy. Former White House Competition Council adviser Tim Wu titled his recent book The Curse of Bigness, in homage to Louis Brandeis’s famous 1914 essay in Harper’s Weekly, “A Curse of Bigness.” Unfortunately, the Neo-Brandeisians do not appear to share the original’s passion for empiricism. The Brandeis brief, marshaling the facts and empirical record on a particular issue to place before the judiciary, was one of the most famous, and successful, innovations of Louis Brandeis before he became Justice Brandeis. Individuals who rely on studies of increased concentration that the authors themselves had repudiated, as with the 1948 FTC study, would not seem to share that empirical rigor. Moreover, the original Brandeis admitted that his preference against bigness would come with a price, as when he supported taxation of chain stores while recognizing the costs that such taxes might entail.
One of the most important attributes of the 40 years was the shared belief among judges, enforcers, and practitioners that the consumer was at the center of antitrust, guided by economic analysis. This analytical framework, often called the consumer welfare standard, is anathema to the Neo-Brandeisians, who regard it as contributing to an alleged pro-business bias they reject. Because Warren Court merger law shared this characteristic of ignoring economic analysis and evidence when it conflicted with populist impulses, one is not surprised that the Biden officials find those cases attractive. The pro-government results are another obvious, appealing characteristic of the 1960s decisions.
Those decisions exhibited considerable deference to the government’s prosecutorial decisions. The FTC’s recent statement on unfair methods of competition is best understood in these terms. Rather than providing guidance to the business community on how the FTC will apply its statute beyond the antitrust laws, as Commissioner Christine S. Wilson’s dissent discusses in detail, the statement claims broad discretion for the agency to “know it when [it sees] it.” Thus, one would not be surprised if the agencies seek broad discretion in their new guidelines.
A return to strong, simple structural presumptions would provide both guidance and agency discretion. However attractive old Supreme Court decisions may make structural presumptions, such guidance would be wrong on the facts, wrong on the law, and wrong on policy. Wrong on the facts because, as in the 1960s with the simple market concentration doctrine and with reliance on the FTC’s fatally flawed 1948 study, there is no empirical support today to fear harm from rising concentration and, hence, no support for renewed structural rules. Wrong on the law because, led by the Supreme Court, the courts have rejected the anti-consumer Warren Court antitrust jurisprudence. Although the Court has not decided a contested case involving substantive merger analysis in nearly 50 years, there is every reason to believe that it would apply its consumer-centric approach used elsewhere to mergers. Finally, wrong on policy because modern merger law, well reflected in the 2010 Horizontal Merger Guidelines, their acceptance by the judiciary, and the daily practice before the agencies, also reflects that consumer-centric philosophy.
Antitrust is at the proverbial road fork, with revolutionaries who hold key government positions demanding a return to policies rejected long ago. Yet, those policies were rejected for sound reasons, often based on hard-won experience. Unfortunately, the Biden leadership seems intent on forcing the antitrust world and consumers to relearn those painful lessons.
* Bruce H. Kobayashi is currently the Paige V. and Henry N. Butler Professor of Law and Economics at the Antonin Scalia Law School, where he has been on the faculty since 1992. He was Director of the Federal Trade Commission’s (FTC) Bureau of Economics from May 2018 to December 2019. He also was the founding director of the Global Antitrust Institute and has taught antitrust economics to hundreds of foreign competition officials and judges. Kobayashi has also served as an instructor for the Law and Economics Center’s Judicial Education Program at George Mason University, where he has taught economics to hundreds of U.S. federal and state judges. He has authored over 70 books, monographs, and articles addressing issues in law and economics, including the application of economics to antitrust, intellectual property, and consumer protection law.
Timothy J. Muris is currently a George Mason University Foundation Professor of Law at the Antonin Scalia Law School, where he has been on the faculty since 1988. He is also senior counsel at Sidley Austin, LLP. Muris was chairman of the Federal Trade Commission from June 2001 through August 2004, Director of the FTC’s Bureau of Consumer Protection from 1981 to 1983, Director of the Bureau of Competition from 1983 to 1985, and Assistant to the Director of the Office of Policy Planning and Evaluation from 1974 to 1976. He is the only person to serve as Director of both of the FTC’s enforcement Bureaus. Muris is also the author of over 130 books, monographs, and articles addressing issues in economics and law, especially antitrust law, raised over the course of his work at, study of, and advising clients before the FTC.
The authors thank the Competitive Enterprise Institute, Alden Abbott, Tad Lipsky, Ryan Young, and John Yun for comments on earlier drafts. All errors are our own.