GCR Q3 2019

When robots collude

Charley Connor

27 September 2019

Today, cartelists are less likely to meet in a smoke-filled room than in a chat room. Is the next step for them to avoid meeting at all? Charley Connor explores how artificial intelligence could enable companies to avoid both antitrust liability and competition.

Intelligent robots are easy to identify in films. C-3PO from Star Wars, for example, shines in analytic ability but is humorously flawed in his lack of understanding of the workings of the human mind. In less lovable forms, artificial intelligence (AI) is often presented as a whirring, metallic machine with staccato, unrealistic movements. Its threat is clear from its unnaturalness, its lack of humanity.

However, science fiction has also shown an even more unnerving concept: artificial intelligence that goes unnoticed, that is treated as human because it can blend in with humans. Take Maria, the beautiful yet murderous robot of Fritz Lang’s 1927 film Metropolis; or the android assassins of the Terminator series.

The unsettling AI of science fiction is now colliding with real-world legal principles of liability and intent, as antitrust authorities try to identify and combat collusive pricing algorithms. After years of debate, academics, enforcers and practitioners still cannot say when – or even whether – those questions will have to be answered in practice.

First contact

Enforcement in this area first became public in 2015, when a former online seller of wall décor admitted to conspiring to fix poster prices on Amazon Marketplace through a pricing algorithm, in what the US Department of Justice (DOJ) said was its first online marketplace prosecution. The UK’s Competition and Markets Authority (CMA) soon brought a parallel case against two retailers for using collusive pricing algorithms to compare and automatically adjust the prices of posters they sold on Amazon. Because it had evidence of a clear anticompetitive agreement made between company executives, the CMA determined that the companies had operated a cartel.

According to the DOJ’s criminal complaint, David Topkins and his co-conspirators agreed to adopt specific pricing algorithms for certain posters, with the goal of coordinating price changes, and he wrote computer code that instructed company A’s algorithm to set prices in accordance with the fix. Commercially available algorithm-based pricing software operates by collecting competitor pricing information for a specific product sold on Amazon Marketplace and applying pricing rules set by the seller.
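To make that mechanism concrete, a minimal sketch of how such rule-based repricing software might behave appears below. The particular rule – undercut the cheapest rival by a cent, never below a floor price – and the figures are illustrative assumptions, not details from the case.

```python
# Illustrative sketch of rule-based repricing software of the kind described
# above. The specific rule (undercut the cheapest rival by one cent, never
# below a floor price) is a hypothetical example, not the rule from the case.

def reprice(competitor_prices: list[float], floor: float) -> float:
    """Return a new listing price from rivals' prices and a seller-set rule."""
    if not competitor_prices:
        return floor
    cheapest_rival = min(competitor_prices)
    # Undercut the cheapest competitor by a cent, but never sell below the floor.
    return max(round(cheapest_rival - 0.01, 2), floor)


# Example: rivals list the same poster at $14.99, $15.49 and $16.00.
print(reprice([14.99, 15.49, 16.00], floor=9.00))   # prints 14.98
```

The point is that a human seller supplies the rule; in the poster case, the executives’ agreement on what that rule should be was the decisive evidence.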

The prosecutions of the poster sellers sparked a flurry of speculation about future cartel enforcement involving artificial intelligence. The case was heralded as a “cartel for the digital age”. It was the topic of panel discussions at conferences; referenced by agency heads during speeches; and even used as a case study in the UK’s 2017 report to the Organisation for Economic Co-operation and Development. Yet competition authorities have since been reluctant to bring cases, and some officials have cautioned against blurring the line between collusion and smart algorithmic pricing.

Instead, enforcers are still debating how – and if – they can intervene to prevent collusion absent a human “meeting of the minds”. It is unclear whether authorities can penalise a company that never explicitly asked its pricing algorithm to collude. Should companies be liable for the anticompetitive conduct of their robots?

Ex machina

The Competition Commission of India illustrated the difficulty of proving that the use of algorithms constitutes collusion when it dismissed a complaint against ride-hailing platforms Ola and Uber in 2018. The complainant, an independent lawyer named Samir Agrawal, alleged that Uber and Ola’s algorithms artificially manipulate supply and demand, guaranteeing higher fares for drivers who would otherwise compete against each other on price. This arrangement is akin to a “hub-and-spoke” price-fixing cartel, he claimed, with the rideshare companies’ algorithms acting as a hub for collusion between the driver spokes. Agrawal also alleged that Uber and Ola’s pricing models represented a minimum resale price maintenance agreement between the companies and their drivers.

But India’s competition authority dismissed both claims. It said the ride-hailing platforms’ pricing algorithms could not be compared to a traditional hub-and-spoke cartel, which typically involves a set of vertical agreements between retailers (spokes) and a common supplier (the hub). The hub serves as a middleman to facilitate collusion between the spokes, which lack direct agreements among themselves.

Agreeing to the algorithmically set prices does not constitute collusion between the drivers, the Indian enforcer said in its decision. “In the case of ride-sourcing and ride-sharing services, a hub-and-spoke cartel would require an agreement between all drivers to set prices through the platform, or an agreement for the platform to coordinate prices between them,” the commission said – and no such agreement among the drivers was in evidence.

The enforcer dismissed the resale price maintenance claim because no “resale” takes place when Uber and Ola match drivers to consumers. App-based taxi companies do not sell any service to drivers that the drivers resell to the riders. Assuming that an algorithm-determined price will always be higher than the prices negotiated on an individual trip basis – thereby eliminating price competition – is an “erroneous conclusion” unsupported by evidence, the authority said.

A US user of Uber similarly sued the company in New York federal court in 2015, alleging that Uber’s pricing algorithm amounted to a hub-and-spoke conspiracy. Although the case survived an initial motion to dismiss before Judge Jed Rakoff, an appeals court ruled in 2017 that the claim was subject to arbitration.

The meagre enforcement against pricing algorithms accused of facilitating collusion suggests that antitrust authorities aren’t sure how to tackle this conduct in the absence of evidence of clearly anticompetitive intent.

The earth stood still

Algorithmic pricing is clearly throwing up uncertainties for enforcers, particularly when they are confronted with technological advances that could give machines the ability to collude without human intervention. How can authorities penalise conduct if they cannot tell whether a human or a machine is making the unlawful decisions?

Enforcers in Europe, the United States and Australia have all recently considered the issue – and all admit that competition policy may not be prepared to deal with this type of market manipulation. The work undertaken by these authorities shows that enforcers across the world are still grappling with whether the dystopia of science fiction movies, where the actions of humans and robots are indistinguishable, is actually becoming reality.

Australia

Speaking at a conference in Sydney in November 2017, Rod Sims, chair of the Australian Competition and Consumer Commission, said the authority has not seen any anticompetitive algorithms that require an enforcement response “beyond what is now available to the ACCC under Australian law”. However, he acknowledged that technological advances could soon lead to an artificially intelligent robot engaging in sustained collusion with another robot.

“What happens in that situation? My answer is… let’s wait and see,” he said. At the same time, Sims warned companies not to use algorithms with this ability to collude. “In Australia, we take the view that you cannot avoid liability by saying ‘my robot did it’,” he said.

UK

Last year, the UK’s competition authority produced a report on pricing algorithms, which outlined three different ways that algorithms can facilitate or create tacit collusion. Hub-and-spoke cartels pose the most immediate risk to competition, it said, while “predictable agent” and “autonomous machine” collusion can occur if the algorithms are “sufficiently advanced and widespread”.

Hub-and-spoke cartels emerge when sellers use the same algorithm or data pool to determine price, and agree with a supplier to abide by that price. A “more serious situation” occurs when competitors use a common intermediary that provides algorithmic pricing services, the CMA said, as this may result in a hub-and-spoke-like framework even though competitors do not expressly fix prices.

The predictable agent theory of collusion occurs when humans independently design pricing algorithms that all react to external factors in a predictable way. These machines reduce “strategic uncertainty”, the CMA said, which may help sustain a tacitly coordinated outcome.

If the AI is sufficiently advanced, it can teach itself to collude even when it was not specifically designed to do so. This form of tacit collusion, which the CMA termed “autonomous machine” collusion, occurs when competitors unilaterally design an algorithm to reach a predetermined goal, such as profit maximisation. The algorithm can then experiment with the optimal pricing strategy, leading it to enhance market transparency and tacitly collude.

The main impact of pricing algorithms appears to be that they can exacerbate traditional risk factors such as transparency and the speed of price setting, the CMA said. Because algorithms can almost instantly observe all competitors’ prices, detect any deviation, and implement a price response that is objective and easily understandable by competitors, the UK authority said they can certainly facilitate and mask intentional collusion. The probability of this actually happening, it suggested, depends on how competitive a given market already is.
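The dynamic the CMA describes – constant monitoring, instant detection of a deviation and a predictable response – can be illustrated with a toy trigger-style rule. The price levels and tolerance below are hypothetical and chosen purely for illustration; they are not drawn from the CMA’s report or any real pricing system.

```python
# Toy illustration of the monitoring-and-response dynamic described above:
# follow an elevated price while rivals stay in line, and revert to a low
# "punishment" price the moment any rival is seen undercutting. All figures
# are hypothetical; this is not drawn from any real pricing system.

COORDINATED_PRICE = 12.00   # price the algorithm gravitates towards
COMPETITIVE_PRICE = 8.00    # price charged while "punishing" a deviation
TOLERANCE = 0.50            # how far a rival may drop before a response


def respond(rival_prices: list[float]) -> float:
    """Return today's price given the rivals' most recently observed prices."""
    deviated = any(p < COORDINATED_PRICE - TOLERANCE for p in rival_prices)
    return COMPETITIVE_PRICE if deviated else COORDINATED_PRICE


print(respond([12.00, 11.90]))   # 12.0 – everyone in line, stay high
print(respond([12.00, 10.50]))   # 8.0  – a rival deviated, punish
```

A rule this crude only keeps prices high if every rival runs something similar, which is why the CMA ties the risk to how prone a market already is to coordination.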

“In our tentative view, it seems less likely than not that the increasing use of data and algorithms would be so impactful that they could enable sustained collusion in markets that are currently highly competitive, or those with very differentiated products, many competitors, and low barriers to entry and expansion,” the UK watchdog concluded. But in markets already susceptible to coordination, the CMA said, “the increasing use of data and algorithmic pricing may be the ‘last piece of the puzzle’ that could allow suppliers to move to a coordinated equilibrium.”

Europe

In a submission to the European Commission’s Directorate-General for Competition, economic consultancy Oxera similarly endeavoured to distinguish between different kinds of algorithms. For “adaptive” algorithms, which follow rules set by their designers, collusion requires code that reveals an intent to collude, it noted. If an algorithm contains such code, competition authorities can establish collusion, as there is a clear meeting of the minds, Oxera said. “Learning” algorithmic pricing, by contrast, relies on artificial intelligence – and on machine learning in particular – to “experiment to learn the consequences of possible suboptimal prices”.

Oxera said the degree to which such collusion among algorithms is likely to happen in practice is not yet clear. Collusion is a “daunting task” because the algorithms must coordinate on both a collusive outcome and a mechanism to detect and punish companies that deviate from the intended collusive outcome, it said.

Germany’s Federal Cartel Office and France’s Competition Authority are working on a joint project about algorithmic pricing’s impact on competition policy. Sebastian Wismer, a member of the German enforcer’s digital unit, said at the Centre for Competition Policy conference in June that the project is a “piece of the puzzle” to help enforcers deal with competition challenges in the digitalised world.

Enforcers should not investigate an algorithm’s functioning just because it happens to be used by a company under investigation, Wismer said. But he proposed using several tools – many of which enforcers already possess – on a case-by-case basis.

First, investigating agencies can ask the company using the algorithm for a description of its “implementing principles”, Wismer said. For example, an explanation of the inputs and outputs could help an authority identify exactly what the algorithm aims to achieve.

Then, enforcers should ask what the algorithm’s role is within the business. “We might ask for internal documents as potential evidence,” Wismer said. These documents can include specifications for the algorithms, user manuals or code used in the development phase. In extreme cases, authorities could ask for part of the algorithm’s source code. Obtaining such code is “not the easiest exercise or our first choice”, Wismer admitted, but could help an authority approximate or recreate the algorithm within a “sanitised framework”, like a regulatory sandbox.

However, enforcers must still resolve whether companies are liable for what their algorithms did, Wismer said. Is the AI unilaterally and intelligently adapting to existing market conduct, or is it engaging in anticompetitive collusion on behalf of a company?

While his instinct is to treat an algorithm “just like you would treat an employee named Bob”, Wismer acknowledged that the question of liability has not yet arisen in case law – but that it will be crucial to resolve before authorities can begin truly enforcing in this area.

He advised enforcers to focus on identifying the “emergence” of collusion, rather than the method of collusion, to ascertain liability. Algorithms may be worse than humans at initiating collusion because algorithms cannot identify “focal points” for starting a cartel in the same way that humans can, he said.

Germany’s Monopolies Commission, an expert committee that advises the government on competition policy, has also looked at methods of enforcement when it comes to pricing algorithms. In a 2018 report, it said linking a commercial decision and the algorithmic pricing in the market may be difficult, particularly in “highly dynamic and very transparent” digital markets.

Cartelists in this sector can collude more tacitly than those operating in non-digital markets, it said. At the same time, companies can adapt their prices more quickly to changing market conditions than in non-digital markets. Enforcers should therefore monitor specific markets for “collusive risks” brought about by algorithmic pricing, the commission said, particularly through sector inquiries. It recommended that consumer associations obtain the right to initiate sector inquiries, as these associations are the most likely to obtain relevant information on “possible collusive overpricing”.

If an inquiry yields “concrete indications” that the use of pricing algorithms contributes considerably to collusive market outcomes, the German commission suggested reversing the traditional burden of proof with regard to liability for the economic damage caused. Companies using pricing algorithms would therefore have to show that the algorithm itself did not contribute to collusive outcomes, rather than the enforcer having to prove that it did.

The Monopolies Commission recommended EU-wide supplementary rules to “neutralise” the ability of pricing algorithms to collude. It also called for specific rules that impute liability to third parties who, by contributing their information technology expertise to algorithmic pricing, enhance collusive market outcomes. Such a heightened responsibility would deter software developers from building collusive tendencies into pricing algorithms in the first place, the commission said.

In July, Portugal’s Competition Authority released its own study into the competitive impact of algorithms, which found that retailers often use algorithms to monitor market prices in real time and then adjust their prices in response. The authority surveyed 38 retail companies active in Portugal’s digital economy about their use of monitoring algorithms, of which 53% said they did not use algorithms. But 37% said they used specific software to monitor the prices of their competitors, and of those, 79% then adjusted their own prices based on the algorithm’s output.

The Portuguese enforcer echoed concerns raised by other authorities, but drew no conclusions about whether these algorithms are pro-competitive or anticompetitive. It said it had not yet found any evidence of intelligent machine learning that would lead to tacit collusion.

US

The distinction between tacit collusion (conscious parallelism) and explicit collusion is particularly crucial under US law. The former is generally legal; the latter can be criminally prosecuted.

At GCR Live Miami in February 2018, US Department of Justice deputy assistant attorney general Barry Nigro emphasised that conscious parallelism and interdependent pricing are not illegal under US antitrust law. To bring criminal or civil charges, he said, the US agencies need evidence of communication between sellers or their algorithms that rises to the level of an agreement.

“Agreement to me is the key,” Nigro said, voicing scepticism that one could deem an algorithm to be a hub-and-spoke arrangement simply based on the “spokes” all using a single price-setting mechanism.

In November 2018, the US Federal Trade Commission (FTC) held public hearings on competition issues associated with the use of algorithms, artificial intelligence and predictive analytics in business decisions and conduct. Economic and legal experts agreed that enforcers lack tools to deal with collusion absent a clear meeting of the minds, such as collusion caused by machine learning.

The hearings raised questions for enforcers about how to tackle collusion that was not orchestrated by humans. “You can’t put a machine in jail, for example,” quipped the director of the FTC’s bureau of competition, Bruce Hoffman, during his opening remarks.

He suggested that part of the problem lies with a fundamental lack of knowledge about how AI works. “There is a lot of discussion among lawyers about the implications of artificial intelligence and algorithms, and I discovered from talking about them that I think there is literally no one in the room who understood anything about how those technologies worked,” Hoffman said.

Maurice Stucke, a professor at the University of Tennessee College of Law and cofounder of the Konkurrenz Group, warned at the FTC hearing that agencies’ tools may not be “up to snuff” when it comes to identifying and combating algorithmic collusion.

“It’s very important for the FTC not to discount this as Terminator, but rather to take this seriously – like many of the European officials – and start devoting resources to it,” he said.

Stucke put forward four proposals for enforcers: invest in better understanding the risks of algorithmic pricing, predominantly through market studies and research projects; improve their tools to detect collusion, by auditing algorithms in specific markets to better understand under what circumstances they tend toward collusion, and establishing “dedicated units” within the agency that specialise in artificial intelligence; refine the tools for merger enforcement to avert highly concentrated markets that enable collusion; and work to destabilise tacit collusion, instead of just focusing on ex post enforcement.

A new hope

But some academics argue that enforcers already have the tools to deal with colluding robots. If investigators just keep digging into companies’ internal documents, these scholars say, they’ll likely find some evidence of intent to either collude or build algorithms that can collude.

The Brattle Group economist Kai-Uwe Kühn said during the FTC hearing that AI-orchestrated coordination is “actually much harder than we always thought”. Research has focused thus far on “two-by-two games” where two market players have used algorithms to collude on two distinct products, he said. Throw in a third collusive algorithm and “it just all collapses”.

However, a study published in April by economists Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò and Sergio Pastorello found that AI-powered algorithms consistently learnt to collude, regardless of how many were active in a particular market. The study’s authors conducted experiments in controlled environments, finding in almost every case that the algorithms colluded to charge supracompetitive prices and punish those that deviated from the scheme.
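A heavily simplified sketch of that kind of experiment – two self-learning (Q-learning) agents repeatedly setting prices in a simulated market and being rewarded with profit – is set out below. The demand model, price grid and learning parameters are assumptions made for illustration only; they are not the authors’ actual experimental design.

```python
# Heavily simplified sketch of a self-learning pricing experiment of the kind
# described above: two Q-learning agents repeatedly pick prices and are
# rewarded with profit. The demand model, price grid and learning parameters
# are illustrative assumptions, not those used in the published study.
import random
from collections import defaultdict

PRICES = [1.0, 1.5, 2.0, 2.5]           # available price levels; cost is 1.0
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration


def profits(p1: float, p2: float) -> tuple[float, float]:
    """Toy demand: the cheaper firm wins most of the market; margin is price minus cost."""
    if p1 < p2:
        share1, share2 = 0.8, 0.2
    elif p1 > p2:
        share1, share2 = 0.2, 0.8
    else:
        share1 = share2 = 0.5
    return (p1 - 1.0) * share1, (p2 - 1.0) * share2


q = [defaultdict(float), defaultdict(float)]   # one Q-table per agent
state = (PRICES[0], PRICES[0])                 # last pair of observed prices

for _ in range(200_000):
    actions = []
    for i in (0, 1):
        if random.random() < EPSILON:                      # occasionally explore
            actions.append(random.choice(PRICES))
        else:                                              # otherwise exploit
            actions.append(max(PRICES, key=lambda p: q[i][(state, p)]))
    rewards = profits(actions[0], actions[1])
    next_state = (actions[0], actions[1])
    for i in (0, 1):
        best_next = max(q[i][(next_state, p)] for p in PRICES)
        q[i][(state, actions[i])] += ALPHA * (
            rewards[i] + GAMMA * best_next - q[i][(state, actions[i])]
        )
    state = next_state

print("Prices the agents settle on:", state)
```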

“They leave no trace whatever of concerted action: they do not communicate with one another, nor have they been designed or instructed to collude,” the study said. “From the standpoint of competition policy, these findings should clearly ring a bell.”

That said, the authors concluded that more experimentation was needed under real market conditions. Further research should also consider the speed at which the algorithms learn to collude and the diversity of available algorithms, they said.

Ai Deng, an associate director at the economic consultancy Nera who spoke at the FTC hearing, said regulators can still enforce competition rules without an in-depth knowledge of how algorithms are developed, through old-fashioned cartel detection tools. Like the German Federal Cartel Office’s Sebastian Wismer, he suggested that enforcers look at documents that shed light on the “design goals” of the algorithm or indicate the developers may have modified or revised the algorithm to further the goal of tacit coordination. Even marketing materials could show the developers have promoted to potential customers their algorithm’s ability to elicit tacit coordination from competitors, Deng said.

Rosa Abrantes-Metz, managing director of Global Economics Group, spoke on the FTC panel with Deng. She said that econometric analysis has not conclusively determined whether algorithms lead to higher or lower prices. Authorities should continue to assess whether pricing algorithms are driving convergence towards lower competitive prices or towards higher, potentially collusive prices, she said.

Abrantes-Metz called for the FTC and other competition authorities to consider providing guidelines to market participants that explain what pricing algorithms should or should not do, or what information they can or cannot consider. Companies can then be held responsible for monitoring their algorithms’ competitive impact, she said – though this “may require revisiting our current notions of ‘tacit’ and ‘explicit’ collusion”.

Ariel Ezrachi, a professor of competition law at the University of Oxford, believes in a more proactive enforcement approach. At a Centre for Competition Policy conference in London in June, he suggested that tacit collusion via algorithm is a much more imminent threat than first believed. It seems that algorithms can collude much better than humans, he said – and that justifies taking some sort of measured action to limit the possibility of their pushing prices up.

Testing algorithmic collusion in controlled or artificial environments such as a regulatory sandbox “may not produce outcomes”, as test environments fail to capture the industry awareness from which companies benefit when creating their algorithms, Ezrachi noted.

Yet he cautioned that any real-time action taken by authorities will lead to market distortions. Enforcers should, for now, focus on deterring hub-and-spoke cartels, he said – especially because some market players are “getting a little cocky” when it comes to these types of agreements. Combating hub-and-spoke cartels will not necessarily entail more enforcement action, Ezrachi added. For example, he said that guidelines on algorithmic pricing would be useful, as hub-and-spoke cartelists often operate in a “grey area” that could let them get away with collusion.

Even Deng, whose work previously advocated a restrained enforcement approach to such collusion, admits that technological advancements are bringing algorithmic collusion to the forefront. “I take a very evidence-based approach, and that can all change very quickly,” he says. “We need to keep a close eye on the field from an interdisciplinary standpoint.”

Deng emphasises that a technological understanding of pricing algorithms is crucial. “Antitrust folks have the interest but not the technical knowledge,” he says. “They need a better understanding of the risks in order to come up with solutions.”

So if colluding robots do not yet exist – and enforcers may not even be able to detect them once they do – why have so many academics, enforcers and practitioners continued to speculate about the issue?

Perhaps the topic of collusive pricing algorithms echoes wider anxieties about artificial intelligence: that it will one day outsmart humans to the point that people cannot be held responsible for what it does, yet it cannot be penalised for acting unlawfully.

But, at the moment, it seems that authorities can usually find some evidence of anticompetitive human intent behind a collusive machine. After all, humans still create and set the rules for the algorithms to follow – for now.
