Published in: Review of Industrial Organization 1/2019

09.02.2019

Algorithmic Pricing: What Implications for Competition Policy?

By: Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò, Sergio Pastorello


Abstract

Pricing decisions are increasingly in the “hands” of artificial algorithms. Scholars and competition authorities have voiced concerns that those algorithms are capable of sustaining collusive outcomes more effectively than can human decision makers. If this is so, then our traditional policy tools for fighting collusion may have to be reconsidered. We discuss these issues by critically surveying the relevant law, economics, and computer science literature.


Footnotes
1
British Airways seems to have been the first company to use pricing algorithms in the 1970s.
 
2
The European Commission’s 2017 “Final report on the E-commerce Sector Inquiry” concludes that “A majority of retailers track the online prices of competitors. Two-thirds of them use software programs that autonomously adjust their own prices based on the observed prices of competitors”.
 
3
AP could provide a competing explanation (and thus an identification challenge) for the evidence of higher online (relative to offline) prices in some markets. The prevalent explanation is that of an increase in the match quality (Ellison and Ellison 2018). AP could also speak to the question of what is causing (online) price dispersion both in the cross-section and in the time-series in seemingly homogeneous product markets (Chen et al. 2016).
 
4
For example, the New Yorker asked what happens “When bots collude” (April 25, 2015), and the Financial Times wrote of “Digital cartels” (January 8, 2017).
 
5
The Acting Chairman of the U.S. Federal Trade Commission M. Ohlhausen, “Should We Fear the Things That Go Beep in the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing,” remarks at the “Concurrences Antitrust in the Financial Sector Conference,” New York, May 23, 2017. OECD Roundtable on Algorithms and Collusion, June 2017. The European Commissioner for Competition M. Vestager, “Algorithms and Competition,” remarks at the Bundeskartellamt 18th Conference on Competition, Berlin, March 16, 2017.
 
6
Wired Magazine; U.S. v. Topkins, 2015 and CMA case 2015 n. 50223.
 
7
See Olivia Solon, 2015, “How a book about flies came to be priced $24 million on Amazon”.
 
9
Harrington (2017) develops a legal approach for collusion with AP that is grounded in economic analysis.
 
10
The term may be imprecise as these algorithms learn only in a very limited sense, as we shall clarify below.
 
11
Whether this result survives when market conditions are not stationary (and hence the estimation function of the pricing algorithm is active) is an open question.
 
12
For example, the second-hand book episode seems to have been generated by adaptive algorithms that set their own price as a multiple of the rival’s price. If two firms adopt a pricing rule of the type \(p_{i} = a_{i} p_{j}\), the system explodes whenever \(a_{i} a_{j} > 1\). As a result, prices may get very high indeed, but the outcome will be poor in terms of profit maximization.
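The divergence under such a rule is easy to verify numerically. The sketch below (the multipliers \(a_1 = 1.2\), \(a_2 = 1.1\) are hypothetical, not taken from the episode itself) simply alternates the two reaction functions:

```python
# Two sellers re-price as a fixed multiple of the rival's last observed price:
#   p1 = a1 * p2   and   p2 = a2 * p1
# Illustrative only; the multipliers below are made-up values.
def simulate(a1, a2, p1=10.0, p2=10.0, rounds=20):
    """Alternate the adaptive updates for a number of rounds."""
    for _ in range(rounds):
        p1 = a1 * p2
        p2 = a2 * p1
    return p1, p2

# a1 * a2 = 1.32 > 1: prices grow without bound (as in the book episode).
exploding = simulate(1.2, 1.1)

# a1 * a2 = 0.81 < 1: prices instead collapse toward zero.
collapsing = simulate(0.9, 0.9)
```

Each pair of updates scales prices by the factor \(a_{1} a_{2}\), so the process is a geometric sequence: explosive when the product exceeds 1, vanishing when it falls short of 1, and in neither case related to profit maximization.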
 
13
Sannikov and Skrzypacz (2007) derive this result by assuming that agents optimally extract the signal from the noisy information that they receive. Whether the result continues to be true also for AP remains to be investigated.
 
14
ML techniques have been developed and are currently adopted for a large number of applications. By far the most popular in the social sciences are classification tasks (with supervised learning) and the discovery of hidden structure in big datasets (with unsupervised learning).
 
15
If the environment is stationary, experimentation is crucial initially but may optimally vanish eventually; in a dynamic environment, in contrast, it may be optimal to keep experimenting forever.
 
16
For a textbook introduction to Q-learning see Sutton and Barto (1998).
 
17
To economists, the analysis of an MDP may be reminiscent of the analysis of Markov-perfect industry dynamics, as summarized in Doraszelski and Pakes (2007). There are, however, substantial differences: in that literature, the numerical methods for equilibrium identification rely on the industry structure and solve systems of firms’ optimality conditions; here, instead, the algorithms are model free and learn through experimentation.
 
18
Q-learning is also related to the idea of active learning in game theory (see Fudenberg and Levine 2016). It is normally associated with the more general class of “reinforcement learning” models, in which an agent learns by interacting with the environment, perceiving its state, and taking actions by trial and error. Actions that are associated with a positive consequence (a reward) then have a higher chance of being chosen in the future: they are reinforced in the agent’s behavior.
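The trial-and-error loop just described can be sketched in a few lines of tabular Q-learning, using the update rule of Watkins and Dayan (1992). The toy environment, parameter values, and function names below are illustrative assumptions of ours, not taken from the paper; the point is only that the agent is model free and learns from sampled rewards alone.

```python
import random

def q_learning(reward, n_states, n_actions, episodes=5000,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration.

    `reward(s, a, rng)` returns (sampled reward, next state); the agent
    never sees the reward function itself, only its draws.
    """
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    s = 0
    for _ in range(episodes):
        # Explore with probability epsilon, otherwise exploit the Q estimates.
        if rng.random() < epsilon:
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda x: Q[s][x])
        r, s_next = reward(s, a, rng)
        # Watkins-Dayan update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next
    return Q

# Hypothetical toy environment: in every state, action 1 pays 1, action 0 pays 0,
# and the next state is drawn at random.
def toy(s, a, rng):
    return (1.0 if a == 1 else 0.0), rng.randrange(2)

Q = q_learning(toy, n_states=2, n_actions=2)
```

After enough episodes the estimated value of action 1 exceeds that of action 0 in both states, so the greedy policy picks the rewarded action: it has been reinforced.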
 
19
Still, Busoniu et al. (2008) document cases in which these algorithms have shown good performance in terms of convergence.
 
20
For a comprehensive survey see Busoniu et al. (2008).
 
21
Note that this does not imply that the values of the Q matrix remain unchanged, but only that the ranking between \(Q_{i}(s,1)\) and \(Q_{i}(s,2)\) is unchanged.
 
22
In a celebrated experiment, Mnih et al. (2015) show how a deep reinforcement-learning algorithm learned to solve complex tasks, such as playing classic Atari videogames better than humans, using as the state the colors of the 210 × 160 pixels on the screen (with a 128-colour palette) together with the game score.
 
23
Another recent and promising example of cooperation in a multi-agent setting is the work at Facebook AI Research: e.g., Lerer and Peysakhovich (2018).
 
24
They are not sufficient because, for example, the Max operator in the formula for Q-learning may lead to non-unique solutions.
 
25
Cooper and Kühn (2014) have shown that communication between humans helps cooperation by clarifying how individuals think about the environment and whether they really intend to punish deviations, and by making social punishments and rewards explicit.
 
26
Salcedo (2015) presents a theoretical model that shows that collusion with algorithms is not only possible, but also inevitable. However, the result relies on algorithms’ ability to read into other algorithms and thus learn their “intentions,” and on some commitment not to revise the algorithms in use.
 
27
In this respect it is interesting to note that the use of learning and optimizing software may induce firms to behave more like the dynamic profit-maximizing firms of standard economic theory.
 
References
Bloembergen, D., Tuyls, K., Hennes, D., & Kaisers, M. (2015). Evolutionary dynamics of multi-agent learning: A survey. Journal of Artificial Intelligence Research, 53, 659–697.
Busoniu, L., Babuska, R., & De Schutter, B. (2008). A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics-Part C: Applications and Reviews, 38(2), 156–172.
Calvano, E., Calzolari, G., Denicolò, V., & Pastorello, S. (2018). Artificial intelligence, algorithmic pricing and collusion. CEPR Discussion Paper 13405.
Chen, L., Mislove, A., & Wilson, C. (2016). An empirical analysis of algorithmic pricing on Amazon marketplace. In Proceedings of the 25th international conference on World Wide Web. International World Wide Web Conferences Steering Committee.
Cooper, R. W., DeJong, D. V., Forsythe, R., & Ross, T. W. (1990). Selection criteria in coordination games: Some experimental results. The American Economic Review, 80(1), 218–233.
Cooper, D. J., & Kühn, K. U. (2014). Communication, renegotiation, and the scope for collusion. American Economic Journal: Microeconomics, 6(2), 247–278.
Crandall, J. W., Oudah, M., Tennom, Ishowo-Oloko, F., Abdallah, S., Bonnefon, J., et al. (2018). Cooperating with machines. Nature Communications, 9(233).
Dogan, I., & Guner, A. R. (2015). A reinforcement learning approach to competitive ordering and pricing problem. Expert Systems, 32, 39–47.
Doraszelski, U., & Pakes, A. (2007). A framework for applied dynamic analysis in IO. In M. Armstrong & R. H. Porter (Eds.), Handbook of industrial organization (Vol. 3, Chapter 4). Amsterdam: Elsevier Science.
Ellison, G., & Ellison, S. F. (2018). Match quality, search, and the Internet market for used books (No. w24197). National Bureau of Economic Research.
Erev, I., & Roth, A. E. (1998). Predicting how people play games: Reinforcement learning in experimental games with unique, mixed strategy equilibria. American Economic Review, 88, 848–881.
Ezrachi, A., & Stucke, M. E. (2015). Artificial intelligence and collusion: When computers inhibit competition. Oxford Legal Studies Research Paper No. 18/2015, University of Tennessee Legal Studies Research Paper No. 267.
Fudenberg, D., & Levine, D. K. (2016). Whither game theory? Towards a theory of learning in games. The Journal of Economic Perspectives, 30(4), 151–169.
Harrington, J. E. (2017). Developing competition law for collusion by autonomous agents. Working paper, The Wharton School, University of Pennsylvania.
Hu, J., & Wellman, M. P. (2003). Nash Q-learning for general-sum stochastic games. Journal of Machine Learning Research, 4, 1039–1069.
Lerer, A., & Peysakhovich, A. (2018). Maintaining cooperation in complex social dilemmas using deep reinforcement learning. arXiv preprint arXiv:1707.01068.
Mehra, S. K. (2016). Antitrust and the robo-seller: Competition in the time of algorithms. Minnesota Law Review, 100, 1323–1375.
Milgrom, P. R., & Roberts, D. J. (1990). Rationalizability, learning, and equilibrium in games with strategic complementarities. Econometrica, 58, 1255–1277.
Roth, A. E., & Erev, I. (1995). Learning in extensive-form games: Experimental data and simple dynamic models in the intermediate term. Games and Economic Behavior, 8, 164–212.
Salcedo, B. (2015). Pricing algorithms and tacit collusion. Working paper, Pennsylvania State University.
Sannikov, Y., & Skrzypacz, A. (2007). Impossibility of collusion under imperfect monitoring with flexible production. American Economic Review, 97, 1794–1823.
Sarin, R., & Vahid, F. (2001). Predicting how people play games: A simple dynamic model of choice. Games and Economic Behavior, 34, 104–122.
Sukhbaatar, S., Szlam, A., & Fergus, R. (2016). Learning multiagent communication with backpropagation. arXiv preprint arXiv:1605.07736.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge: MIT Press.
Tampuu, A., Matiisen, T., Kodelja, D., Kuzovkin, I., Korjus, K., Aru, J., et al. (2017). Multiagent cooperation and competition with deep reinforcement learning. PLoS ONE, 12(4), e0172395.
Tesauro, G., & Kephart, J. O. (2002). Pricing in agent economies using multi-agent Q-learning. Autonomous Agents and Multi-Agent Systems, 5, 289–304.
Waltman, L., & Kaymak, U. (2008). Q-learning agents in a Cournot oligopoly model. Journal of Economic Dynamics and Control, 32(10), 3275–3293.
Watkins, C. J. C. H., & Dayan, P. (1992). Q-learning. Machine Learning, 8, 279–292.
Xie, M., & Chen, J. (2004). Studies on horizontal competition among homogeneous retailers through agent-based simulations. Journal of Systems Science and Systems Engineering, 13, 490–505.
Metadata
Title
Algorithmic Pricing: What Implications for Competition Policy?
Authors
Emilio Calvano
Giacomo Calzolari
Vincenzo Denicolò
Sergio Pastorello
Publication date
09.02.2019
Publisher
Springer US
Published in
Review of Industrial Organization / Issue 1/2019
Print ISSN: 0889-938X
Electronic ISSN: 1573-7160
DOI
https://doi.org/10.1007/s11151-019-09689-3
