
2023 | OriginalPaper | Chapter

Reinforcement Learning for Multi-Agent Stochastic Resource Collection

Authors: Niklas Strauss, David Winkel, Max Berrendorf, Matthias Schubert

Published in: Machine Learning and Knowledge Discovery in Databases

Publisher: Springer Nature Switzerland

Abstract

Stochastic Resource Collection (SRC) describes tasks in which an agent tries to collect a maximal amount of dynamic resources while navigating through a road network. An instance of SRC is the traveling officer problem (TOP), where a parking officer tries to maximize the number of fined parking violations. In contrast to vehicular routing problems, resources in SRC tasks may appear and disappear according to an unknown stochastic process, which makes the task inherently more dynamic. In most applications of SRC, such as TOP, covering realistic scenarios requires more than one agent. However, directly applying multi-agent approaches to SRC raises challenges concerning temporal abstraction and inter-agent coordination. In this paper, we propose a novel multi-agent reinforcement learning method for the task of Multi-Agent Stochastic Resource Collection (MASRC). To this end, we formalize MASRC as a Semi-Markov Game, which allows for temporal abstraction and asynchronous actions by the various agents. In addition, we propose a novel architecture trained with independent learning, which integrates information about collaborating agents and allows us to take advantage of temporal abstractions. Our agents are evaluated on the multiple traveling officer problem, an instance of MASRC where several officers try to maximize the number of fined parking violations. Our simulation environment is based on real-world sensor data. The results demonstrate that our proposed agent outperforms various state-of-the-art approaches.
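
The abstract's key modeling choice, formalizing MASRC as a Semi-Markov Game, can be illustrated with a small toy environment. The sketch below is a minimal illustration under stated assumptions, not the paper's simulator; all names (MASRCEnv, step, the spawn/expiry probabilities) are hypothetical. It shows the two properties the abstract emphasizes: actions are temporally extended, since traveling to a target takes a random number of primitive time steps, so agents decide asynchronously; and resources appear and disappear by a stochastic process while agents are en route.

    import random

    class MASRCEnv:
        """Toy Multi-Agent Stochastic Resource Collection environment.

        Illustrative sketch only. An agent's action (move to a target node)
        lasts a random number of primitive time steps, so agents act
        asynchronously (the semi-Markov aspect). Resources, e.g. parking
        violations, appear and expire according to a stochastic process.
        """

        def __init__(self, n_nodes=20, n_agents=3, p_spawn=0.10, p_expire=0.05, horizon=500):
            self.n_nodes, self.n_agents = n_nodes, n_agents
            self.p_spawn = p_spawn      # per-step chance a node spawns a resource
            self.p_expire = p_expire    # per-step chance an active resource vanishes
            self.horizon = horizon

        def reset(self):
            self.t = 0
            self.resources = set()                # nodes with an active resource
            self.pos = [0] * self.n_agents        # current node of each agent
            self.arrival = [0] * self.n_agents    # time the current option terminates
            self.target = [None] * self.n_agents
            return self._obs(), list(range(self.n_agents))  # all agents start idle

        def _obs(self):
            return {"t": self.t, "resources": frozenset(self.resources), "pos": tuple(self.pos)}

        def _resource_dynamics(self):
            for v in range(self.n_nodes):
                if v in self.resources:
                    if random.random() < self.p_expire:
                        self.resources.discard(v)  # vanished before being collected
                elif random.random() < self.p_spawn:
                    self.resources.add(v)

        def step(self, actions):
            """actions: dict mapping every currently idle agent id to a target node."""
            for i, tgt in actions.items():
                self.target[i] = tgt
                self.arrival[i] = self.t + random.randint(1, 5)  # stochastic travel time

            def someone_arrived():
                return any(self.target[i] is not None and self.arrival[i] <= self.t
                           for i in range(self.n_agents))

            # Advance primitive time until some agent's option terminates.
            while self.t < self.horizon and not someone_arrived():
                self.t += 1
                self._resource_dynamics()
            rewards, idle = [0.0] * self.n_agents, []
            for i in range(self.n_agents):
                if self.target[i] is not None and self.arrival[i] <= self.t:
                    self.pos[i], self.target[i] = self.target[i], None
                    idle.append(i)
                    if self.pos[i] in self.resources:  # collect (fine) the resource
                        self.resources.discard(self.pos[i])
                        rewards[i] = 1.0
            return self._obs(), rewards, self.t >= self.horizon, idle

    # Random-policy rollout: each idle agent picks a random target node.
    env = MASRCEnv()
    obs, idle = env.reset()
    done, total = False, 0.0
    while not done:
        actions = {i: random.randrange(env.n_nodes) for i in idle}
        obs, rewards, done, idle = env.step(actions)
        total += sum(rewards)
    print("resources collected:", total)

In the paper's setting, the random policy would be replaced by the learned agents; the point of the sketch is only the asynchronous decision points: step returns control exactly when some agent's temporally extended action terminates, which is what a Semi-Markov Game formalization captures.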


Metadata
Title
Reinforcement Learning for Multi-Agent Stochastic Resource Collection
Authors
Niklas Strauss
David Winkel
Max Berrendorf
Matthias Schubert
Copyright Year
2023
DOI
https://doi.org/10.1007/978-3-031-26412-2_13
