
2018 | Original Paper | Book Chapter

ACM: Learning Dynamic Multi-agent Cooperation via Attentional Communication Model

Authors: Xue Han, Hongping Yan, Junge Zhang, Lingfeng Wang

Published in: Artificial Neural Networks and Machine Learning – ICANN 2018

Publisher: Springer International Publishing


Abstract

Collaboration among multiple agents is required in many real-world applications, yet it remains a challenging task because each agent observes the environment only partially. Communication is a common scheme for resolving this problem; however, most communication protocols are manually specified and cannot capture the dynamic interactions among agents. To address this limitation, this paper presents a novel Attentional Communication Model (ACM) for learning dynamic multi-agent cooperation. First, we propose a new Cooperation-aware Network (CAN) that captures the dynamic interactions among agents, covering both dynamic routing (who communicates with whom) and messaging (what is communicated). Second, the CAN is integrated into a Reinforcement Learning (RL) framework to learn the policy of multi-agent cooperation. The approach is evaluated in both discrete and continuous environments, where it outperforms competing methods.
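
The mechanism the abstract describes, soft attention between agent embeddings determining both the routing (who attends to whom) and the messaging (what content is aggregated), with the result conditioning each agent's policy, can be illustrated compactly. The PyTorch sketch below is a minimal reading of that idea, not the authors' actual CAN architecture; every layer name, dimension, and the single-layer policy head are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalComm(nn.Module):
    """Each agent encodes its own observation, attends over the other
    agents' encodings, and conditions its policy on the aggregated
    message (a sketch, not the paper's CAN)."""

    def __init__(self, obs_dim, hid_dim, n_actions):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hid_dim)       # per-agent observation encoder
        self.query = nn.Linear(hid_dim, hid_dim)         # attention queries
        self.key = nn.Linear(hid_dim, hid_dim)           # attention keys
        self.value = nn.Linear(hid_dim, hid_dim)         # message contents
        self.policy = nn.Linear(2 * hid_dim, n_actions)  # policy head on [own state; message]

    def forward(self, obs):                      # obs: (n_agents, obs_dim)
        h = torch.relu(self.encoder(obs))        # (n_agents, hid_dim)
        q, k, v = self.query(h), self.key(h), self.value(h)
        scores = q @ k.t() / h.size(-1) ** 0.5   # pairwise compatibilities
        self_mask = torch.eye(obs.size(0), dtype=torch.bool)
        scores = scores.masked_fill(self_mask, float('-inf'))  # route only between distinct agents
        attn = F.softmax(scores, dim=-1)         # dynamic routing weights
        msg = attn @ v                           # aggregated incoming messages
        logits = self.policy(torch.cat([h, msg], dim=-1))
        return F.softmax(logits, dim=-1)         # per-agent action distribution

# Usage: 4 agents, 8-dim observations, 5 discrete actions.
net = AttentionalComm(obs_dim=8, hid_dim=16, n_actions=5)
action_probs = net(torch.randn(4, 8))            # shape (4, 5)

In a full pipeline, these action probabilities would be trained end-to-end with a policy-gradient method (the footnote below mentions a TRPO-based variant), so the routing weights and the policy are learned jointly rather than hand-specified.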


Footnotes
1
CANR-TRPO is built on the idea of [9], except that the TRPO algorithm is used here to learn the policy.
 
References
1.
Chorowski, J.K., Bahdanau, D., Serdyuk, D., Cho, K., Bengio, Y.: Attention-based models for speech recognition. In: Advances in Neural Information Processing Systems, pp. 577–585 (2015)
2.
Dobbe, R., Fridovich-Keil, D., Tomlin, C.: Fully decentralized policies for multi-agent systems: an information theoretic approach. In: Advances in Neural Information Processing Systems, pp. 2945–2954 (2017)
3.
Foerster, J., Assael, Y., de Freitas, N., Whiteson, S.: Learning to communicate with deep multi-agent reinforcement learning. In: Advances in Neural Information Processing Systems, pp. 2137–2145 (2016)
4.
Foerster, J., Farquhar, G., Afouras, T., Nardelli, N., Whiteson, S.: Counterfactual multi-agent policy gradients. arXiv preprint arXiv:1705.08926 (2017)
5.
Foerster, J.N., Chen, R.Y., Al-Shedivat, M., Whiteson, S., Abbeel, P., Mordatch, I.: Learning with opponent-learning awareness. arXiv preprint arXiv:1709.04326 (2017)
8.
Hermann, K.M., et al.: Teaching machines to read and comprehend. In: Advances in Neural Information Processing Systems, pp. 1693–1701 (2015)
9.
Hoshen, Y.: VAIN: attentional multi-agent predictive modeling. In: Advances in Neural Information Processing Systems, pp. 2698–2708 (2017)
10.
Hüttenrauch, M., Šošić, A., Neumann, G.: Learning complex swarm behaviors by exploiting local communication protocols with deep reinforcement learning. arXiv preprint arXiv:1709.07224 (2017)
11.
Kurek, M., Jaśkowski, W.: Heterogeneous team deep Q-learning in low-dimensional multi-agent environments. In: 2016 IEEE Conference on Computational Intelligence and Games (CIG), pp. 1–8. IEEE (2016)
12.
Lanctot, M., et al.: A unified game-theoretic approach to multiagent reinforcement learning. In: Advances in Neural Information Processing Systems, pp. 4191–4204 (2017)
13.
Leibo, J.Z., Zambaldi, V., Lanctot, M., Marecki, J., Graepel, T.: Multi-agent reinforcement learning in sequential social dilemmas. In: Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems, pp. 464–473. International Foundation for Autonomous Agents and Multiagent Systems (2017)
14.
Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., Mordatch, I.: Multi-agent actor-critic for mixed cooperative-competitive environments. arXiv preprint arXiv:1706.02275 (2017)
15.
Mao, H., et al.: ACCNet: actor-coordinator-critic net for "learning-to-communicate" with deep multi-agent reinforcement learning. arXiv preprint arXiv:1706.03235 (2017)
16.
Mnih, V., et al.: Human-level control through deep reinforcement learning. Nature 518(7540), 529–533 (2015)
17.
Schulman, J., Levine, S., Abbeel, P., Jordan, M., Moritz, P.: Trust region policy optimization. In: Proceedings of the 32nd International Conference on Machine Learning (ICML-15), pp. 1889–1897 (2015)
18.
da Silva, F.L., Glatt, R., Costa, A.H.R.: Simultaneously learning and advising in multi-agent reinforcement learning. In: Proceedings of the 16th Conference on Autonomous Agents and Multiagent Systems, pp. 1100–1108. International Foundation for Autonomous Agents and Multiagent Systems (2017)
19.
Silver, D., et al.: Mastering the game of Go with deep neural networks and tree search. Nature 529(7587), 484–489 (2016)
20.
Silver, D., et al.: Mastering the game of Go without human knowledge. Nature 550(7676), 354 (2017)
21.
Sukhbaatar, S., Fergus, R., et al.: Learning multiagent communication with backpropagation. In: Advances in Neural Information Processing Systems, pp. 2244–2252 (2016)
22.
Tan, M.: Multi-agent reinforcement learning: independent vs. cooperative agents. In: Proceedings of the Tenth International Conference on Machine Learning, pp. 330–337 (1993)
Metadata
Title
ACM: Learning Dynamic Multi-agent Cooperation via Attentional Communication Model
Authors
Xue Han
Hongping Yan
Junge Zhang
Lingfeng Wang
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-030-01421-6_22