
2022 | Original Paper | Book Chapter

Learning to Act: A Reinforcement Learning Approach to Recommend the Best Next Activities

Authors: Stefano Branchi, Chiara Di Francescomarino, Chiara Ghidini, David Massimo, Francesco Ricci, Massimiliano Ronzani

Published in: Business Process Management Forum

Publisher: Springer International Publishing

Abstract

The rise of process data availability has recently led to the development of data-driven learning approaches. However, most of these approaches restrict the use of the learned model to predicting the future of ongoing process executions. The goal of this paper is to move a step forward and leverage the available data to learn to act, by supporting users with recommendations derived from a strategy that is optimal with respect to a measure of performance. We take the optimization perspective of one process actor and recommend the best activities to execute next, in response to what happens in a complex external environment over whose exogenous factors the actor has no control. To this aim, we investigate an approach that learns, by means of Reinforcement Learning, the optimal policy from the observation of past executions and recommends the best activities to carry out in order to optimize a Key Performance Indicator of interest. The validity of the approach is demonstrated on two scenarios taken from real-life data.
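In outline, the approach learns a policy from observed executions with Reinforcement Learning and then recommends, in each state, the activity with the highest learned value. A minimal tabular Q-learning sketch of that idea (the activity names, reward signal, and horizon below are toy assumptions for illustration, not the paper's actual event data or implementation):

```python
import random
from collections import defaultdict

# Hypothetical toy setting: a state pairs a step index with the last executed
# activity; actions are candidate next activities; the reward is a toy KPI.
ACTIVITIES = ["send_offer", "call_customer", "escalate", "close"]
HORIZON = 4  # fixed episode length for this toy example

def reward(last, action):
    # Toy KPI: calling the customer right after sending an offer pays off.
    return 1.0 if (last == "send_offer" and action == "call_customer") else 0.0

def q_learning(episodes=3000, alpha=0.1, gamma=1.0, eps=0.3, seed=7):
    """Tabular Q-learning; gamma = 1 weights every action point equally."""
    rng = random.Random(seed)
    Q = defaultdict(float)
    for _ in range(episodes):
        last = "start"
        for step in range(HORIZON):
            state = (step, last)
            # epsilon-greedy exploration over the candidate activities
            if rng.random() < eps:
                action = rng.choice(ACTIVITIES)
            else:
                action = max(ACTIVITIES, key=lambda a: Q[(state, a)])
            r = reward(last, action)
            nxt = (step + 1, action)
            # Episodes terminate after HORIZON steps: no bootstrap at the end.
            best_next = 0.0 if step + 1 == HORIZON else max(
                Q[(nxt, a)] for a in ACTIVITIES)
            Q[(state, action)] += alpha * (r + gamma * best_next - Q[(state, action)])
            last = action
    return Q

Q = q_learning()

def recommend(state):
    """Greedy policy: recommend the best next activity in the given state."""
    return max(ACTIVITIES, key=lambda a: Q[(state, a)])

print(recommend((1, "send_offer")))  # best follow-up under this toy reward
```

The sketch uses a finite horizon so that the undiscounted return (gamma = 1) stays bounded; the paper's footnote 1 makes the same gamma = 1 choice for its MDP.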


Footnotes
1
In this paper we set \(\gamma =1\), hence weighting equally the reward obtained at each action point of the target actor.
 
2
The information on the average interest rate is extracted from the BPI2017 [5] dataset, which contains data from the same financial institution.
 
3
We estimate the average salary of a bank employee in the Netherlands from https://www.salaryexpert.com/salary/job/banking-disbursement-clerk/netherlands.
 
4
The complete MDP description is available at tinyurl.com/2p8aytrb.
 
5
The MDP actions in this scenario take into account, besides the activity name, also the 2-month interval (since the creation of the fine) in which the activity was carried out (attribute 2months).
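For illustration only (the function name and encoding below are assumptions, not the paper's code), such a composite action can be represented as an activity name paired with its 2-month interval index:

```python
def encode_action(activity, months_since_fine_creation):
    """Represent an MDP action as (activity name, 2-month interval index)."""
    return (activity, months_since_fine_creation // 2)

# Month 5 after the fine's creation falls in the third 2-month interval (index 2).
print(encode_action("Send for Credit Collection", 5))
```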
 
References
1.
De Giacomo, G., Iocchi, L., Favorito, M., Patrizi, F.: Foundations for restraining bolts: reinforcement learning with LTLf/LDLf restraining specifications. In: Proceedings of the 29th International Conference on Automated Planning and Scheduling (ICAPS 2019), pp. 128–136. AAAI Press (2019)
2.
de Leoni, M., Dees, M., Reulink, L.: Design and evaluation of a process-aware recommender system based on prescriptive analytics. In: 2nd International Conference on Process Mining (ICPM 2020), pp. 9–16. IEEE (2020)
6.
Dumas, M.: Constructing digital twins for accurate and reliable what-if business process analysis. In: Proceedings of the International Workshop on BPM Problems to Solve Before We Die (PROBLEMS 2021). CEUR Workshop Proceedings, vol. 2938, pp. 23–27. CEUR-WS.org (2021)
8.
Fahrenkrog-Petersen, S.A., et al.: Fire now, fire later: alarm-based systems for prescriptive process monitoring. arXiv preprint arXiv:1905.09568 (2019)
10.
Hu, J., Niu, H., Carrasco, J., Lennox, B., Arvin, F.: Voronoi-based multi-robot autonomous exploration in unknown environments via deep reinforcement learning. IEEE Trans. Veh. Technol. 69(12), 14413–14423 (2020)
11.
Huang, Z., van der Aalst, W., Lu, X., Duan, H.: Reinforcement learning based resource allocation in business process management. Data Knowl. Eng. 70(1), 127–145 (2011)
16.
Massimo, D., Ricci, F.: Harnessing a generalised user behaviour model for next-POI recommendation. In: Proceedings of the 12th ACM Conference on Recommender Systems (RecSys 2018), pp. 402–406. ACM (2018)
19.
Park, G., Song, M.: Prediction-based resource allocation using LSTM and minimum cost and maximum flow algorithm. In: International Conference on Process Mining (ICPM 2019), pp. 121–128. IEEE (2019)
23.
Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, 2nd edn. MIT Press (2018)
Metadata
Copyright year: 2022
DOI: https://doi.org/10.1007/978-3-031-16171-1_9
