01.12.2021 | Research | Issue 1/2021 | Open Access

Computational Social Networks 1/2021

Influence maximization in social media networks concerning dynamic user behaviors via reinforcement learning

Journal:
Computational Social Networks > Issue 1/2021
Authors:
Mengnan Chen, Qipeng P. Zheng, Vladimir Boginski, Eduardo L. Pasiliao
Important notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Abstract

This study examines the influence maximization (IM) problem via information cascades in random graphs whose topology changes dynamically due to the uncertainty of user behavior. It leverages the discrete choice model (DCM) to compute the probability that a directed arc exists between any two nodes. In this IM problem, the DCM provides a good description and prediction of user behavior in terms of following or not following a neighboring user. To find the maximal influence at the end of a finite time horizon, the IM problem is modeled as a multistage stochastic program, which helps a decision-maker select the optimal seed nodes from which to broadcast messages efficiently. Since the computational complexity grows exponentially with network size and time horizon, the original model cannot be solved within a reasonable time. Two approaches are therefore used to approximate the optimal decision: myopic two-stage stochastic programming and reinforcement learning via the Markov decision process. Computational experiments show that the reinforcement learning method outperforms the myopic two-stage stochastic programming method.
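To illustrate the role of the DCM described above, the following minimal sketch shows how a binary logit choice model can turn pairwise features into the probability that a directed arc (i.e., a "follow" decision) exists between two users. This is not the paper's implementation; the feature names (common neighbors, activity level) and the weight values are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code): a binary logit
# discrete choice model for the probability that user j follows user i,
# i.e., that the directed arc (i, j) exists in the cascade graph.
import math


def follow_probability(utility_follow: float, utility_not_follow: float = 0.0) -> float:
    """Binary logit choice probability: P(follow) = exp(V1) / (exp(V1) + exp(V0))."""
    return math.exp(utility_follow) / (math.exp(utility_follow) + math.exp(utility_not_follow))


def arc_existence_probability(common_neighbors: int, activity_level: float,
                              beta=(-1.0, 0.8, 0.5)) -> float:
    # beta = (intercept, weight on common neighbors, weight on activity level);
    # these features and coefficients are hypothetical, for illustration only.
    v_follow = beta[0] + beta[1] * common_neighbors + beta[2] * activity_level
    return follow_probability(v_follow)


if __name__ == "__main__":
    # Prints a probability of roughly 0.85 for this illustrative feature vector.
    print(arc_existence_probability(common_neighbors=3, activity_level=0.6))
```

In a model of this kind, each realization of these arc probabilities yields one possible network topology, which is what makes the graph random and motivates the stochastic programming and reinforcement learning formulations discussed in the abstract.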
