Mobile messaging has become a trend in our daily lives and is vital in supporting new services in smart cities. The current scheme for messaging is to route all messages between mobile users through a centralized server. This scheme, though reliable, places a very heavy load on the server. It is possible for users to communicate through peer-to-peer (P2P) connections instead, especially over urban networks characterized by heavy user traffic and dense network connectivity. P2P connections, however, do not always provide the best user experience, as they are sometimes unreliable due to fluctuations in network coverage. We propose an intelligent messaging framework based on reinforcement learning to strike a balance between reducing server load and improving user experience. The system learns and adapts in real time to user mobility and messaging patterns, dynamically choosing between routing through the server and routing via a P2P connection. As it does not rely on user location information, user privacy is preserved. Performance evaluation through simulation of user movement and messaging patterns demonstrates that the system finds the best messaging policy for users, achieves a good balance between heavy server load and unreliable communication, and provides a fine user messaging experience while reducing server load. We believe that this work is significant for future smart cities and urban networking, where mobile messaging will be prominent among mobile users as well as mobile smart objects.
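The core decision described above, choosing between server routing and P2P routing based on learned experience, can be illustrated with a minimal tabular Q-learning sketch. The state labels, reward values, and reliability figures below are illustrative assumptions for a toy environment, not the chapter's actual model; the agent simply learns to prefer P2P when the peer link is reliable and to fall back to the server otherwise.

```python
import random

ACTIONS = ("SERVER", "P2P")  # route via the central server, or directly peer-to-peer


class MessagingAgent:
    """Tabular Q-learning agent that picks a routing action per state."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (state, action) -> estimated value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        # Epsilon-greedy exploration over the two routing actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update toward reward + discounted best next value.
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old
        )


def reward(action, p2p_reliability):
    # Hypothetical reward shaping: server delivery always succeeds but is
    # penalized for the load it creates; P2P is rewarded on success and
    # penalized on failure (delivery fails with probability 1 - reliability).
    if action == "SERVER":
        return 0.5
    return 1.0 if random.random() < p2p_reliability else -1.0


# Train on a toy environment where the state is a coarse P2P-reliability
# bucket (in the real system this would come from observed delivery outcomes).
random.seed(42)
agent = MessagingAgent()
for _ in range(10000):
    rel = random.choice((0.1, 0.95))
    state = "good" if rel > 0.5 else "poor"
    a = agent.choose(state)
    agent.update(state, a, reward(a, rel), state)

# Extract the greedy policy: P2P when the link is reliable, server otherwise.
policy = {
    s: max(ACTIONS, key=lambda a: agent.q.get((s, a), 0.0))
    for s in ("good", "poor")
}
print(policy)
```

Note that the agent needs no location data: the state is derived only from observed delivery success, which matches the privacy property claimed in the abstract.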
Intelligent Mobile Messaging for Smart Cities Based on Reinforcement Learning