
Transfer Learning with Dynamic Distribution Adaptation

Published: 06 February 2020

Abstract

Transfer learning aims to learn robust classifiers for the target domain by leveraging knowledge from a source domain. Since the source and target domains usually follow different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions. However, in real applications, the marginal and conditional distributions usually contribute differently to the domain discrepancy. Existing methods cannot quantitatively evaluate the relative importance of these two distributions, which results in unsatisfactory transfer performance. In this article, we propose a novel concept called Dynamic Distribution Adaptation (DDA), which is capable of quantitatively evaluating the relative importance of each distribution. DDA can be easily incorporated into the framework of structural risk minimization to solve transfer learning problems. On the basis of DDA, we propose two novel learning algorithms: (1) Manifold Dynamic Distribution Adaptation (MDDA) for traditional transfer learning, and (2) Dynamic Distribution Adaptation Network (DDAN) for deep transfer learning. Extensive experiments demonstrate that MDDA and DDAN significantly improve transfer learning performance and establish a strong baseline over the latest deep and adversarial methods on digit recognition, sentiment analysis, and image classification. More importantly, we show that the marginal and conditional distributions contribute differently to the domain divergence, and that DDA provides a good quantitative estimate of their relative importance, which leads to better performance. We believe this observation can be helpful for future research in transfer learning.
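To make the idea concrete, below is a minimal, illustrative sketch of dynamic distribution adaptation: a marginal discrepancy and a class-conditional discrepancy between the source and target domains are combined with a balance factor mu that is re-estimated from the data rather than fixed by hand. The linear-kernel MMD estimator, the proportional rule for estimating mu, and all function names are assumptions introduced here for illustration; they are not the article's exact formulation.

    # Illustrative sketch only (assumed names and a simplified mu estimate);
    # it shows the dynamic weighting idea, not the article's implementation.
    import numpy as np

    def linear_mmd2(Xs, Xt):
        """Squared MMD with a linear kernel: ||mean(Xs) - mean(Xt)||^2."""
        return float(np.sum((Xs.mean(axis=0) - Xt.mean(axis=0)) ** 2))

    def dynamic_distribution_distance(Xs, ys, Xt, yt_pseudo, num_classes):
        """Return ((1 - mu) * D_marginal + mu * D_conditional, mu).

        Target labels are pseudo-labels predicted by the current classifier,
        as is common in unsupervised domain adaptation. Here mu is estimated
        as the share of the total discrepancy carried by the conditional
        term; this proportional rule is only a stand-in for the article's
        own estimator.
        """
        d_marg = linear_mmd2(Xs, Xt)

        d_cond, used = 0.0, 0
        for c in range(num_classes):
            Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
            if len(Xs_c) > 0 and len(Xt_c) > 0:
                d_cond += linear_mmd2(Xs_c, Xt_c)
                used += 1
        d_cond /= max(used, 1)

        mu = d_cond / (d_marg + d_cond + 1e-12)  # mu in [0, 1]
        return (1.0 - mu) * d_marg + mu * d_cond, mu

In a full training loop, such a combined discrepancy would act as a regularizer added to the labeled source loss under structural risk minimization, and mu would be re-estimated each iteration as the target pseudo-labels improve.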

          • Published in

            ACM Transactions on Intelligent Systems and Technology, Volume 11, Issue 1
            February 2020
            304 pages
            ISSN: 2157-6904
            EISSN: 2157-6912
            DOI: 10.1145/3375625

          Copyright © 2020 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 6 February 2020
          • Accepted: 1 September 2019
          • Revised: 1 July 2019
          • Received: 1 March 2019

          Qualifiers

          • research-article
          • Research
          • Refereed
