
2020 | OriginalPaper | Chapter

Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks

Authors : Mohammad Naiseh, Nan Jiang, Jianbing Ma, Raian Ali

Published in: Research Challenges in Information Science

Publisher: Springer International Publishing


Abstract

With the increase in data volume, velocity and types, intelligent human-agent systems have become popular and are adopted in different application domains, including critical and sensitive areas such as health and security. Humans' trust, consent and receptiveness to recommendations are the main requirements for the success of such services. Recently, the demand for explaining recommendations to humans has increased, both from humans interacting with these systems, so that they can make informed decisions, and from owners and system managers, who seek to increase transparency and, consequently, trust and user retention. Existing systematic reviews in the area of explainable recommendations have focused on the goal of providing explanations, their presentation and their informational content. In this paper, we review the literature with a focus on two user-experience facets of explanations: delivery methods and modalities. We then focus on the risks of explanations, both to user experience and to decision making. Our review revealed that explanation delivery to end-users is mostly designed to accompany the recommendation in push and pull styles, while archiving explanations for later accountability and traceability is still limited. We also found that the emphasis has mainly been on the benefits of recommendations, while risks and potential concerns, such as over-reliance on machines, remain a new area to explore.


Literature

1. Al-Taie, M.Z., Kadry, S.: Visualization of explanations in recommender systems. J. Adv. Manag. Sci. 2(2), 140–144 (2014)
2. Andreou, A., Venkatadri, G., Goga, O., Gummadi, K., Loiseau, P., Mislove, A.: Investigating ad transparency mechanisms in social media: a case study of Facebook’s explanations (2018)
3. Arioua, A., Buche, P., Croitoru, M.: Explanatory dialogues with argumentative faculties over inconsistent knowledge bases. Expert Syst. Appl. 80, 244–262 (2017)
5. Barria-Pineda, J., Akhuseyinoglu, K., Brusilovsky, P.: Explaining need-based educational recommendations using interactive open learner models. In: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, pp. 273–277. ACM (2019)
6. Binns, R., Van Kleek, M., Veale, M., Lyngs, U., Zhao, J., Shadbolt, N.: ‘It’s reducing a human being to a percentage’: perceptions of justice in algorithmic decisions. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 377. ACM (2018)
7. Biran, O., McKeown, K.R.: Human-centric justification of machine learning predictions. In: IJCAI, pp. 1461–1467 (2017)
8. Blake, J.N., Kerr, D.V., Gammack, J.G.: Streamlining patient consultations for sleep disorders with a knowledge-based CDSS. Inf. Syst. 56, 109–119 (2016)
9. Bostandjiev, S., O’Donovan, J., Höllerer, T.: TasteWeights: a visual interactive hybrid recommender system. In: Proceedings of the Sixth ACM Conference on Recommender Systems, pp. 35–42. ACM (2012)
10. Brooks, M., Amershi, S., Lee, B., Drucker, S.M., Kapoor, A., Simard, P.: FeatureInsight: visual support for error-driven feature ideation in text classification. In: 2015 IEEE Conference on Visual Analytics Science and Technology (VAST), pp. 105–112. IEEE (2015)
11. Bunt, A., Lount, M., Lauzon, C.: Are explanations always important?: A study of deployed, low-cost intelligent interactive systems. In: Proceedings of the 2012 ACM International Conference on Intelligent User Interfaces, pp. 169–178. ACM (2012)
12. Bussone, A., Stumpf, S., O’Sullivan, D.: The role of explanations on trust and reliance in clinical decision support systems. In: 2015 International Conference on Healthcare Informatics, pp. 160–169. IEEE (2015)
13. Cai, C.J., Jongejan, J., Holbrook, J.: The effects of example-based explanations in a machine learning interface. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 258–262. ACM (2019)
14. Chromik, M., Eiband, M., Völkel, S.T., Buschek, D.: Dark patterns of explainability, transparency, and user control for intelligent systems. In: IUI Workshops (2019)
15. Coba, L., Zanker, M., Rook, L., Symeonidis, P.: Exploring users’ perception of collaborative explanation styles. In: 2018 IEEE 20th Conference on Business Informatics (CBI), vol. 1, pp. 70–78. IEEE (2018)
16. Díaz-Agudo, B., Recio-Garcia, J.A., Jimenez-Díaz, G.: Data explanation with CBR. In: ICCBR 2018, p. 64 (2018)
17. Dodge, J., Liao, Q.V., Zhang, Y., Bellamy, R.K., Dugan, C.: Explaining models: an empirical study of how explanations impact fairness judgment. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 275–285. ACM (2019)
18. Dominguez, V., Messina, P., Donoso-Guzmán, I., Parra, D.: The effect of explanations and algorithmic accuracy on visual recommender systems of artistic images. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 408–416. ACM (2019)
19. Du Toit, E.: Constructive feedback as a learning tool to enhance students’ self-regulation and performance in higher education. Perspect. Educ. 30(2), 32–40 (2012)
20. Ehrlich, K., Kirk, S.E., Patterson, J., Rasmussen, J.C., Ross, S.I., Gruen, D.M.: Taking advice from intelligent systems: the double-edged sword of explanations. In: Proceedings of the 16th International Conference on Intelligent User Interfaces, pp. 125–134. ACM (2011)
21. Eiband, M., Buschek, D., Kremer, A., Hussmann, H.: The impact of placebic explanations on trust in intelligent systems. In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, p. LBW0243. ACM (2019)
22. Eiband, M., Schneider, H., Buschek, D.: Normative vs. pragmatic: two perspectives on the design of explanations in intelligent systems. In: IUI Workshops (2018)
23. Eiband, M., Völkel, S.T., Buschek, D., Cook, S., Hussmann, H.: When people and algorithms meet: user-reported problems in intelligent everyday applications. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 96–106. ACM (2019)
24. Elahi, M., Ge, M., Ricci, F., Fernández-Tobías, I., Berkovsky, S., David, M.: Interaction design in a mobile food recommender system. In: CEUR Workshop Proceedings, CEUR-WS (2015)
25. Eslami, M., Krishna Kumaran, S.R., Sandvig, C., Karahalios, K.: Communicating algorithmic process in online behavioral advertising. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 432. ACM (2018)
26. Galindo, J.A., Dupuy-Chessa, S., Mandran, N., Céret, E.: Using user emotions to trigger UI adaptation. In: 2018 12th International Conference on Research Challenges in Information Science (RCIS), pp. 1–11. IEEE (2018)
27. Gedikli, F., Jannach, D., Ge, M.: How should I explain? A comparison of different explanation types for recommender systems. Int. J. Hum. Comput. Stud. 72(4), 367–382 (2014)
28. Goodman, B., Flaxman, S.: EU regulations on algorithmic decision-making and a ‘right to explanation’. In: ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York (2016)
29. Gretarsson, B., O’Donovan, J., Bostandjiev, S., Hall, C., Höllerer, T.: SmallWorlds: visualizing social recommendations. In: Computer Graphics Forum, vol. 29, pp. 833–842. Wiley Online Library (2010)
30. Gutiérrez, F., Charleer, S., De Croon, R., Htun, N.N., Goetschalckx, G., Verbert, K.: Explaining and exploring job recommendations: a user-driven approach for interacting with knowledge-based job recommender systems. In: Proceedings of the 13th ACM Conference on Recommender Systems, pp. 60–68 (2019)
31. Hagras, H.: Toward human-understandable, explainable AI. Computer 51(9), 28–36 (2018)
32. ter Hoeve, M., Heruer, M., Odijk, D., Schuth, A., de Rijke, M.: Do news consumers want explanations for personalized news rankings? In: FATREC Workshop on Responsible Recommendation Proceedings (2017)
33. Holliday, D., Wilson, S., Stumpf, S.: The effect of explanations on perceived control and behaviors in intelligent systems. In: CHI 2013 Extended Abstracts on Human Factors in Computing Systems, pp. 181–186. ACM (2013)
34. Hosseini, M., Shahri, A., Phalp, K., Taylor, J., Ali, R.: Crowdsourcing: a taxonomy and systematic mapping study. Comput. Sci. Rev. 17, 43–69 (2015)
35. Hu, J., Zhang, Z., Liu, J., Shi, C., Yu, P.S., Wang, B.: RecExp: a semantic recommender system with explanation based on heterogeneous information network. In: Proceedings of the 10th ACM Conference on Recommender Systems, pp. 401–402. ACM (2016)
36. Huang, S.H., Bhatia, K., Abbeel, P., Dragan, A.D.: Establishing appropriate trust via critical states. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 3929–3936. IEEE (2018)
37. Hussein, T., Neuhaus, S.: Explanation of spreading activation based recommendations. In: Proceedings of the 1st International Workshop on Semantic Models for Adaptive Interactive Systems, SEMAIS, vol. 10, pp. 24–28. Citeseer (2010)
39. Karga, S., Satratzemi, M.: Using explanations for recommender systems in learning design settings to enhance teachers’ acceptance and perceived experience. Educ. Inf. Technol. 24, 1–22 (2019)
40. Katarya, R., Jain, I., Hasija, H.: An interactive interface for instilling trust and providing diverse recommendations. In: 2014 International Conference on Computer and Communication Technology (ICCCT), pp. 17–22. IEEE (2014)
41. Kleinerman, A., Rosenfeld, A., Kraus, S.: Providing explanations for recommendations in reciprocal environments. In: Proceedings of the 12th ACM Conference on Recommender Systems, pp. 22–30. ACM (2018)
42. Knijnenburg, B.P., Kobsa, A.: Making decisions about privacy: information disclosure in context-aware recommender systems. ACM Trans. Interact. Intell. Syst. (TiiS) 3(3), 20 (2013)
43. Krause, J., Perer, A., Bertini, E.: A user study on the effect of aggregating explanations for interpreting machine learning models. In: ACM KDD Workshop on Interactive Data Exploration and Analytics (2018)
44. Kroll, J.A., Barocas, S., Felten, E.W., Reidenberg, J.R., Robinson, D.G., Yu, H.: Accountable algorithms. U. Pa. L. Rev. 165, 633 (2016)
45. Kulesza, T., Burnett, M., Wong, W.K., Stumpf, S.: Principles of explanatory debugging to personalize interactive machine learning. In: Proceedings of the 20th International Conference on Intelligent User Interfaces, pp. 126–137. ACM (2015)
46. Kulesza, T., Stumpf, S., Burnett, M., Kwan, I.: Tell me more?: The effects of mental model soundness on personalizing an intelligent agent. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1–10. ACM (2012)
47. Kulesza, T., Stumpf, S., Burnett, M., Yang, S., Kwan, I., Wong, W.K.: Too much, too little, or just right? Ways explanations impact end users’ mental models. In: 2013 IEEE Symposium on Visual Languages and Human Centric Computing, pp. 3–10. IEEE (2013)
48. Lai, V., Tan, C.: On human predictions with explanations and predictions of machine learning models: a case study on deception detection, pp. 29–38 (2019)
49. Lamche, B., Adıgüzel, U., Wörndl, W.: Interactive explanations in mobile shopping recommender systems. In: Joint Workshop on Interfaces and Human Decision Making in Recommender Systems, p. 14 (2014)
50. Langley, P., Meadows, B., Sridharan, M., Choi, D.: Explainable agency for intelligent autonomous systems. In: Twenty-Ninth IAAI Conference (2017)
51. Le Bras, P., Robb, D.A., Methven, T.S., Padilla, S., Chantler, M.J.: Improving user confidence in concept maps: exploring data driven explanations. In: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 404. ACM (2018)
52. Leon, P.G., Cranshaw, J., Cranor, L.F., Graves, J., Hastak, M., Xu, G.: What do online behavioral advertising disclosures communicate to users? (CMU-CyLab-12-008) (2012)
53. Lepri, B., Oliver, N., Letouzé, E., Pentland, A., Vinck, P.: Fair, transparent, and accountable algorithmic decision-making processes. Philos. Technol. 31(4), 611–627 (2018)
54. Li, T., Convertino, G., Tayi, R.K., Kazerooni, S.: What data should I protect?: Recommender and planning support for data security analysts. In: IUI, pp. 286–297 (2019)
55. Lim, B.Y., Dey, A.K.: Assessing demand for intelligibility in context-aware applications. In: Proceedings of the 11th International Conference on Ubiquitous Computing, pp. 195–204. ACM (2009)
56. Loepp, B., Herrmanny, K., Ziegler, J.: Blended recommending: integrating interactive information filtering and algorithmic recommender techniques. In: Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 975–984. ACM (2015)
57. Millecamp, M., Htun, N.N., Conati, C., Verbert, K.: To explain or not to explain: the effects of personal characteristics when explaining music recommendations. In: IUI, pp. 397–407 (2019)
58. Miller, T.: Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38 (2018)
59. Moher, D., Liberati, A., Tetzlaff, J., Altman, D.G.: Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann. Intern. Med. 151(4), 264–269 (2009)
61. Naiseh, M., Jiang, N., Ma, J., Ali, R.: Personalising explainable recommendations: literature and conceptualisation. In: WorldCist 2020 - 8th World Conference on Information Systems and Technologies. Springer, Heidelberg (2020)
62. Narayanan, M., Chen, E., He, J., Kim, B., Gershman, S., Doshi-Velez, F.: How do humans understand explanations from machine learning systems? An evaluation of the human-interpretability of explanation (2018)
64. Nunes, I., Jannach, D.: A systematic review and taxonomy of explanations in decision support and recommender systems. User Model. User-Adap. Inter. 27(3–5), 393–444 (2017)
65. Paraschakis, D.: Towards an ethical recommendation framework. In: 2017 11th International Conference on Research Challenges in Information Science (RCIS), pp. 211–220. IEEE (2017)
66. Parra, D., Brusilovsky, P., Trattner, C.: See what you want to see: visual user-driven approach for hybrid recommendation. In: Proceedings of the 19th International Conference on Intelligent User Interfaces, pp. 235–240. ACM (2014)
67. Poursabzi-Sangdeh, F., Goldstein, D.G., Hofman, J.M., Vaughan, J.W., Wallach, H.: Manipulating and measuring model interpretability (2018)
68. Ramachandran, D., et al.: A TV program discovery dialog system using recommendations. In: Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pp. 435–437 (2015)
69. Rosenfeld, A., Richardson, A.: Explainability in human-agent systems. Auton. Agent. Multi-Agent Syst. 33(6), 673–705 (2019)
70. Ruiz-Iniesta, A., Melgar, L., Baldominos, A., Quintana, D.: Improving children’s experience on a mobile EdTech platform through a recommender system. Mob. Inf. Syst. 2018 (2018)
71. Samek, W., Wiegand, T., Müller, K.R.: Explainable artificial intelligence: understanding, visualizing and interpreting deep learning models (2017)
72. Sato, M., Ahsan, B., Nagatani, K., Sonoda, T., Zhang, Q., Ohkuma, T.: Explaining recommendations using contexts. In: 23rd International Conference on Intelligent User Interfaces, pp. 659–664. ACM (2018)
73. Schäfer, H., et al.: Towards health (aware) recommender systems. In: Proceedings of the 2017 International Conference on Digital Health, pp. 157–161. ACM (2017)
74. Schaffer, J., Giridhar, P., Jones, D., Höllerer, T., Abdelzaher, T., O’Donovan, J.: Getting the message?: A study of explanation interfaces for microblog data analysis. In: Proceedings of the 20th International Conference on Intelligent User Interfaces, pp. 345–356. ACM (2015)
75. Schaffer, J., O’Donovan, J., Michaelis, J., Raglin, A., Höllerer, T.: I can do better than your AI: expertise and explanations. In: IUI, pp. 240–251 (2019)
76. Springer, A., Whittaker, S.: Progressive disclosure: empirically motivated approaches to designing effective transparency, pp. 107–120 (2019)
77. Stumpf, S., et al.: Interacting meaningfully with machine learning systems: three experiments. Int. J. Hum. Comput. Stud. 67(8), 639–662 (2009)
78. Stumpf, S., Skrebe, S., Aymer, G., Hobson, J.: Explaining smart heating systems to discourage fiddling with optimized behavior. In: CEUR Workshop Proceedings, vol. 2068 (2018)
79.
go back to reference Svrcek, M., Kompan, M., Bielikova, M.: Towards understandable personalized recommendations: hybrid explanations. Comput. Sci. Inf. Syst. 16(1), 179–203 (2019)CrossRef Svrcek, M., Kompan, M., Bielikova, M.: Towards understandable personalized recommendations: hybrid explanations. Comput. Sci. Inf. Syst. 16(1), 179–203 (2019)CrossRef
80.
go back to reference Tamagnini, P., Krause, J., Dasgupta, A., Bertini, E.: Interpreting black-box classifiers using instance-level visual explanations. In: Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, p. 6. ACM (2017) Tamagnini, P., Krause, J., Dasgupta, A., Bertini, E.: Interpreting black-box classifiers using instance-level visual explanations. In: Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, p. 6. ACM (2017)
81.
go back to reference Tsai, C.H., Brusilovsky, P.: Providing control and transparency in a social recommender system for academic conferences. In: Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization, pp. 313–317. ACM (2017) Tsai, C.H., Brusilovsky, P.: Providing control and transparency in a social recommender system for academic conferences. In: Proceedings of the 25th Conference on User Modeling, Adaptation and Personalization, pp. 313–317. ACM (2017)
82.
go back to reference Tsai, C.H., Brusilovsky, P.: Explaining recommendations in an interactive hybrid social recommender. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 391–396. ACM (2019) Tsai, C.H., Brusilovsky, P.: Explaining recommendations in an interactive hybrid social recommender. In: Proceedings of the 24th International Conference on Intelligent User Interfaces, pp. 391–396. ACM (2019)
83.
go back to reference Verbert, K., Parra, D., Brusilovsky, P., Duval, E.: Visualizing recommendations to support exploration, transparency and controllability. In: Proceedings of the 2013 International Conference on Intelligent User Interfaces, pp. 351–362. ACM (2013) Verbert, K., Parra, D., Brusilovsky, P., Duval, E.: Visualizing recommendations to support exploration, transparency and controllability. In: Proceedings of the 2013 International Conference on Intelligent User Interfaces, pp. 351–362. ACM (2013)
84.
go back to reference Wiebe, M., Geiskkovitch, D.Y., Bunt, A.: Exploring user attitudes towards different approaches to command recommendation in feature-rich software. In: Proceedings of the 21st International Conference on Intelligent User Interfaces, pp. 43–47. ACM (2016) Wiebe, M., Geiskkovitch, D.Y., Bunt, A.: Exploring user attitudes towards different approaches to command recommendation in feature-rich software. In: Proceedings of the 21st International Conference on Intelligent User Interfaces, pp. 43–47. ACM (2016)
85.
go back to reference Zanker, M., Ninaus, D.: Knowledgeable explanations for recommender systems. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, vol. 1, pp. 657–660. IEEE (2010) Zanker, M., Ninaus, D.: Knowledgeable explanations for recommender systems. In: 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, vol. 1, pp. 657–660. IEEE (2010)
86.
go back to reference Zanker, M., Schoberegger, M.: An empirical study on the persuasiveness of fact-based explanations for recommender systems. In: Joint Workshop on Interfaces and Human Decision Making in Recommender Systems, vol. 1253, pp. 33–36 (2014) Zanker, M., Schoberegger, M.: An empirical study on the persuasiveness of fact-based explanations for recommender systems. In: Joint Workshop on Interfaces and Human Decision Making in Recommender Systems, vol. 1253, pp. 33–36 (2014)
87.
go back to reference Zhao, G., et al.: Personalized reason generation for explainable song recommendation. ACM Trans. Intell. Syst. Technol. (TIST) 10(4), 41 (2019) Zhao, G., et al.: Personalized reason generation for explainable song recommendation. ACM Trans. Intell. Syst. Technol. (TIST) 10(4), 41 (2019)
Metadata
Title: Explainable Recommendations in Intelligent Systems: Delivery Methods, Modalities and Risks
Authors: Mohammad Naiseh, Nan Jiang, Jianbing Ma, Raian Ali
Copyright Year: 2020
DOI: https://doi.org/10.1007/978-3-030-50316-1_13
