Abstract
The idea of Artificial Intelligence for Social Good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
Luciano Floridi and Josh Cowls contributed equally to this chapter.
Notes
1. While it is beyond present scope to adjudicate this for any particular case, it is important to acknowledge at the outset that in practice there is likely to be considerable disagreement and contention regarding what would constitute a socially good outcome.
2. This should not be taken as necessitating a utilitarian calculation: the beneficial impact of a given project may be “offset” by the violation of some categorical imperative. Therefore, even if an AI4SG project would do “more good than harm”, the harm may be ethically intolerable. In such a hypothetical case, one would not be morally obliged to develop and deploy the project in question.
3. As noted in the introduction, we cannot hope to document every single ethical consideration for a social good project, so even the least novel factors here are those that take on new relevance in the context of AI.
4. It is of course likely that in practice, an assessment of the safety of an AI system must also take into account wider societal values and cultural beliefs, for example, which may necessitate different trade-offs between critical requirements like safety and other, potentially competing, norms and expectations.
5. While, for the sake of simplicity, our focus is on minimising the spread of information used to predict an outcome, we do not intend to foreclose on the suggestion, offered in Prasad (2018), that in some cases a fairer approach may be to maximise the available information and hence “democratise” the ability to manipulate predictors.
6. For a discussion of the use of artificial intelligence in criminal acts more generally, see King et al. (2019).
7. The four remaining dimensions proposed by McFarlane—the source of the interruption, the method of expression, the channel of conveyance, and the human activity changed by the interruption—are not relevant for the purposes of this article.
8. Note that the significance of involving domain experts in the process was not merely to improve their experience as decision recipients, but also for their unparalleled knowledge of the domain, which the researchers drew upon in the system design, helping to provide them with what Pagallo (2015) calls a “preventive understanding” of the field.
9. There is no suggestion that this is the intended use.
References
“AI for Good Global Summit—28–31 May 2019, Geneva, Switzerland”. n.d. AI for good global summit. https://aiforgood.itu.int/. Accessed 12 Apr 2019.
Al-Abdulkarim, Latifa, Katie Atkinson, and Trevor Bench-Capon. 2015. Factors, issues and values: Revisiting reasoning with cases. In Proceedings of the 15th International Conference on Artificial Intelligence and Law, 3–12. ICAIL ’15. New York, NY, USA: ACM. https://doi.org/10.1145/2746090.2746103.
Banjo, Omotayo. 2018. Bias in maternal AI could hurt expectant Black mothers. Medium (blog). September 21, 2018. https://medium.com/theplug/bias-in-maternal-ai-could-hurt-expectant-black-mothers-e41893438da6.
Baum, Seth D. 2017. Social choice ethics in artificial intelligence. AI & Society: 1–12.
Bilgic, Mustafa, and Raymond Mooney. 2005. Explaining recommendations: Satisfaction vs. promotion.
Boutilier, Craig. 2002. A POMDP formulation of preference elicitation problems. In Proceedings of the National Conference on Artificial Intelligence, May.
Burgess, Matt. 2017. NHS DeepMind deal broke data protection law, regulator rules. Wired UK, July 3, 2017. https://www.wired.co.uk/article/google-deepmind-nhs-royal-free-ico-ruling.
Burns, Alistair, and Peter Rabins. 2000. Carer burden in dementia. International Journal of Geriatric Psychiatry 15 (S1): S9–S13.
Caliskan, Aylin, Joanna J. Bryson, and Arvind Narayanan. 2017. Semantics derived automatically from language corpora contain human-like biases. Science 356 (6334): 183–186. https://doi.org/10.1126/science.aal4230.
Carton, Samuel, Jennifer Helsby, Kenneth Joseph, Ayesha Mahmud, Youngsoo Park, Joe Walsh, Crystal Cody, CPT Estella Patterson, Lauren Haynes, and Rayid Ghani. 2016. Identifying police officers at risk of adverse events. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 67–76. KDD ’16. New York, NY, USA: ACM. https://doi.org/10.1145/2939672.2939698.
Centers for Disease Control and Prevention (CDC). 2019. Pregnancy Mortality Surveillance System | Maternal and Infant Health. January 16, 2019. https://www.cdc.gov/reproductivehealth/maternalinfanthealth/pregnancy-mortality-surveillance-system.htm.
Chajewska, Urszula, Daphne Koller, and Ronald Parr. 2000. Making rational decisions using adaptive utility elicitation. AAAI/IAAI: 363–369.
Chu, Yi, Young Chol Song, Richard Levinson, and Henry Kautz. 2012. Interactive activity recognition and prompting to assist people with cognitive disabilities. Journal of Ambient Intelligence and Smart Environments 4 (5): 443–459. https://doi.org/10.3233/AIS-2012-0168.
Crawford, Kate. 2016. Artificial intelligence’s White guy problem. The New York Times. June 25, 2016. https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html.
Dennis, Louise, Michael Fisher, Marija Slavkovik, and Matt Webster. 2016. Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems 77 (March): 1–14. https://doi.org/10.1016/j.robot.2015.11.012.
Eicher, Bobbie, Lalith Polepeddi, and Ashok Goel. 2017. Jill Watson doesn’t care if you’re pregnant: Grounding AI ethics in empirical studies. In AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, New Orleans, LA. Vol. 7.
Etzioni, Amitai. 1999. Enhancing privacy, preserving the common good. Hastings Center Report 29 (2): 14–23.
Faltings, Boi, Pearl Pu, Marc Torrens, and Paolo Viappiani. 2004. Designing example-critiquing interaction. In Proceedings of the 9th International Conference on Intelligent User Interfaces, 22–29. IUI ’04. New York, NY, USA: ACM. https://doi.org/10.1145/964442.964449.
Fang, Fei, Thanh H. Nguyen, Rob Pickles, Wai Y. Lam, Gopalasamy R. Clements, Bo An, Amandeep Singh, Milind Tambe, and Andrew Lemieux. 2016. Deploying PAWS: Field optimization of the protection assistant for wildlife security. In Twenty-Eighth IAAI Conference. https://www.aaai.org/ocs/index.php/IAAI/IAAI16/paper/view/11814.
Floridi, Luciano. 2012. Distributed morality in an information society. Science and Engineering Ethics 19 (3): 727–743. https://doi.org/10.1007/s11948-012-9413-4.
———. 2016. On human dignity as a foundation for the right to privacy. Philosophy & Technology 29 (4): 307–312. https://doi.org/10.1007/s13347-016-0220-8.
———. 2017. The logic of design as a conceptual logic of information. Minds and Machines 27 (3): 495–519. https://doi.org/10.1007/s11023-017-9438-1.
———. 2018. Semantic capital: Its nature, value, and curation. Philosophy & Technology 31: 481–497. https://doi.org/10.1007/s13347-018-0335-1.
Floridi, Luciano, and Josh Cowls. 2019. A unified framework of five principles for AI in society. Harvard Data Science Review 1 (1). https://doi.org/10.1162/99608f92.8cd550d1.
Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, and Francesca Rossi. 2018. AI4People—An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines 28 (4): 689–707.
Friedman, Batya, and Helen Nissenbaum. 1996. Bias in computer systems. ACM Transactions on Information Systems 14: 330–347. https://doi.org/10.1145/230538.230561.
Ghani, Rayid. 2016. You say you want transparency and interpretability? Rayid Ghani (blog). April 29, 2016. http://www.rayidghani.com/you-say-you-want-transparency-and-interpretability.
Goel, Ashok, Brian Creeden, Mithun Kumble, Shanu Salunke, Abhinaya Shetty, and Bryan Wiltgen. 2015. Using Watson for enhancing human-computer co-creativity. In 2015 AAAI Fall Symposium Series.
Goodhart, Charles. 1975. Problems of monetary management: The U.K. experience. Papers in monetary economics. Sydney: Reserve Bank of Australia.
Gregor, Shirley, and Izak Benbasat. 1999. Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS Quarterly 23 (December): 497–530. https://doi.org/10.2307/249487.
Hager, Gregory D., Ann Drobnis, Fei Fang, Rayid Ghani, Amy Greenwald, Terah Lyons, David C. Parkes, et al. 2017. Artificial intelligence for social good. Computing Community Consortium (CCC) workshop report.
Haque, Albert, Michelle Guo, Alexandre Alahi, Serena Yeung, Zelun Luo, Alisha Rege, Jeffrey Jopling, et al. 2017. Towards vision-based smart hospitals: A system for tracking and monitoring hand hygiene compliance. August. https://arxiv.org/abs/1708.00163v3.
Henry, Katharine E., David N. Hager, Peter J. Pronovost, and Suchi Saria. 2015. A Targeted Real-Time Early Warning Score (TREWScore) for septic shock. Science Translational Medicine 7 (299): 299ra122. https://doi.org/10.1126/scitranslmed.aab3719.
Herlocker, Jonathan L., Joseph A. Konstan, and John Riedl. 2000. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, 241–250. ACM.
Kaye, Jane, Edgar A. Whitley, David Lund, Michael Morrison, Harriet Teare, and Karen Melham. 2015. Dynamic consent: A patient interface for twenty-first century research networks. European Journal of Human Genetics 23 (2): 141–146. https://doi.org/10.1038/ejhg.2014.71.
Kerr, Ian R. 2003. Bots, babes and the Californication of commerce. University of Ottawa Law and Technology Journal 1 (January).
King, Thomas C., Nikita Aggarwal, Mariarosaria Taddeo, and Luciano Floridi. 2019. Artificial intelligence crime: An interdisciplinary analysis of foreseeable threats and solutions. Science and Engineering Ethics. https://doi.org/10.1007/s11948-018-00081-0.
Lakkaraju, Himabindu, Everaldo Aguiar, Carl Shan, David Miller, Nasir Bhanpuri, Rayid Ghani, and Kecia L. Addison. 2015. A machine learning framework to identify students at risk of adverse academic outcomes. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1909–1918. ACM.
Lu, Haonan, Mubarik Arshad, Andrew Thornton, Giacomo Avesani, Paula Cunnea, Ed Curry, Fahdi Kanavati, et al. 2019. A mathematical-descriptor of tumor-mesoscopic-structure from computed-tomography images annotates prognostic- and molecular-phenotypes of epithelial ovarian cancer. Nature Communications 10 (1): 764. https://doi.org/10.1038/s41467-019-08718-9.
Lum, Kristian, and William Isaac. 2016. To predict and serve? Significance 13 (5): 14–19. https://doi.org/10.1111/j.1740-9713.2016.00960.x.
Lynskey, Orla. 2015. The foundations of EU data protection law, Oxford Studies in European Law. Oxford: Oxford University Press.
Manheim, David, and Scott Garrabrant. 2019. Categorizing variants of Goodhart’s law. ArXiv:1803.04585 [Cs, q-Fin, Stat], February. http://arxiv.org/abs/1803.04585.
Martinez-Miranda, Juan, and Arantza Aldea. 2005. Emotions in human and artificial intelligence. Computers in Human Behavior 21 (2): 323–341. https://doi.org/10.1016/j.chb.2004.02.010.
McFarlane, Daniel. 1999. Interruption of people in human-computer interaction: A general unifying definition of human interruption and taxonomy. August.
McFarlane, Daniel, and Kara Latorella. 2002. The scope and importance of human interruption in human-computer interaction design. Human-Computer Interaction 17 (March): 1–61. https://doi.org/10.1207/S15327051HCI1701_1.
Mohanty, Suchitra, and Rahul Bhatia. 2017. Indian Court’s privacy ruling is blow to government. Reuters, August 25, 2017. https://www.reuters.com/article/us-india-court-privacy-idUSKCN1B40CE.
Moore, Jared. 2019. AI for not bad. Frontiers in Big Data 2 (32). https://doi.org/10.3389/fdata.2019.00032.
Neff, Gina, and Peter Nagy. 2016. Talking to bots: Symbiotic agency and the case of Tay. International Journal of Communication 10 (October): 4915–4931.
Nijhawan, Lokesh P., Manthan Janodia, Muddu Krishna, Kishore Bhat, Laxminarayana Bairy, Nayanabhirama Udupa, and Prashant Musmade. 2013. Informed consent: Issues and challenges. Journal of Advanced Pharmaceutical Technology & Research 4 (3). https://doi.org/10.4103/2231-4040.116779.
Nissenbaum, Helen. 2009. Privacy in context: Technology, policy, and the integrity of social life. Stanford: Stanford University Press.
———. 2011. A contextual approach to privacy online. Daedalus 140 (4): 32–48.
Pagallo, Ugo. 2015. Good onlife governance: On law, spontaneous orders, and design. In The Onlife Manifesto: Being human in a hyperconnected era, ed. Luciano Floridi, 161–177. Cham: Springer. https://doi.org/10.1007/978-3-319-04093-6_18.
———. 2017. From automation to autonomous systems: A legal phenomenology with problems of accountability. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17), 17–23.
Pedreshi, Dino, Salvatore Ruggieri, and Franco Turini. 2008. Discrimination-aware data mining. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 560–568. ACM. https://doi.org/10.1145/1401890.1401959.
Prasad, Mahendra. 2018. Social choice and the value alignment problem. In Artificial intelligence safety and security, 291–314. New York: Chapman and Hall/CRC.
Price, W. Nicholson, and I. Glenn Cohen. 2019. Privacy in the age of medical big data. Nature Medicine 25 (1): 37. https://doi.org/10.1038/s41591-018-0272-7.
Reed, Chris. 2018. How should we regulate artificial intelligence? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376 (2128): 20170360.
Ross, Casey, and Ike Swetlitz. 2017. IBM pitched Watson as a revolution in cancer care. It’s nowhere close. STAT. September 5, 2017. https://www.statnews.com/2017/09/05/watson-ibm-cancer/.
“Royal Free—Google DeepMind Trial Failed to Comply with Data Protection Law”. 2017. Information Commissioner’s Office. July 3, 2017. https://ico.org.uk/about-the-ico/news-and-events/news-and-blogs/2017/07/royal-free-google-deepmind-trial-failed-to-comply-with-data-protection-law/.
Shortliffe, Edward H., and Bruce G. Buchanan. 1975. A model of inexact reasoning in medicine. Mathematical Biosciences 23 (3): 351–379. https://doi.org/10.1016/0025-5564(75)90047-4.
Solove, Daniel J. 2008. Understanding privacy. Vol. 173. Cambridge: Harvard University Press.
Strathern, Marilyn. 1997. ‘Improving ratings’: Audit in the British University System. European Review 5 (3): 305–321. https://doi.org/10.1002/(SICI)1234-981X(199707)5:3<305::AID-EURO184>3.0.CO;2-4.
Strickland, Eliza. 2019. How IBM Watson overpromised and underdelivered on AI health care. IEEE Spectrum: Technology, Engineering, and Science News. February 4, 2019. https://spectrum.ieee.org/biomedical/diagnostics/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care.
Swearingen, Kirsten, and Rashmi Sinha. 2002. Interaction design for recommender systems. Designing Interactive Systems 6: 312–334.
Tabuchi, Hiroko, and David Gelles. 2019. Doomed Boeing jets lacked 2 safety features that company sold only as extras. The New York Times, March 21, 2019, sec. Business. https://www.nytimes.com/2019/03/21/business/boeing-safety-features-charge.html.
Taddeo, Mariarosaria. 2015. The struggle between liberties and authorities in the information age. Science and Engineering Ethics 21 (5): 1125–1138. https://doi.org/10.1007/s11948-014-9586-0.
———. 2017. Trusting digital technologies correctly. Minds and Machines 27 (4): 565–568.
Taddeo, Mariarosaria, and Luciano Floridi. 2011. The case for e-trust. Ethics and Information Technology 13 (1): 1–3.
———. 2015. The debate on the moral responsibilities of online service providers. Science and Engineering Ethics. https://doi.org/10.1007/s11948-015-9734-1.
———. 2018a. How AI can be a force for good. Science 361 (6404): 751–752.
———. 2018b. Regulate artificial intelligence to avert cyber arms race. Nature 556 (7701): 296. https://doi.org/10.1038/d41586-018-04602-6.
Taylor, Linnet, and Dennis Broeders. 2015. In the name of development: Power, profit and the datafication of the global south. Geoforum 64: 229–237.
The Economist. 2014. Waiting on hold—Ebola and big data. October 27, 2014. https://www.economist.com/science-and-technology/2014/10/27/waiting-on-hold.
Thelisson, Eva, Kirtan Padh, and L. Elisa Celis. 2017. Regulatory Mechanisms and algorithms towards trust in AI/ML. In Proceedings of the IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), Melbourne, Australia.
Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi. 2016. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. SSRN Scholarly Paper ID 2903469. Rochester: Social Science Research Network. https://papers.ssrn.com/abstract=2903469.
———. 2017. Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law 7 (2): 76–99.
Wang, Yilun, and Michal Kosinski. 2018. Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. Journal of Personality and Social Psychology 114 (2): 246.
Watson, David S., Jenny Krutzinna, Ian N. Bruce, Christopher E.M. Griffiths, Iain B. McInnes, Michael R. Barnes, and Luciano Floridi. 2019. Clinical applications of machine learning algorithms: Beyond the black box. BMJ 364 (March): l886. https://doi.org/10.1136/bmj.l886.
White, Geoff. 2018. Child advice chatbots fail sex abuse test. BBC News, December 11, 2018, sec. Technology. https://www.bbc.com/news/technology-46507900.
Yadav, Amulya, Hau Chan, Albert Jiang, Eric Rice, Ece Kamar, Barbara Grosz, and Milind Tambe. 2016a. POMDPs for assisting homeless shelters—Computational and deployment challenges. In Autonomous agents and multiagent systems, Lecture Notes in Computer Science, ed. Nardine Osman and Carles Sierra, 67–87. Springer.
Yadav, Amulya, Hau Chan, Albert Xin Jiang, Haifeng Xu, Eric Rice, and Milind Tambe. 2016b. Using social networks to aid homeless shelters: dynamic influence maximization under uncertainty. In Proceedings of the 2016 International Conference on Autonomous Agents & Multiagent Systems, 740–748. International Foundation for Autonomous Agents and Multiagent Systems.
Yadav, Amulya, Bryan Wilder, Eric Rice, Robin Petering, Jaih Craddock, Amanda Yoshioka-Maxwell, Mary Hemler, Laura Onasch-Vera, Milind Tambe, and Darlene Woo. 2018. Bridging the gap between theory and practice in influence maximization: Raising awareness about HIV among homeless youth. IJCAI: 5399–5403.
Yang, Guang-Zhong, Jim Bellingham, Pierre E. Dupont, Peer Fischer, Luciano Floridi, Robert Full, Neil Jacobstein, et al. 2018. The grand challenges of science robotics. Science Robotics 3 (14): eaar7650. https://doi.org/10.1126/scirobotics.aar7650.
Zhou, Wei, and Gaurav Kapoor. 2011. Detecting evolutionary financial statement fraud. Decision Support Systems, On Quantitative Methods for Detection of Financial Fraud 50 (3): 570–575. https://doi.org/10.1016/j.dss.2010.08.007.
Funding
Floridi’s and Taddeo’s work was supported by the Privacy and Trust Stream (Social lead of the PETRAS Internet of Things research hub, which is funded by the Engineering and Physical Sciences Research Council (EPSRC), grant agreement no. EP/N023013/1) and by the Oxford Initiative on AI for SDG, which is also supported by grants from Facebook, Google, and Microsoft. Cowls is the recipient of a Doctoral Studentship from the Alan Turing Institute. King’s work was supported by a grant from Google UK Limited.
Appendix: Representative AI4SG Examples
In the table below, we list the seven initiatives from our wider sample that are especially representative in terms of scope, variety, and impact, and of their potential to evince the factors that should characterise the design of AI4SG projects. The final column indicates the factor(s) identified through our analysis of each project.
| # | Name | Reference(s) | Area(s) | Relevant factor(s) |
|---|------|--------------|---------|--------------------|
| A | Field Optimization of the Protection Assistant for Wildlife Security | Fang et al. (2016) | Environmental sustainability | 1), 3) |
| B | Identifying Students at Risk of Adverse Academic Outcomes | Lakkaraju et al. (2015) | Education | 4) |
| C | Health Information for Homeless Youth to Reduce the Spread of HIV | Yadav et al. (2016a, b, 2018) | Poverty, public welfare, public health | 4) |
| D | Interactive activity recognition and prompting to assist people with cognitive disabilities | Chu et al. (2012) | Disability, public health | 3), 4), 7) |
| E | Virtual teaching assistant experiment | Eicher et al. (2017) | Education | 4), 6) |
| F | Detecting evolutionary financial statement fraud | Zhou and Kapoor (2011) | Finance, crime | 2) |
| G | Tracking and monitoring hand hygiene compliance | Haque et al. (2017) | Health | 5) |
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this chapter
Cite this chapter
Floridi, L., Cowls, J., King, T.C., Taddeo, M. (2021). How to Design AI for Social Good: Seven Essential Factors. In: Floridi, L. (ed.) Ethics, Governance, and Policies in Artificial Intelligence. Philosophical Studies Series, vol 144. Springer, Cham. https://doi.org/10.1007/978-3-030-81907-1_9
DOI: https://doi.org/10.1007/978-3-030-81907-1_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-81906-4
Online ISBN: 978-3-030-81907-1