
Toward an Ethics of AI Assistants: an Initial Framework

  • Research Article
  • Published in Philosophy & Technology

Abstract

Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.

Notes

  1. The quotes come from John McCarthy, ‘What is Artificial Intelligence? Basic Questions’, available at http://www-formal.stanford.edu/jmc/whatisai/node1.html. Note that this quote and the quote from Russell and Norvig were originally sourced through Scherer 2016.

  2. Indeed, this literature is explicitly invoked by many of the critics of AI assistance, e.g. Carr 2014, Krakauer 2016, and Crawford 2015.

  3. A reviewer wonders, for example, why I do not discuss the consequences of using AI assistants to outsource moral decision-making. There are several reasons for this. The most pertinent is that I have discussed moral outsourcing as a specific problem in another paper (Danaher 2016b) and, as I point out in that paper, I suspect discussions of moral outsourcing to AI will raise similar issues to those already discussed in the expansive literature on the use of enhancement technologies to improve moral decision-making (for a similar analysis, coupled with a defence of the use of AI moral assistance, see Giubilini and Savulescu 2018). That said, some of what I say below about degeneration, autonomy and interpersonal virtue will also be relevant to debates about the use of moral AI assistance.

  4. I am indebted to Miles Brundage for suggesting this line of argument to me. We write about it in more detail on my webpage: https://philosophicaldisquisitions.blogspot.com/2017/05/cognitive-scarcity-and-artificial.html

  5. I am indebted to an anonymous reviewer for suggesting the distinction between personalization and manipulation. As they pointed out, personalization also has costs, e.g. a filter bubble that reinforces prejudices, which may not be desirable in a pluralistic, democratic society, but it is not clear that those problems are best understood in terms of a threat to autonomy. Cass Sunstein’s #Republic (2017) explores the political fallout of filter bubbles in more detail.

  6. As Hare and Vincent point out, while humans may be bad at predicting whether a future option will make us happy, our judgment as to whether a chosen option has made us happy is, effectively, incorrigible. Nobody knows better than ourselves. It is to this latter type of judgment that I appeal in this argument.

  7. As a reviewer points out, it may be impossible for interpersonal communication to ever adequately capture one’s true feelings. This may well be right, but if so, it would seem to be a problem for automated and non-automated communication alike.

References

  • Burgos, D., Van Nimwegen, C., Van Oostendorp, H., & Koper, R. (2007). Game-based learning and immediate feedback: The case study of the Planning Educational Task. International Journal of Advanced Technology in Learning. Available at http://hdl.handle.net/1820/945 (accessed 29/11/2016).

  • Burrell, J. (2016). How the machine thinks: Understanding opacity in machine learning systems. Big Data and Society. https://doi.org/10.1177/2053951715622512.

  • Carr, N. (2014). The glass cage: Where automation is taking us. London: The Bodley Head.

  • Crawford, M. (2015). The world beyond your head. New York: Farrar, Strauss and Giroux.

  • Danaher, J. (2016a). The threat of algocracy: Reality, resistance and accommodation. Philosophy and Technology, 29(3), 245–268.

  • Danaher, J. (2016b). Why internal moral enhancement might be politically better than external moral enhancement. Neuroethics. https://doi.org/10.1007/s12152-016-9273-8

  • Dworkin, G. (1988). The theory and practice of autonomy. Cambridge: CUP.

  • Frankfurt, H. (1971). Freedom of the will and the concept of a person. Journal of Philosophy, 68, 5–20.

  • Frischmann, B. (2014). Human-focused Turing tests: A framework for judging nudging and the techno-social engineering of humans. Cardozo Legal Studies Research Paper No. 441. Available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2499760 (accessed 29/11/2016).

  • Giubilini, A., & Savulescu, J. (2018). The Artificial Moral Advisor. The 'Ideal Observer' meets Artificial Intelligence. Philosophy and Technology, 31(2), 169–188.

  • Hare, S., & Vincent, N. (2016). Happiness, cerebroscopes and incorrigibility: Prospects for Neuroeudaimonia. Neuroethics, 9(1), 69–84.

  • Heersmink, R. (2015). Extended mind and cognitive enhancement: Moral aspects of extended cognition. Phenomenology and the Cognitive Sciences. https://doi.org/10.1007/s11097-015-9448-5.

  • Heersmink, R. (2013). A taxonomy of cognitive artifacts: Function, information and categories. Review of Philosophy and Psychology, 4(3), 465–481.

  • Kelly, S., & Dreyfus, H. (2011). All things shining. New York: Free Press.

  • Kirsh, D. (2010). Thinking with external representations. AI and Society, 25, 441–454.

  • Kirsh, D. (1995). The intelligent use of space. Artificial Intelligence, 73, 31–68.

  • Krakauer, D. (2016). Will AI harm us? Better to ask how we’ll reckon with our hybrid nature. Nautilus, 6 September 2016. Available at http://nautil.us/blog/will-ai-harm-us-better-to-ask-how-well-reckon-with-our-hybrid-nature (accessed 29/11/2016).

  • Luper, S. (2014). Life’s meaning. In S. Luper (Ed.), The Cambridge Companion to Life and Death. Cambridge: Cambridge University Press.

  • Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society. https://doi.org/10.1177/2053951716679679.

  • Morozov, E. (2013). The real privacy problem. MIT Technology Review. Available at http://www.technologyreview.com/featuredstory/520426/the-real-privacy-problem/ (accessed 29/11/16).

  • Mullainathan, S., & Shafir, E. (2014). Freeing up intelligence. Scientific American Mind, Jan/Feb, 58–63.

  • Mullainathan, S., & Shafir, E. (2012). Scarcity: The true cost of not having enough. London: Penguin.

  • Nagel, S. (2010). Too much of a good thing? Enhancement and the burden of self-determination. Neuroethics, 3, 109–119.

  • Nass, C., & Flatow, I. (2013). The myth of multitasking. NPR: Talk of the Nation, 10 May 2013. Available at http://www.npr.org/2013/05/10/182861382/the-myth-of-multitasking (accessed 29/11/2016).

  • van Nimwegen, C., Burgos, D., Oostendorp, H., & Schijf, H. (2006). The paradox of the assisted user: Guidance can be counterproductive. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 917–926.

  • Newport, C. (2016). Deep Work. New York: Grand Central Publishing.

  • Norman, D. (1991). Cognitive artifacts. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface. Cambridge: Cambridge University Press.

  • Ophir, E., Nass, C., & Wagner, A. (2009). Cognitive control in media multitaskers. PNAS, 106(37), 15583–15587.

  • Pinker, S. (2010). The cognitive niche: Coevolution of intelligence, sociality, and language. PNAS, 107(Suppl 2), 8993–8999.

  • Plato. The Phaedrus. From Plato in Twelve Volumes, Vol. 9, translated by Harold N. Fowler. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1925. Available at http://www.english.illinois.edu/-people-/faculty/debaron/482/482readings/phaedrus.html (accessed 29/11/2016).

  • Raz, J. (1986). The morality of freedom. Oxford: OUP.

  • Russell, S., & Norvig, P. (2016). Artificial intelligence: A modern approach (3rd global ed.). Essex: Pearson.

  • Sandel, M. (2012). What money can’t buy: The moral limits of markets. London: Penguin.

  • Scheibehenne, B., Greifeneder, R., & Todd, P. M. (2010). Can there ever be too many options? A meta-analytic review of choice overload. Journal of Consumer Research, 37, 409–425.

  • Scherer, M. (2016). Regulating artificial intelligence systems: Challenges, competencies and strategies. Harvard Journal of Law and Technology, 29(2), 354–400.

  • Schwartz, B. (2004). The paradox of choice: Why less is more. New York, NY: Harper Collins.

  • Selinger, E., & Frischmann, B. (2016). The dangers of smart communication technology. The Arc Mag, 13 September 2016. Available at https://thearcmag.com/the-danger-of-smart-communication-technology-c5d7d9dd0f3e#.3yuhicpw8 (accessed 29/11/2016).

  • Selinger, E. (2014a). Today’s apps are turning us into sociopaths. WIRED, 26 February 2014. Available at https://www.wired.com/2014/02/outsourcing-humanity-apps/ (accessed 29/11/2016).

  • Selinger, E. (2014b). Don’t outsource your dating life. CNN: Edition, 2 May 2014. Available at http://edition.cnn.com/2014/05/01/opinion/selinger-outsourcing-activities/index.html (accessed 29/11/2016).

  • Selinger, E. (2014c). Outsourcing your mind and intelligence to computer/phone apps. Institute for Ethics and Emerging Technologies, 8 April 2014. Available at http://ieet.org/index.php/IEET/more/selinger20140408 (accessed 29/11/2014).

  • Shah, A. K., Mullainathan, S., & Shafir, E. (2012). Some consequences of having too little. Science, 338, 682–685.

  • Slamecka, N., & Graf, P. (1978). The generation effect: The delineation of a phenomenon. Journal of Experimental Psychology: Human Learning and Memory., 4(6), 592–604.

  • Smuts, A. (2013). The good cause account of the meaning of life. The Southern Journal of Philosophy, 51(4), 536–562.

  • Sunstein, C. (2016). The ethics of influence. Cambridge, UK: Cambridge University Press.

  • Sunstein, C. (2017). #Republic: Divided democracy in an age of social media. Princeton, NJ: Princeton University Press.

  • Thaler, R., & Sunstein, C. (2009). Nudge: Improving decisions about health, wealth and happiness. London: Penguin.

  • Wertheimer, A. (1987). Coercion. Princeton, NJ: Princeton University Press.

  • Whitehead, A. N. (1911). An introduction to mathematics. London: Williams and Norgate.

  • Wu, T. (2017). The Attention Merchants. New York: Atlantica.

  • Yeung, K. (2017). ‘Hypernudge’: Big data as a mode of regulation by design. Information, Communication and Society, 20(1), 118–136.

Author information

Corresponding author

Correspondence to John Danaher.

Cite this article

Danaher, J. Toward an Ethics of AI Assistants: an Initial Framework. Philos. Technol. 31, 629–653 (2018). https://doi.org/10.1007/s13347-018-0317-3
