Published in: Ethics and Information Technology 1/2018

04-10-2017 | Original Paper

Human-aligned artificial intelligence is a multiobjective problem

Authors: Peter Vamplew, Richard Dazeley, Cameron Foale, Sally Firmin, Jane Mummery


Abstract

As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action–selection can overcome many of the issues which limit MEU’s effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.
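
To make the contrast concrete, the following is a minimal Python sketch, not the authors' implementation: the three candidate actions, the two objectives and all utility values are invented for illustration. It contrasts scalar MEU (a fixed linear combination of objectives) with a multiobjective variant that applies a possibly non-linear function f to the whole utility vector.

    import numpy as np

    # Hypothetical expected utilities of three candidate actions on two
    # objectives: [task performance, safety/ethical compliance].
    expected_utility = {
        "aggressive": np.array([0.9, 0.3]),
        "cautious":   np.array([0.6, 0.7]),
        "idle":       np.array([0.1, 0.9]),
    }

    def meu_action(weights):
        # Scalar MEU: maximise a fixed linear combination of the objectives.
        return max(expected_utility, key=lambda a: weights @ expected_utility[a])

    def momeu_action(f):
        # Multiobjective MEU: maximise a (possibly non-linear) function f
        # applied to the whole utility vector.
        return max(expected_utility, key=lambda a: f(expected_utility[a]))

    # A linear f reduces to ordinary MEU; a non-linear f such as maximin
    # refuses to trade the safety objective away entirely for performance.
    print(meu_action(np.array([0.7, 0.3])))  # 'aggressive' under these weights
    print(momeu_action(lambda u: u.min()))   # 'cautious' under maximin

The maximin rule here is only one example of a non-linear selection function; the point of the vector representation is that it leaves this choice open rather than baking a linear trade-off into the utility signal.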


Footnotes
1. For the purposes of this paper we will ignore the vital issue of who bears legal responsibility for the actions of an AI agent. For a broader discussion of the legal issues around AI see Leenes and Lucivero (2014) and the review of the literature in Section 10 of Mittelstadt et al. (2016).
 
2. A similar approach can also be applied in the context of utility functions which depend on both state and action, as in Eq. 2.
 
3. This problem would not arise if the Pareto front shown in Fig. 1 were convex rather than concave. However, many problems will naturally result in concave fronts, so it is important that an ethical AI can deal with such problems.
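
As an illustration (the three points below are invented, not taken from Fig. 1), a short Python sketch shows that sweeping every linear weighting never selects a compromise point lying in a concave region of the front, whereas a non-linear Chebyshev scalarisation does:

    import numpy as np

    points = {
        "A": np.array([1.0, 0.0]),  # extreme on objective 1
        "B": np.array([0.4, 0.4]),  # compromise in the concave region
        "C": np.array([0.0, 1.0]),  # extreme on objective 2
    }

    def best_linear(w):
        return max(points, key=lambda p: w @ points[p])

    def best_chebyshev(w, ref=np.array([1.0, 1.0])):
        # Minimise the weighted Chebyshev distance to an ideal point.
        return min(points, key=lambda p: np.max(w * (ref - points[p])))

    # B scores 0.4 under any weighting w, but max(w, 1 - w) >= 0.5, so one
    # of the extremes always beats it: no linear weighting ever returns B.
    chosen = {best_linear(np.array([w, 1.0 - w])) for w in np.linspace(0, 1, 101)}
    print(chosen)                                # {'A', 'C'}
    print(best_chebyshev(np.array([0.5, 0.5])))  # 'B'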
 
4. Note that, depending on the structure of the utility functions, if f is non-linear then Eq. 7 may fail to result in the desired behaviour unless the state vector S also incorporates information about the utility history (Roijers et al. 2013).
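
A minimal sketch of the remedy suggested there, with invented names (our illustration of the idea from Roijers et al. 2013, not code from the paper): the agent's state is extended with the utility accrued so far, so that a non-linear f can be evaluated over accrued plus expected future utility.

    import numpy as np

    class HistoryAugmentedState:
        # Pairs the environment state with the utility vector accrued so far,
        # so that policies conditioned on this state remain Markov even when
        # the action-selection function f is non-linear.
        def __init__(self, env_state, n_objectives):
            self.env_state = env_state
            self.accrued = np.zeros(n_objectives)

        def step(self, next_env_state, utility_vector):
            self.env_state = next_env_state
            self.accrued = self.accrued + utility_vector

    def action_value(state, expected_future_utility, f):
        # f is applied to accrued + future utility, not to future utility alone.
        return f(state.accrued + expected_future_utility)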
 
5. We assume here for simplicity that all \(U_P\) terms have the same range.
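
Where that assumption does not hold, each term can first be rescaled to a shared interval. A small sketch with invented bounds (not values from the paper):

    import numpy as np

    def normalise(u, lo, hi):
        # Rescale a raw utility from [lo, hi] to the common range [0, 1].
        return (u - lo) / (hi - lo)

    raw = np.array([150.0, 0.7])         # e.g. a monetary payoff and a probability
    bounds = [(0.0, 200.0), (0.0, 1.0)]  # assumed per-objective ranges
    u = np.array([normalise(v, lo, hi) for v, (lo, hi) in zip(raw, bounds)])
    print(u)                             # [0.75, 0.7]: now directly comparable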
 
6. Although in this context it is often referred to as multiattribute utility.
 
Literature
Abel, D., MacGlashan, J., & Littman, M. L. (2016). Reinforcement learning as a framework for ethical decision making. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence. Phoenix.
Allen, C., & Wallach, W. (2012). Moral machines: Contradiction in terms or abdication of human responsibility. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 55–68). Boston: MIT Press.
Altmann, J. (2013). Arms control for armed uninhabited vehicles: An ethical issue. Ethics and Information Technology, 15(2), 137–152.
Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete problems in AI safety. arXiv preprint arXiv:1606.06565.
Anderson, M., & Anderson, S. L. (2007). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15.
Anderson, M., Anderson, S. L., & Armen, C. (2006a). An approach to computing ethics. IEEE Intelligent Systems, 21(4), 56–63.
Anderson, M., Anderson, S. L., & Armen, C. (2006b). MedEthEx: A prototype medical ethics advisor. In Proceedings of the National Conference on Artificial Intelligence (Vol. 21, p. 1759).
Andrighetto, G., Governatori, G., Noriega, P., & van der Torre, L. W. (2013). Normative multi-agent systems (Vol. 4). Wadern, Germany: Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
Angus, D., & Woodward, C. (2009). Multiple objective ant colony optimisation. Swarm Intelligence, 3(1), 69–85.
Arkin, R. C. (2008). Governing lethal behavior: Embedding ethics in a hybrid deliberative/reactive robot architecture Part I: Motivation and philosophy. In 2008 3rd ACM/IEEE International Conference on Human–Robot Interaction (pp. 121–128).
Armstrong, S., Sandberg, A., & Bostrom, N. (2012). Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 22(4), 299–324.
Asaro, P. M. (2012). A body to kick, but still no soul to damn: Legal perspectives on robotics. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 169–186). Cambridge: MIT Press.
Bentham, J. (1789). The principles of morals and legislation. Oxford: Oxford University Press.
Blythe, J. (1999). Decision-theoretic planning. AI Magazine, 20(2), 37.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Broersen, J., Dastani, M., Hulstijn, J., & van der Torre, L. (2002). Goal generation in the BOID architecture. Cognitive Science Quarterly, 2(3–4), 428–447.
Brundage, M. (2014). Limitations and risks of machine ethics. Journal of Experimental & Theoretical Artificial Intelligence, 26(3), 355–372.
Castelfranchi, C., Dignum, F., Jonker, C. M., & Treur, J. (1999). Deliberative normative agents: Principles and architecture. In International Workshop on Agent Theories, Architectures, and Languages (pp. 364–378). New York: Springer.
Coello Coello, C. (2006). Evolutionary multi-objective optimization: A historical view of the field. IEEE Computational Intelligence Magazine, 1(1), 28–36.
Critch, A. (2017). Toward negotiable reinforcement learning: Shifting priorities in Pareto optimal sequential decision-making. arXiv preprint arXiv:1701.01302.
Cushman, F. (2013). Action, outcome, and value: A dual-system framework for morality. Personality and Social Psychology Review, 17(3), 273–292.
Danielson, P. (2009). Can robots have a conscience? Nature, 457(7229), 540.
Das, I., & Dennis, J. E. (1997). A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems. Structural Optimization, 14(1), 63–69.
Dewey, D. (2011). Learning what to value. In International Conference on Artificial General Intelligence (pp. 309–314). New York: Springer.
Dewey, D. (2014). Reinforcement learning and the reward engineering principle. In 2014 AAAI Spring Symposium Series.
Dignum, F. (1996). Autonomous agents and social norms. In ICMAS-96 Workshop on Norms, Obligations and Conventions (pp. 56–71).
Dubois, D., Fargier, H., & Prade, H. (1997). Beyond min aggregation in multicriteria decision: (Ordered) weighted min, discri-min, leximin. In The ordered weighted averaging operators (pp. 181–192). New York: Springer.
Eckhardt, D. E., Caglayan, A. K., Knight, J. C., Lee, L. D., McAllister, D. F., Vouk, M. A., et al. (1991). An experimental evaluation of software redundancy as a strategy for improving reliability. IEEE Transactions on Software Engineering, 17(7), 692–702.
Etzioni, A., & Etzioni, O. (2016). Designing AI systems that obey our laws and values. Communications of the ACM, 59(9), 29–31.
Ferrucci, D. A. (2012). Introduction to “This is Watson”. IBM Journal of Research and Development, 56(3.4), 1–1.
Fieldsend, J. E. (2004). Multi-objective particle swarm optimisation methods. Technical Report No. 419, Department of Computer Science, University of Exeter.
Goodall, N. (2014). Ethical decision making during automated vehicle crashes. Transportation Research Record: Journal of the Transportation Research Board, 2424, 58–65.
Guarini, M. (2006). Particularism and the classification and reclassification of moral cases. IEEE Intelligent Systems, 21(4), 22–28.
Kant, I. (1993). Grounding for the metaphysics of morals (1797). Indianapolis: Hackett.
Leenes, R., & Lucivero, F. (2014). Laws on robots, laws by robots, laws in robots: Regulating robot behaviour by design. Law, Innovation and Technology, 6(2), 193–220.
Lenat, D. B. (1983). Eurisko: A program that learns new heuristics and domain concepts: The nature of heuristics III: Program design and results. Artificial Intelligence, 21(1–2), 61–98.
Littman, M. L. (2015). Reinforcement learning improves behaviour from evaluative feedback. Nature, 521(7553), 445–451.
Livingston, S., Garvey, J., & Elhanany, I. (2008). On the broad implications of reinforcement learning based AGI. In Artificial General Intelligence, 2008: Proceedings of the First AGI Conference (Vol. 171, p. 478). Amsterdam: IOS Press.
Lozano-Perez, T., Cox, I. J., & Wilfong, G. T. (2012). Autonomous robot vehicles. New York: Springer.
Meisner, E. M. (2009). Learning controllers for human–robot interaction. PhD thesis, Rensselaer Polytechnic Institute.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.
Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529–533.
Murphy VII, T. (2013). The first level of Super Mario Bros. is easy with lexicographic orderings and time travel. The Association for Computational Heresy (SIGBOVIK).
Omohundro, S. M. (2008). The basic AI drives. In AGI (Vol. 171, pp. 483–492).
Petraeus, D. H., & Amos, J. F. (2006). FM 3-24: Counterinsurgency. Department of the Army.
Prakken, H. (2016). On how AI & law can help autonomous systems obey the law: A position paper. AI4J–Artificial Intelligence for Justice, 42, 42–46.
Rawls, J. (1971). A theory of justice. Cambridge: Harvard University Press.
Reynolds, G. (2011). Ethics in information technology. Boston: Cengage Learning.
Riedl, M. O., & Harrison, B. (2016). Using stories to teach human values to artificial agents. In Proceedings of the 2nd International Workshop on AI, Ethics and Society. Phoenix, AZ.
Roijers, D. M., Vamplew, P., Whiteson, S., & Dazeley, R. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48, 67–113.
Romei, A., & Ruggieri, S. (2014). A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29(5), 582–638.
Ross, W. D. (1930). The right and the good. Oxford: Clarendon Press.
Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Upper Saddle River: Prentice Hall.
Sharkey, N. (2009). Death strikes from the sky: The calculus of proportionality. IEEE Technology and Society Magazine, 28(1), 16–19.
Sharkey, N. (2012). Killing made easy: From joysticks to politics. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 111–128). Cambridge: MIT Press.
Sharkey, N., & Sharkey, A. (2012). The rights and wrongs of robot care. In P. Lin, K. Abney, & G. A. Bekey (Eds.), Robot ethics: The ethical and social implications of robotics (pp. 267–282). Cambridge: MIT Press.
Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587), 484–489.
Soares, N., & Fallenstein, B. (2014). Aligning superintelligence with human interests: A technical research agenda. Machine Intelligence Research Institute (MIRI) Technical Report 8.
Soares, N., Fallenstein, B., Armstrong, S., & Yudkowsky, E. (2015). Corrigibility. In Workshops at the Twenty-Ninth AAAI Conference on Artificial Intelligence.
Soh, H., & Demiris, Y. (2011). Evolving policies for multi-reward partially observable Markov decision processes (MR-POMDPs). In Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (pp. 713–720). ACM.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge: MIT Press.
Tavani, H. T. (2011). Ethics and technology: Controversies, questions, and strategies for ethical computing. Hoboken: Wiley.
Taylor, J. (2016). Quantilizers: A safer alternative to maximizers for limited optimization. In AAAI AI, Ethics & Society Workshop.
Taylor, J., Yudkowsky, E., LaVictoire, P., & Critch, A. (2016). Alignment for advanced machine learning systems. Technical Report 20161, MIRI.
The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. (2016). Ethically aligned design: A vision for prioritizing wellbeing with artificial intelligence and autonomous systems.
Vamplew, P., Yearwood, J., Dazeley, R., & Berry, A. (2008). On the limitations of scalarisation for multi-objective reinforcement learning of Pareto fronts. In AI’08: The 21st Australasian Joint Conference on Artificial Intelligence (pp. 372–378).
Vamplew, P. (2004). Lego Mindstorms robots as a platform for teaching reinforcement learning. In Proceedings of AISAT2004: International Conference on Artificial Intelligence in Science and Technology.
Van Moffaert, K., Brys, T., Chandra, A., Esterle, L., Lewis, P. R., & Nowé, A. (2014). A novel adaptive weight selection algorithm for multi-objective multi-agent reinforcement learning. In 2014 International Joint Conference on Neural Networks (IJCNN) (pp. 2306–2314).
Van Riemsdijk, M. B., Jonker, C. M., & Lesser, V. (2015). Creating socially adaptive electronic partners: Interaction, reasoning and ethical challenges. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems (pp. 1201–1206).
van Wynsberghe, A. (2016). Service robots, care ethics, and design. Ethics and Information Technology, 18, 311–321.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.
Wellman, M. P. (1985). Reasoning about preference models. Technical Report 340, Cambridge, MA: MIT Laboratory for Computer Science.
Yampolskiy, R. V., & Spellchecker, M. (2016). Artificial intelligence safety and cybersecurity: A timeline of AI failures. arXiv preprint arXiv:1610.07997.
Metadata
Title
Human-aligned artificial intelligence is a multiobjective problem
Authors
Peter Vamplew
Richard Dazeley
Cameron Foale
Sally Firmin
Jane Mummery
Publication date
04-10-2017
Publisher
Springer Netherlands
Published in
Ethics and Information Technology / Issue 1/2018
Print ISSN: 1388-1957
Electronic ISSN: 1572-8439
DOI
https://doi.org/10.1007/s10676-017-9440-6
