
The effects of mixing machine learning and human judgment

Published: 24 October 2019

Abstract

Collaboration between humans and machines does not necessarily lead to better outcomes.



      Reviews

      Jonathan K. Millen

      Automated risk assessment systems are often used in situations that require human judgment, partly in the hope of removing human bias. Even when an automated system is more accurate than human assessors alone, combining system and human decisions has, for some applications and collaboration modes, produced better results than either. The experiment presented here involves criminal recidivism assessments using a well-known algorithmic system, COMPAS. Human subjects were recruited according to their interest, rather than their expertise, in criminal justice. As expected, COMPAS by itself was more accurate than the humans, given the same data from real court cases. The pertinent question, however, is whether, and in what way, the human results are influenced by seeing the COMPAS results before making their own assessments. In the first trial, humans were told the COMPAS recidivism risk scores, and their own scores were, on average, different and less accurate. In the second trial, the experimenters investigated an "anchoring" effect by providing COMPAS results that had been deliberately shifted higher or lower. The average human scores moved significantly in the direction of the altered COMPAS scores they were given.

      These results are not world shaking, and the article is short, less than seven pages. Yet it was as absorbing as a mystery novel, and it raised all sorts of questions that arouse the hope of further work. For example, the cited successes of teaming involve a feedback loop between the humans and the system, which was not tried here. Also, would the experiment have come out differently with expert humans on the team? There is so much more to learn.


      • Published in

        Communications of the ACM, Volume 62, Issue 11
        November 2019, 136 pages
        ISSN: 0001-0782
        EISSN: 1557-7317
        DOI: 10.1145/3368886

        Copyright © 2019 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States



        Qualifiers

        • research-article
        • Popular
        • Refereed
