Why we should have seen that coming: comments on Microsoft's Tay "experiment," and wider implications

Abstract
In this paper we examine the case of Tay, the Microsoft AI chatbot that was launched in March 2016. After less than 24 hours, Microsoft shut down the experiment because the chatbot was generating tweets judged inappropriate for their racist, sexist, and anti-Semitic language. We contend that the case of Tay illustrates a problem inherent in learning software (LS), a term we use for any software that changes its program in response to its interactions, when that software engages directly with the public, and a corresponding problem with the developer's role and responsibility. We make the case that when LS interacts directly with people, or indirectly via social media, the developer has ethical responsibilities beyond those attached to standard software. There is an additional burden of care.
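To make the LS definition concrete, the sketch below shows a toy chatbot whose behavior changes in response to its interactions; it is purely illustrative and hypothetical, not Tay's actual design, but it captures the failure mode the abstract describes: once public input feeds back into the program's behavior, hostile input reshapes future output.

```python
import random

# Minimal sketch of "learning software" (LS): a toy chatbot that
# absorbs user messages into its own response pool. Hypothetical
# names throughout; for illustration only.
class LearningChatbot:
    def __init__(self, seed_replies):
        # The bot starts from developer-curated replies...
        self.replies = list(seed_replies)

    def respond(self, user_message: str) -> str:
        # ...but every interaction is folded back into its behavior,
        # so the program effectively rewrites itself as it runs.
        self.replies.append(user_message)
        return random.choice(self.replies)

bot = LearningChatbot(["Hello!", "Tell me more."])
bot.respond("humans are great")    # benign input shifts future output
bot.respond("<offensive text>")    # hostile input does too: Tay's failure mode
print(bot.respond("hi"))           # may now echo absorbed user content
```

Because the developer cannot enumerate in advance what such a system will learn from the public, the sketch suggests why the paper argues for an added burden of care rather than treating LS like fixed-behavior software.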