3 What’s Next?
-
AI, CIOs, and Firm Strategy The work of Jöhnk et al. (2021) points to a need to examine strategic issues tied to AI’s deployment in organizations. While we know how CIOs manage strategic and maintenance issues concerning technology, we know little about how they marshal the resources necessary to develop and sustain technologies that can fundamentally change how firms make decisions and deliver services. Key lingering questions include: How do CIOs align AI deployment with firm strategy? Will CIOs become more prominent in firm governance as they acquire responsibility for supporting strategic AI decisions? What role will boards of directors play in shaping AI strategy? AI readiness constitutes a first step towards understanding these strategic issues. Because AI represents a fundamental change in how technology impacts business, however, we may need to revisit our present understanding of how CIOs shape IT strategy and ask whether new forms of governance will be required to deploy and maintain AI in organizations effectively.
-
AI, Identity, and Sociotechnical Systems Mirbabaie et al. (2021) build on the growing stream of work on individuals and IT identity and point to a need to progress from considering the individual to considering the identity of groups and of the technology itself. While we know individuals possess IT identity, will AI learn to support the quirks and habits of teams? Will it enable the formation of unique identities relative to the systems that support them? If so, how will that change the interaction not only of individuals within groups but also of groups vis-à-vis the broader sociotechnical context in which they work? Further, as AI grows more sophisticated, autonomous, and capable of double-loop learning, how will the “identity” manifested in the algorithms shape how we interact with them and integrate them into firm processes? This work on virtual assistants constitutes only a first step towards a broader understanding of the implications of using AI to support teams and to integrate them with broader sociotechnical systems.
-
AI, Bias, and Data Köchling et al. (2021) underscore that AI is prone to the biases imbued by the rules and data provided by its human designers. While their findings illustrate how biases in algorithms and data can adversely impact different groups of people, the implications reach further, into how we design and understand AI. This work suggests a need for thoughtful, introspective research that examines how to collect data that accurately depicts the “ground truth” of organizational and social life as we design, but also maintain, algorithms targeted at supporting fairness and equity in organizations and society. How can we know data represents the full set of factors relevant to desired outcomes? How can we ensure people trust the AI, and do so sufficiently to continue sharing information with it? What role will privacy, and security, play in determining the extent to which people are willing to share the data needed to train AI and sustain it? How can we train AI to detect biases, or adverse impacts on the people it serves?
-
AI, the Uncanny Valley, and the Singularity Berger et al. (2021) demonstrate that people trust AI that demonstrates a capacity to improve. While much has been made of anthropomorphism and the uncanny valley, there remains a lot to learn about how to design the manner in which we interact with AI, particularly AI that will soon be able to emulate humans. Their work suggests a need for careful investigation not only of how we present the AI (e.g., the interface) but also of how we educate users about the algorithms that drive the AI, the relationship between the data fed into the AI and its outcomes, and users’ affective responses to the support offered by the AI. How will users respond to increasingly human-like systems? Will a greater understanding of algorithms result in more trust and information sharing? Or will it undercut beliefs about security and exacerbate fears about privacy? Should designers avoid the uncanny valley or embrace the “singularity”? And should they design systems to be partners with users?
-
AI and Ethics Collectively, these papers suggest a need to actively question the implications of how we build algorithms, gather data to train them, and apply AI to solve problems in organizations and society. These papers hint at the necessity of investigating ethical questions, such as: How does the design of AI change the decisions that we make? Do we fully inform users of the scope of AI application in the sociotechnical systems they live in? Or do we apply AI like electricity, as a utility, without comment?

In our minds, the papers in this special issue evoke questions about user data, its sources, and its application. To what extent are designers obliged to explain the implications of users’ training data contributions, the potential applications of their data to solving problems, or its use in enabling decisions in AI-enabled information systems? And given that applications of data will certainly evolve, what ethical obligations do organizations and designers have to update contributors about new uses of the data they shared? These questions about data are particularly salient as the GDPR becomes infused into the intellectual framework of our society, while remaining absent from the culture of other societies.

Since we first crafted the call for contributions to the special issue, and while reflecting on these papers, we found ourselves asking even bigger questions about ethics and AI that merit attention in future work in BISE and other rigorous academic outlets: What limits should we place on AI in society? Just because we can design systems to make decisions, should we do so? What kinds of problems should unsupervised AI be applied to? What kinds of decisions should remain at least semi-supervised by humans? As we allocate decisions between humans and AI, what role should ethics play in the allocation of responsibilities between people and machines? How can we ensure that AI is applied in a way that results in Pareto-efficient outcomes for people and society?