Accepted after three revisions by Christine Legner.
1 Introduction
During its Dreamforce 2024 event, Salesforce presented an artificial intelligence (AI) agent that it built in partnership with high-end clothing retailer Saks Fifth Avenue. While seamlessly conversing with a customer, the AI agent helped change a sweater order to a different size. Upon being informed that the product would not arrive on time, the AI agent switched the item from delivery to same-day in-store pickup at a location that the customer agreed was convenient. Beyond the AI agent’s natural language understanding and generation, the demonstrated service interaction was made possible by real-time data access (e.g., status of orders, in-store availability) and action triggering (e.g., switching the delivery method). Similar recent examples abound, ranging from food ordering and travel booking to the creation of presentations from word-processor content (Wang et al. 2024; Zhang et al. 2024). These solutions belong to a class of agentic AI systems architected around Large Action Models (LAMs) – a specialized class of generative AI models geared toward the completion of activities.
AI progress was recently accelerated by the rise to prominence of foundation models, “large-scale AI model[s] that are pre-trained on vast amounts of general data and that can be adapted for downstream applications” (Schneider et al. 2024, p. 1). Their defining characteristic is their emergent capabilities, which make them useful in a diverse range of tasks and domains for which they were not explicitly designed a priori (Schneider et al. 2024). The most visible expression of foundation models is in generative AI (Feuerriegel et al. 2024), which gained mass visibility with the launch of ChatGPT in November 2022. By adding a text interface to an underlying Large Language Model (LLM), ChatGPT demonstrated to non-specialists the ability of generative AI to produce meaningful content in response to text prompts.
Like other generative AI models, LAMs leverage the transformer architecture (Vaswani et al. 2017) and are trained over large corpora of data spanning multiple modalities (Brohan et al. 2023; Durante et al. 2024; Wang et al. 2024; Zhang et al. 2024). Their distinctive characteristic is that they are optimized for task performance and action in the real world. LAMs can perform function calls to complete activities by interacting with existing APIs or infer and mimic human behavior on computer applications by modeling the structure of software programs and data repositories that were originally created for human end-users – as in the case of the recently released OpenAI Operator (OpenAI 2025b). When integrated as part of agentic AI systems, LAMs execute inference that powers the agent’s capability to autonomously manage complex, multi-step processes by making decisions, interacting with existing applications, and adhering to user-defined constraints.
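To make the function-calling mechanism concrete, consider the following minimal Python sketch of how a LAM’s structured output could be dispatched to an existing API. The tool schema, the switch_delivery_method function, and the order data are hypothetical illustrations, loosely modeled on common function-calling conventions rather than any specific vendor’s interface.

```python
# Minimal sketch of LAM-style function calling (illustrative only).
# The tool schema and switch_delivery_method endpoint are hypothetical.
import json

# A function the LAM is allowed to call, described as a JSON schema.
TOOLS = [{
    "name": "switch_delivery_method",
    "description": "Change fulfillment for an order (e.g., delivery -> in-store pickup).",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string"},
            "method": {"type": "string", "enum": ["delivery", "pickup"]},
            "store_id": {"type": "string"},
        },
        "required": ["order_id", "method"],
    },
}]

def switch_delivery_method(order_id: str, method: str, store_id: str | None = None) -> dict:
    """Hypothetical backend call; a real agent would hit the retailer's API here."""
    return {"order_id": order_id, "method": method, "store_id": store_id, "status": "updated"}

# The LAM's output is a structured action trigger rather than free text:
lam_output = ('{"name": "switch_delivery_method", '
              '"arguments": {"order_id": "A-123", "method": "pickup", "store_id": "NYC-5"}}')

call = json.loads(lam_output)
result = {"switch_delivery_method": switch_delivery_method}[call["name"]](**call["arguments"])
print(result)  # the result is fed back to the LAM so it can continue the task-plan
```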
The opening example shows how the Saks Fifth Avenue AI agent autonomously handled the customer interaction, drawing on the LAM’s capability to interpret and process the customer’s request to exchange a clothing item due to size-related concerns (initial task). Based on the customer’s further request for next-day delivery, the LAM determined the need to update the firm’s ordering system and issue a return request for the original item (task-plan). The plan was converted into specific action sequences capable of directly interacting with inventory and delivery APIs (task-action). Because the expected delivery time did not meet the customer’s expressed need, the LAM prompted a search for other delivery options (defining alternative task-plans). Upon collecting further information, with prior knowledge of customer location and verification of availability, the LAM switched the order to pick-up (alternative plan execution). Had a suitable fulfillment option or sweater size not been available, the LAM would have triggered a cancellation of the original order, the printing of a return label, and the initiation of a refund (fallback strategy).
Within agentic AI systems, a critical role of LAMs is integrating software applications, algorithms, and data that were not explicitly architected for orchestration (Piccoli et al. 2022) or as headless1 systems (You et al. 2025). As such, LAMs represent the latest, and most powerful, in a series of technologies devoted to programmatic orchestration. In this context, programmatic refers to the ability to access and use IT and digital resources (e.g., software applications, databases, cloud services) exclusively via software programs – without human intervention. Orchestration refers to the purposeful assembly of elements and components into a designed artifact, such as the Saks Fifth Avenue AI customer service agent, stemming from meaningful integration of the functionalities of IT or digital resources into a cohesive value proposition (Piccoli et al. 2022). Early examples of programmatic orchestration are automation frameworks based on simple trigger-action workflows, such as IFTTT or Zapier, Business Process Management Systems (BPMS) (Dumas et al. 2023), Robotic Process Automation (RPA) (Lacity and Willcocks 2021), and the MACH architecture2 underlying the so-called composable enterprise (Yefim et al. 2021). Unlike previous programmatic orchestration approaches, LAMs leverage the emergent capabilities and adaptability that characterize foundation models (Schneider et al. 2024), rather than relying on rigid, predefined workflows and hard-coded integrations.3
LAMs’ potential for impact is significant because they unlock the programmatic orchestration of IT and digital resources, as apparent in agentic AI systems. Through such orchestration, LAMs enable unprecedented reuse and recombination of resources in digital products and new business models (Henfridsson et al. 2018; Piccoli et al. 2022). At the same time, they introduce a range of unresolved challenges related to strategy and competition; have potential impact at individual, organizational, and societal levels; and carry ethical and regulatory implications. While corporate research labs (e.g., Microsoft, Salesforce) have spearheaded the development and dissemination of LAM-related insights, academic engagement with the design, use, and implications of LAMs remains nascent. In response, this study seeks to conceptualize LAMs from an academic perspective and provide early theoretical grounding to guide future information systems (IS) research on this emerging phenomenon.
In the next section, we introduce LAMs and describe their technological underpinning. We then discuss programmatic orchestration as enabled by LAMs. We conclude by identifying challenges and research opportunities that Business and Information Systems Engineering (BISE) scholars are best positioned to address.
2 Large Action Models: Characteristics and Uses
The BISE community defines LLMs as “neural networks for modeling and generating text data” (Feuerriegel et al. 2024, p. 114), thus highlighting the emphasis of these generative AI models on text processing. Like LLMs, LAMs are a class of generative AI models grounded in complex neural network architectures – most prominently the transformer framework. However, they are designed and optimized for goal-directed task execution across real-world domains rather than content generation. LAMs belong to a broader ecosystem of generative AI model classes, each tailored to distinct modalities. Notable examples include LLMs (Vaswani et al. 2017), Large Vision Models (LVMs) (Oquab et al. 2024), World Foundation Models (NVIDIA 2025b), and Audio Foundation Models (Yang et al. 2023), each of which seeks state-of-the-art performance within its respective domain – be it text, images, audio, or video. Thus, what distinguishes LAMs from their counterparts is their proficiency in transforming abstract, high-level cross-modal intentions into structured executable plans and actions that facilitate seamless interaction with external systems (Wang et al. 2024).
Generative AI models are grounded in a pre-trained foundation model (Schneider et al. 2024) and subsequently fine-tuned through post-training expressly designed to optimize them for specific tasks (Feuerriegel et al. 2024). In this phase, specific capabilities are layered onto the pre-trained foundation model (Ouyang et al. 2022; Wang et al. 2024), thus reflecting both the architectural underpinnings and the optimization priorities that guide model development. For instance, to create the InstructGPT generative AI model, OpenAI researchers post-trained the GPT-3 foundation model to behave as a “helpful, honest, and harmless” assistant (Ouyang et al. 2022, p. 2).
The development and training of LAMs involve a series of labor-intensive processes, each requiring specialized expertise in areas like data engineering, model training, and optimization. Ultimately, LAMs exhibit distinctive capabilities due to their action-focused model training and fine-tuning as well as their adaptive planning and action triggering. As such, their unique characteristics and emergent capabilities warrant a specific conceptual label that differentiates them from other categories of generative AI models (see Table 1). They also position LAMs as critical enablers of agentic AI system integration.
Table 1 Definition and explanation of key concepts

Foundation model
Definition: “Large-scale AI model[s] that are pre-trained on vast amounts of general data and that can be adapted for downstream applications” (Schneider et al. 2024, p. 1)
Explanation: A foundation model, also called a base model, is a token generator that probabilistically determines the next token given a sequence
Examples: Mistral Pixtral-12B-Base-2409; Meta Llama-3.1-70B

Generative AI model
Definition: “Generative modeling that is instantiated with a machine learning architecture (e.g., a deep neural network) and, therefore, can create new data samples based on learned patterns” (Feuerriegel et al. 2024, p. 112)
Explanation: While a foundation model is, strictly speaking, a generative AI model, we reserve the term for foundation models that have been post-trained for optimal token generation for specific tasks
Examples: OpenAI Sora; Meta AudioGen

Large language model (LLM)
Definition: “Neural networks for modeling and generating text data” (Feuerriegel et al. 2024, p. 114)
Explanation: LLMs are the first class of generative AI models to reach broad public awareness. As such, the term is sometimes used as a synonym for generative AI model. In their original connotation, LLMs focus only on text generation
Examples: Meta Llama-3.1-8B-Instruct; OpenAI GPT-4

Large action model (LAM)
Definition: A class of generative AI models designed and optimized for goal-directed task execution across complex, real-world domains
Explanation: While often multimodal in their input and interaction capabilities, LAMs are distinctively designed to perform actions and execute tasks within specific contexts or operational settings
Examples: Salesforce xLAM-8x22b-r; openvla-7b-finetuned-libero-10

Programmatic orchestration
Definition: The ability to purposefully assemble IT and digital resources exclusively via software programs into designed artifacts, providing a cohesive value proposition
Explanation: Programmatic orchestration, grounded in cloud-first development, has increasingly gained popularity with the emergence of digital resources, such as Stripe (payment) and Twilio (communication)
Examples: Instacart grocery delivery; Uber ride hailing

Agentic AI system
Definition: A system that embeds a LAM as a core inference engine alongside specialized components (e.g., retrieval tools, action executors) that enable it to perceive multi-modal inputs, generate executable task plans, interact with interfaces or APIs, and autonomously execute actions in dynamic environments
Explanation: Generative AI systems are, by definition, characterized by their embedding generative AI models (Feuerriegel et al. 2024). Agentic AI systems are further distinguished by their role in instantiating autonomous agents that can perform actions in the real world on behalf of users
Examples: Agentforce by Salesforce (powered by the firm’s family of LAMs called xLAM)
2.1 Action-Focused Model Pre-Training and Fine-Tuning
As with any generative AI model, LAMs are typically instantiated through pre-training and post-training. A design decision for model architects is whether to pre-train a specialized foundation model or focus only on the specialized post-training of a generic foundation model (Durante et al. 2024; Q. Huang et al. 2024; Wang et al. 2024). The creation of specialized foundation models requires pre-training on multimodal data that are intentionally enriched with action-relevant data sources, including domain-specific code data, structured event logs, and task-execution datasets. Thus, action-oriented token sequences are parametrized directly into the foundation model (Brohan et al. 2023; Durante et al. 2024; Zhang et al. 2024). The main advantage of this design lies in the integration of action as a core modality within the model’s training data and architecture. Alternatively, a LAM can be derived from existing foundation models (e.g., Meta Llama 3 70B) via targeted post-training interventions, including instruction tuning, reinforcement learning from human or expert feedback, and the integration of environment-specific code or operational parameters (Wang et al. 2024). In such cases, the post-training effort focuses on embedding task planning, execution, and action optimization in the LAM (Q. Huang et al. 2024).
Pre-training a foundation model with action-oriented representations (i.e., tokens) enables robust grounding, allowing the model to directly map multimodal inputs to executable plans and actions using specialized and curated action tokens (Brohan et al. 2023; Durante et al. 2024; Q. Huang et al. 2024). As a result, the model can transition from abstract reasoning to situated control without relying on intermediate modalities, such as natural language, that are not inherently optimized for the complexities of action planning and execution in digital or physical environments. Foundation models of this kind treat text, visual data, and actions within a unified framework as a core design principle, using pre-training data collected from interactive tasks like robotics or human-computer interaction (Durante et al. 2024).
The training dataset for LAMs, whether used for pre-training or post-training, typically includes vast amounts of task descriptions synthesized from appropriate sources, such as system documentation, open-source datasets, LLMs, search engines, and logs from robotic or digital systems (Durante et al. 2024; Jia et al. 2024; Wang et al. 2024). These sources provide insights into unconventional user-environment interaction patterns, alternative action sequences, error scenarios, and recovery strategies. The objective, as with data-centric artificial intelligence systems (Jakubik et al. 2024), is to produce a training dataset of coherent task-plan and task-action pairs tailored to the target environment.4
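To illustrate, the sketch below shows what a single record in such a dataset might look like, echoing the opening Saks example. The schema and field names are hypothetical; actual LAM training formats vary by target environment.

```python
# Illustrative sketch of one training record pairing a task-plan with
# grounded task-actions. All field names, API names, and IDs are
# hypothetical stand-ins for environment-specific data.
training_record = {
    "intent": "Exchange sweater order A-123 for size M",
    "task_plan": [
        "Locate order A-123 in the ordering system",
        "Check inventory for the same item in size M",
        "Issue a return request for the original item",
        "Place a replacement order in size M",
    ],
    "task_actions": [
        {"action": "api_call", "name": "get_order",       "args": {"order_id": "A-123"}},
        {"action": "api_call", "name": "check_inventory", "args": {"sku": "SW-991", "size": "M"}},
        {"action": "api_call", "name": "create_return",   "args": {"order_id": "A-123"}},
        {"action": "api_call", "name": "create_order",    "args": {"sku": "SW-991", "size": "M"}},
    ],
    "outcome": "success",  # execution feedback usable for later fine-tuning
}
```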
Like other generative AI models, LAMs working in dynamic environments with frequent, unstructured changes face challenges that require fine-tuning to maximize performance. Continuous refinement using approaches like incremental learning, meta-learning (Hutter et al. 2019), and hybrid systems combining automated and manual adjustments helps improve model operations (OpenAI 2025a). For instance, LAMs may employ imitation learning to replicate sample user interactions within the environment while incorporating performance feedback. By analyzing successful and failed executions of specific task-plans, LAMs evolve and optimize their tactics, reinforcing their ability to adapt, complete new tasks, and handle errors. This iterative process, known as self-boosting, enhances the accuracy, speed, and adaptability of LAMs for specific action-related tasks (OpenAI 2025a; Schmied et al. 2024).
2.2 Adaptive Planning and Action Triggering
LAMs are optimized to translate intents into task-plan and task-action outputs. When a LAM receives an intent, it generates a high-level plan for task completion, breaking it into sub-tasks or steps if necessary. These steps are structured logically to achieve the desired outcome and are grounded in the context, or action space, where the task will be executed. Each sub-task from the high-level plan is converted into specific, actionable instructions (i.e., action triggers). Instructions are typically in the form of executable code and can include API calls, GUI interactions, or robotic commands.
The result of the translation process is a formal representation of high-level task-plans as low-level task-actions using a universal standard for information exchange, such as JSON (Ma et al. 2024; Wang et al. 2024). For instance, a task-plan for a website might be converted into an automation script that triggers specific UI interactions (OpenAI 2025a; You et al. 2025), such as clicking, scrolling, or selecting options, alongside function calls to handle backend operations. For actions to be executed in a physical space (e.g., humanoid robots), the LAM uses dedicated representations, such as hardware description languages (HDLs), to generate task-action sequences. HDLs bridge the gap between abstract task-planning and precise hardware control, enabling the execution of actions in the physical world (NVIDIA 2025a; Shah et al. 2023). In domain-specific applications like digital gaming, actions are often encoded using environment-specific tokens that map directly to executable commands, such as initiating an attack or activating a defense mechanism in a videogame (Durante et al. 2024).
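A hypothetical illustration of this translation step is sketched below: a single sub-task is rendered as a JSON-encoded sequence that mixes GUI action triggers with a backend function call. All selectors, names, and values are invented for the example.

```python
# Sketch of a LAM-generated, JSON-encoded task-action sequence for a
# website. The action format and all targets are hypothetical.
import json

task_action_sequence = {
    "sub_task": "Switch order A-123 to in-store pickup",
    "actions": [
        # GUI interactions on the web front end
        {"type": "gui", "op": "click",  "target": "button#delivery-options"},
        {"type": "gui", "op": "select", "target": "select#fulfillment", "value": "pickup"},
        # A function call handling the backend operation
        {"type": "function_call", "name": "set_pickup_store",
         "arguments": {"order_id": "A-123", "store_id": "NYC-5"}},
    ],
}
print(json.dumps(task_action_sequence, indent=2))
```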
Given their distinctive architectural features and action-orientation, LAMs are increasingly recognized as a separate and emerging category within the broader landscape of generative AI models (Wang et al. 2024). While fine-tuned LLMs and Large Multimodal Models (LMMs) can execute goal-directed tasks, their practical deployment is constrained by inherent limitations associated with their native modalities as well as by significant computational demands and operational costs (Wang et al. 2024). Empirical evidence underscores these challenges, showing that although LMMs can demonstrate high task completion rates, they frequently encounter performance bottlenecks that lead to inefficiencies in execution (Wang et al. 2024). For text-only LLMs, empirical evidence shows that they tend to underperform in task-completion rates relative to specialized LAMs in task-oriented scenarios (Wang et al. 2024; Zhang et al. 2024). Such limitations become especially salient in resource-constrained environments – such as personal laptops, edge computing platforms, robotics systems, Internet of Things (IoT) devices, and smartphones – where computational efficiency and real-time responsiveness are critical. This growing recognition of the architectural and operational advantages of LAMs has motivated substantial investment from leading industry actors, including Microsoft, Salesforce, NVIDIA, and SAP.
LAMs are typically designed to process multimodal inputs – encompassing textual, visual, and environmental sensor data – thereby supporting a broader range of applications in complex environments. In response to this increasing multimodality, scholars have begun to adopt more specific terminology to reflect the integrated nature of these systems, referring to them, for instance, as Vision-Language-Action Models (Kim et al. 2024). Despite such terminological modifications, the central objective of action models remains that of enabling coherent, goal-directed behavior by generating structured, executable plans and actions that facilitate seamless interaction with external systems and environments.
2.3 Agentic AI Systems Integration
Once trained and fine-tuned, the practical utility of LAMs is realized through their integration into agentic AI systems that collect data, reason, plan, act, and adapt. Agents, the specific instantiations of the agentic AI system, incorporate multiple components (Fig. 1), with the LAM acting as the core inference engine for understanding intent, generating task plans, and orchestrating action triggering through function calling or GUI interactions. An agentic AI system includes specialized components to collect input data (e.g., sensors, databases) for the LAM. For instance, agents are often equipped with multi-modal attention mechanisms to process and integrate information expressed in diverse modalities (e.g., text, images, audio, video). Agents also include interfaces that collect actionable information about the environment – such as UI element names, API functions, and expected arguments – and pass this information to the LAM for action sequence generation. Retrieval-Augmented Generation (RAG) is an example technique for accessing proprietary data sources. Downstream task orchestration by the LAM relies on an action executor (Fig. 1), which operationalizes the action plans produced by the LAM (e.g., process a refund). The action executor enables LAMs to interface effectively with their action space (OpenAI 2025b). In its absence, the actions generated by LAMs would remain latent (i.e., triggers without execution), rendering the agentic AI system incapable of interacting with the target environment. As an example, a browser automation tool like Selenium WebDriver is instrumental in LAMs’ programmatic orchestration of web applications (García 2022). The agentic system also incorporates temporal memory to maintain state data critical for accurate plan execution by the LAM.
Fig. 1 Agentic AI system based on a LAM: architectural diagram
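To illustrate the role of the action executor (Fig. 1), the following sketch uses the Selenium WebDriver Python bindings (García 2022) to operationalize GUI action triggers of the kind shown earlier. The page URL, selectors, and action format are hypothetical; this is a minimal sketch, not a production executor.

```python
# Minimal sketch of an action executor that operationalizes
# LAM-generated GUI triggers with Selenium WebDriver.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

def execute(driver: webdriver.Chrome, action: dict) -> None:
    """Dispatch one LAM-generated GUI action; without such an executor,
    the model's triggers would remain latent."""
    element = driver.find_element(By.CSS_SELECTOR, action["target"])
    if action["op"] == "click":
        element.click()
    elif action["op"] == "select":
        Select(element).select_by_value(action["value"])
    else:
        raise ValueError(f"Unsupported op: {action['op']}")

driver = webdriver.Chrome()
driver.get("https://example.com/orders/A-123")  # placeholder URL
for action in [
    {"op": "click",  "target": "button#delivery-options"},
    {"op": "select", "target": "select#fulfillment", "value": "pickup"},
]:
    execute(driver, action)
driver.quit()
```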
Operating in dynamic real-world environments, LAMs must handle frequent, unstructured changes in IT and digital resources, thus needing to adapt to evolving interfaces. Adaptation requires continuous model refinement, leveraging techniques like incremental learning and meta-learning (Hutter et al. 2019), as well as hybrid systems that combine automated adjustments with human interventions to enhance operational viability. A particularly important mechanism for adaptation is reinforcement learning from human feedback (RLHF) (Christiano et al. 2017), which enables agentic AI systems to evolve based on ongoing user interactions (OpenAI 2025a). Over time, the LAM can integrate corrections, thereby improving its resilience and robustness to further interface change. Moreover, centralized architectures, as opposed to local agentic systems, amplify these benefits by propagating learned adjustments across all users, fostering collective intelligence and system-wide efficiency. Another critical mechanism underpinning the adaptability of agentic AI systems is exploratory learning, wherein models proactively engage in self-improvement (Wang et al. 2024). By autonomously investigating and interacting with IT and digital resources, such systems can assess their capacity to execute tasks effectively. This process involves systematically experimenting with multiple action plans, evaluating outcomes, and refining strategies to optimize performance, thus limiting harm to users. Through such adaptive strategies, agentic AI systems enhance their ability to navigate complex and evolving real-world environments.
3 Large Action Models’ Role in Programmatic Orchestration
LAMs offer a novel architecture for programmatic orchestration. Orchestration is the purposeful assembly of elements and components into a designed artifact. Programmatic orchestration occurs when this assembly, enabled by the ability to meaningfully integrate the functionalities of IT and digital resources into a cohesive value proposition or digital strategic initiative (Piccoli et al. 2022), takes place at run time without human intervention. Classic examples of initiatives architected as orchestrations of IT and digital resources are ride hailing or grocery delivery services like Uber and Instacart (Li et al. 2022).
As a class of generative AI models, LAMs elegantly manage ambiguous tasks by inferring human intentions and adapting to unexpected use cases or task patterns. At the same time, with their ability to navigate human-centric interfaces (e.g., GUIs), they expand the pool of resources available for orchestration via action triggering beyond the limited set of digital resources expressly architected for programmatic orchestration. Because they are trained on data from human activity in computer applications, LAMs match, or even exceed, the wide-ranging applicability of existing architectures and approaches. LAMs can operate IT assets (i.e., software programs, databases) that are not intentionally exposed by their designer to programmatic orchestration. Examples abound, including mainstream websites (e.g., Airbnb.com), apps (e.g., Uber), and internal organizational resources (e.g., customer databases). As part of agentic AI systems, LAMs can trigger the execution of tasks without having to deterministically specify the workflow and resources needed. LAMs dynamically adapt the orchestration of software applications necessary to effectively address multistep tasks. Unlike previous approaches to programmatic orchestration, LAMs deal with uncertainty and changing circumstances, enabling reliable integration of an expansive array of digital resources, databases, and end-user solutions.5
An established approach to programmatic orchestration of IT resources is RPA, which is a class of software programs used to connect traditional IT resources, such as legacy software applications or information repositories (Wade and Hulland 2004), into automated workflows. RPA automates “tasks that have clearly defined rules for processing structured data to produce deterministic outcomes” (Lacity and Willcocks 2021, p. 170). The primary advantage of RPA is that it can theoretically be deployed on any software application because it appears to the application as if it were a human user with a login ID and password. Thus, any computing task that a human would perform is exposed to RPA execution. However, due to the complexity of creating automated workflows and their lack of robustness in the face of application changes (e.g., redesign or modifications of the user interface), RPA solutions are generally deployed to automate basic “swivel chair” chores performed by data entry clerks (e.g., form processing to update ERP systems). The RPA approach is thus limited by the need for rigid, ad-hoc step-by-step automations and by its inability to address uncertainty or ambiguity in the workflow and intended outcomes (Lacity and Willcocks 2021).
A potentially more powerful and scalable approach to programmatic orchestration entails the use of digital resources, a specific class of digital objects that a) are modular; b) encapsulate objects of value, assets, and/or capabilities; and c) are accessible by way of a programmatic interface (Piccoli et al. 2022). BPMS and Business Process Orchestration tools that access functionality via APIs to automate processes using rule-based algorithms (e.g., Flowable) leverage digital resources. Digital resources have unique structural characteristics. First, they are a type of digital object (Faulkner and Runde 2019) that can take the form of nonmaterial or hybrid digital objects encapsulated by a programmatic bitstring interface.6 Second, they abstract an organizational object of value, either an organizational asset or capability, recognizable as an organizational resource by a business user (Yefim et al. 2021). Digital assets are a subclass of digital resources that encapsulate either nonmaterial digital objects, such as a catalog of digital songs, or hybrid digital objects, such as a virtualized computing resource (e.g., AWS S3) or a smart device (e.g., Amazon’s Echo). Digital capabilities are repeatable patterns of organizational actions (Helfat and Raubitschek 2018) that yield the capacity to undertake activities that a firm can access programmatically through a digital interface (e.g., Stripe Payment).7 Digital resources are structured as modular components and enforce information hiding (Parnas 1972), whether they are internally modular or not (Piccoli et al. 2022). As software-abstracted modules, they are systematically reusable and re-combinable by external orchestrators strictly through their programmatic interface rather than a manual or physical interface like traditional IT assets (Piccoli et al. 2022).8 Moreover, just as with the MACH architecture, the orchestrator uses digital resources’ well-documented APIs for programmatic access and writes custom code to deliberately assemble disparate resources into a cohesive value proposition. Thus, programmatic orchestration with digital resources can go beyond the ad-hoc, step-by-step automations enabled by RPA. Nevertheless, a key limitation lies in the complexity of the required custom orchestration software. This complexity demands significant technical expertise, potentially constraining broader adoption and scalability within organizations.
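The following sketch illustrates this style of orchestration: the orchestrator writes custom code against the documented APIs of two digital resources. The endpoints and payloads are hypothetical stand-ins for services such as Stripe (payment) or Twilio (communication), not the actual APIs of those providers.

```python
# Sketch of digital-resource orchestration via documented APIs.
# Endpoints, payloads, and the order structure are hypothetical.
import requests

def fulfill_order(order: dict) -> None:
    # Charge the customer through a payment digital resource.
    requests.post("https://api.payments.example/v1/charges",
                  json={"amount": order["total"], "currency": "usd"},
                  timeout=10).raise_for_status()
    # Notify the customer through a messaging digital resource.
    requests.post("https://api.messaging.example/v1/messages",
                  json={"to": order["phone"], "body": "Your order is confirmed."},
                  timeout=10).raise_for_status()

fulfill_order({"id": "A-123", "total": 4900, "phone": "+15550100"})
```

Note how each functionality must be deliberately wired by hand; this custom glue code is precisely the complexity that LAM-based orchestration promises to absorb.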
While underpinning the technical and financial success of platforms over the last decade (S. G. Benzell et al. 2024), programmatic orchestration through digital resources is restricted to assets and capabilities purposely exposed through their programmatic bitstring interface by the resource owner. Programmatic orchestration architectures that leverage LAMs not only lower the barrier to widespread resource reuse and recombination but also, importantly, enable the programmatic orchestration of IT assets or applications even when their developers and owners did not explicitly allow for programmatic access by publishing an API or other structured programmatic interface (OpenAI 2025a). This unique feature of LAMs sets the stage for a disruption of the current status quo: a form of consent-less orchestration enabling value co-creation without volition. The owners of commercially successful agents, acting on behalf of customers, would be able to leverage resources they do not own or control. Resource owners, in turn, would not have a technical solution for denying access. This disruption is similar to the emergence in the late 1990s of screen scraping software that enabled consolidation of individual financial information across institutions (Evans and Wurster 1997; Yodlee 2017). Another historical parallel form of value co-creation without volition is that of the Google search engine. As Google aggregated consumer search queries over time, content providers (e.g., newspapers) were forced to optimize their material for searches. The use of the robots.txt file to accommodate Google’s web crawlers is a tangible example of such compliance. As LAMs gain popularity, we expect programmatic orchestration and consent-less programmatic resource utilization to create both challenges and opportunities for organizations. An early example is NLWeb, recently introduced by Microsoft to facilitate the exposure of websites and digital resources to orchestration by agentic AI systems. NLWeb is compliant with Anthropic’s Model Context Protocol (MCP), an open protocol that standardizes how applications provide context to generative AI models such as LLMs and LAMs.
4 Challenges and Future Research Directions
LAMs require close research scrutiny given their role at the core of agentic AI systems and the acceleration they may bring to programmatic orchestration. While major corporations and private research labs have been pushing the development and implementation of LAMs, the BISE community is best positioned to investigate critical questions of strategy and competition; individual, organizational, and societal impact; as well as the ethical and regulatory implications of agentic AI systems.
4.1 Strategy and Competition
Because of their capacity to interface with any IT resource, data repository, or application that has a GUI, LAMs enable programmatic orchestration on an unprecedented scale. Unlike orchestration of digital resources, where a programmatic interface designed and controlled by the resource creator exposes specific functionalities to function calling, LAMs may enable programmatic access to any of the functionalities the application or database exposes to credentialed (human) users. What is the impact of co-creation without volition practices on individual organizations and the competitive environment? The accepted definition of value co-creation implies the willing contribution of resources by customers and value network partners (Kohli and Grover 2008; Vargo and Lusch 2008), such that organizations involved in value co-creation engage in “robust collaborative relationships” (Kohli and Grover 2008, p. 28). With LAMs, only user permission is required for credentialing with the target resources. While many organizations have implemented proactive measures to protect their resources by adopting the Robots Exclusion Protocol (REP) via the use of robots.txt files, the effectiveness of such measures is contingent upon the good faith and voluntary compliance of orchestrators. The resource owner thus has only limited technical mechanisms to prevent programmatic orchestration, including approaches like CAPTCHA challenges, device fingerprinting, and, more recently, the enforcement of robots.txt policies at the network level. Importantly, despite technical obstacles, strategic pressure stemming from customer adoption may force resource owners to comply with agentic AI systems’ access requirements. Co-creation without volition raises several strategic questions for technology companies and for any service organization (e.g., Delta Airlines). Are there governance mechanisms that enable the resource creator to maintain control over the type of access to its resources? Such governance mechanisms must be scalable (i.e., enacted through software) and acceptable to users who want to enable or deny agents the ability to act on their behalf.
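To illustrate the good-faith nature of REP compliance, the sketch below uses Python’s standard urllib.robotparser to check whether an agent may access a path; the robots.txt content and the agent user-agent string are hypothetical. Nothing prevents a non-compliant orchestrator from simply skipping this check.

```python
# Sketch of Robots Exclusion Protocol (REP) compliance checking using
# the standard library. The robots.txt rules and the "ExampleLAMAgent"
# user-agent are hypothetical.
from urllib import robotparser

robots_txt = [
    "User-agent: ExampleLAMAgent",
    "Disallow: /checkout/",
    "",
    "User-agent: *",
    "Allow: /",
]

parser = robotparser.RobotFileParser()
parser.parse(robots_txt)

# A compliant orchestrator checks before acting; compliance is voluntary.
print(parser.can_fetch("ExampleLAMAgent", "https://shop.example/checkout/"))  # False
print(parser.can_fetch("ExampleLAMAgent", "https://shop.example/products/"))  # True
```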
Commoditization is the greatest strategic challenge. Recent academic literature documents the growth potential of inverted firms (S. Benzell et al. 2022): digital resource creators whose services are programmatically orchestrated by complementors. These organizations benefit from data-driven operations, instant release practices, and swift transformation skills (J. Huang et al. 2017). LAMs may threaten their business models. In the financial screen-scraping scenario discussed earlier, the emergence of aggregators like Plaid, Mint, and Yodlee contributed to the commoditization of incumbent banks (Ayers and Bhattacharyya 2021). Substantial academic literature demonstrates how aggregators of demand and digital marketplaces commoditize suppliers (Qiu et al. 2017; Täuscher and Laudien 2018). It is unclear what kinds of barriers to erosion (Piccoli and Ives 2005) are available to owners of IT assets and digital resources when programmatic orchestration via LAMs scales.
4.2 Individual, Organizational, and Societal Impact
At the individual level, agentic AI systems pose several challenges and risks. As models improve, they are likely to automate tasks traditionally performed by humans, leading to displacement in sectors like manufacturing, logistics, and customer service. Considering the current path of technological progression, it is likely that such systems will replace routine and middle-skill jobs, potentially exacerbating the divide between high-skill, high-paying jobs and low-skill, low-paying ones. Recent research highlights that 9% of jobs in OECD countries are at high risk of automation (Arntz et al. 2016), with this figure rising to 56% when considering total employment within the ASEAN-5 region (Orozco et al. 2017). What are the policy interventions needed to manage this transition? To what extent can reskilling and upskilling initiatives effectively offset job losses? The European Parliament previously debated and rejected the implementation of a robot tax, a mechanism designed to tax services provided by robots or their maintenance. While economists will play a role here, the BISE community must contribute rigorous estimation and measurement approaches to evaluate and forecast job displacement trends. It must also help envision new professions and design sociotechnical work environments and their management (Constantiou et al. 2023).
The decision-making capabilities of agentic AI systems may impact individual autonomy, critical thinking, and social engagement. The impact of social media in shaping opinions, beliefs, and behaviors is now coming into focus (Van Alstyne 2024). Agentic AI systems could further erode human agency in decision-making as well as intermediate human access to and engagement with sources of information and news. Research is needed to proactively understand how agentic AI systems influence individual autonomy and critical thinking, including the extent to which they may shape opinions and behaviors. As agentic AI systems supersede simple chatbot functionalities and increasingly perform actions in the real world, they may exacerbate the global rise in emotional isolation and loneliness (Gallup 2024). How does the use of agentic AI systems contribute to increased emotional isolation and loneliness? Are there agentic AI systems design principles that promote social connection and activity?
At the organizational level, the impact of LAMs, and programmatic orchestration more generally, may require organizational design and internal structure adjustment, providing fertile ground for future inquiry by scholars in the sociotechnical tradition. Firms engage in programmatic orchestration, seeking to increase their agility, responsiveness, and speed to market (Kaganer et al. 2023; Piccoli et al. 2024). When embracing a modular business architecture grounded in digital resources, these firms have been shown to reduce complexity and improve coordination (Greeven et al. 2021). How do LAMs contribute to agility? What types of managerial and governance mechanisms serve as sociotechnical enablers for increased programmatic orchestration of a firm’s internal resources?
More radically, recent work has conceptualized the intriguing notion of digital enactment (Constantiou et al. 2023). As organizations digitally transform, they gravitate away from the traditional conceptualization of organizations as human interpretation systems (Daft and Weick 1984), showing “a progressive replacement of humans by digital technologies in performing an organization’s fundamental activities underpinning the processes of scanning, interpretation, and learning that encompass an organization’s interaction with its environment” (Constantiou et al. 2023, p. 1770). In this view, the very nature of management is called into question. High Frequency Trading (HFT) has offered an early test-case where programmatically orchestrated agents access resources (e.g., news, trade data, company financials), interpret information (e.g., generate risk profiles) and complete tasks autonomously (e.g., trade stock). While HFT is a unique industry, where nanosecond decision delays matter and transactions are fully digitized, there are growing examples of digital enactment enabled by programmatic orchestration. Global scale digital marketplaces, such as Facebook or YouTube, enable advertisers (e.g., mobile game designers) to simply specify a budget and a goal (e.g., number of game downloads). The digital marketing activities (e.g., data collection, cohort target selection, impression management), and increasingly the AI-generated creative design of the ads, are digitally enacted by cooperating algorithms. As LAMs continue to evolve in sophistication and responsiveness, what industries are most likely to experience change first? How should organizational design adapt to this degree of programmatic orchestration? How does the role (and meaning) of management change in this context? What competencies become scarce, and therefore more valuable, in organizations?
Finally, agentic AI systems promise significant benefits for society, such as improving labor productivity (Noy and Zhang 2023), addressing skill shortages (Brynjolfsson et al. 2023), and enhancing efficiency in industries like healthcare, education, and financial services (Alvarez and Jurgens 2024). However, their potential to operate autonomously introduces novel risks that demand thorough examination and proactive mitigation. Agentic failures can lead to unintended or harmful outcomes, particularly when these systems are trusted to perform complex tasks in the real world without adequate fail-safe mechanisms. Errors in decision-making, misinterpretation of ambiguous goals, or the systematic and algorithmic exploitation of loopholes can result in significant harm. For example, an agentic AI system managing logistical operations might inadvertently optimize for speed at the expense of workers’ and drivers’ safety. Substantial research is needed to devise frameworks for mapping organizational incentives and technical reward functions in agentic AI systems. Protocols for safety testing and certification with reporting and analysis – such as those created by the International Civil Aviation Organization (ICAO) – are also required. To our knowledge, no such systems are currently in place for agentic AI systems.
Perhaps chief amongst societal concerns is the question of environmental sustainability. Generative AI models, including LAMs, contribute to the growing energy consumption of data centers. While the training of foundation models is notoriously resource-intensive, LAMs’ task-planning inference adds to the challenge. Moreover, if agentic AI systems deliver on the promise of useful programmatic orchestration on behalf of consumers, we can expect exponential usage growth. In response to these mounting energy demands, leading technology firms, such as Amazon and Microsoft, are turning to nuclear energy. When contextualized within the broader framework of the ongoing climate crisis, the proliferation of agentic AI systems requires research scrutiny and rigorous analyses of its impact on climate.
4.3 Ethics and Regulation
Agentic AI systems are heralded by large technology organizations (e.g., Salesforce, NVIDIA, Microsoft) as a new paradigm of end-user computing. Optimists see agents at the center of a wide range of transactions, from booking travel and stock trading on behalf of consumers to managing complex workflows involving advanced medical robotics and self-driving vehicle fleets (Alvarez and Jurgens 2024). Given the potential widespread applicability of these systems, however, ethical and regulatory concerns present significant open questions. The IS literature demonstrates that AI models can produce systematic deviations from equality in their outputs, so-called algorithmic bias (Kordzadeh and Ghasemaghaei 2022). The black box nature of generative AI models compounds the problem due to low algorithmic transparency (Lindebaum et al. 2020). For example, human resource managers deciding on disciplinary action were found to place undue trust in AI algorithms, even when they showed bias (Bartosiak and Modlinski 2022). Since LAMs learn from human–computer interactions, they may inherit or amplify biases present in the training data. Given the potential for harm, extending existing research to the realm of action models that trigger activities in the physical world is critical. The human-like behavior of agents may also blur the lines between human and machine agency, creating ethical ambiguities in accountability (Constantiou et al. 2023; OpenAI 2025a). If agents perform actions in the real world on behalf of customers, who is responsible for the consequences? What legal and ethical frameworks can ensure accountability for systems with autonomous decision-making capabilities? An example of the type of ethical dilemmas that await investigation is the so-called “Trolley Problem” as adapted to the context of autonomous driving (Awad et al. 2020). For LAMs, the issue is compounded because, unlike mechanical or electromechanical systems (e.g., a train), the emergent capabilities inherent in agentic AI systems make it difficult to impute “control” in action sequences.
Another important issue is the dark side of programmatic orchestration enabled by LAMs. While proponents point to their unprecedented potential for value creation (Zhang et al. 2024), systematic research is needed to evaluate unintended consequences and malicious behavior. With respect to unintended consequences, we must investigate intellectual property and the contractual implications of large-scale programmatic orchestration. Is consumer permission enough for agentic AI systems to engage in co-creation without volition? When online travel agents emerged in the late 1990s, the Global Distribution Systems (GDS) that served travel agents experienced exponential growth (Piccoli and Lloyd 2010). This meant a dramatic increase in resource usage (i.e., availability inquiries) without a comparable revenue increase, as the intermediaries were paid only for booked reservations. With control over the inventory (i.e., hotel rooms, airline seats), GDS could renegotiate contracts. It is unclear how a similar increase in algorithmic search and resource usage could be managed by organizations whose resources are being programmatically orchestrated by LAMs on behalf of customers. Are there optimal contractual structures that ensure incentive alignment between involved parties? Is regulation the appropriate approach?
With respect to malicious behavior or the use of dark patterns (Kollmer and Eckhardt 2023), the widespread adoption of programmatic orchestration opens a new vector of attack for bad actors. The literature shows how AI increases the sophistication of phishing campaigns by learning patterns in communication to generate tailored spear phishing messages at an unprecedented scale (Renaud et al. 2023). Agentic AI systems that can dynamically generate and execute malicious code and trigger real-world activity add a new dimension. Research is needed to determine whether traditional cybersecurity approaches are sufficient. What specific security challenges are unique to LAMs under adversarial attacks, data poisoning, or model extraction threats (Wallace et al. 2024)? Taxonomies of vulnerabilities, attack vectors, and mitigation strategies that incorporate LAMs’ distinctive characteristics represent a needed first step. Additionally, protocols must be established to assign liability when agentic systems cause harm, particularly in safety-critical environments (Awad et al. 2018). Addressing this will involve drafting policies for continuous anomaly monitoring, emerging vulnerability detection, improving model explainability and transparency, and ensuring compliance with evolving regulations like GDPR and the AI Act (Floridi and Cowls 2022).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Footnotes
1. The terms headless and software addressable refer to an application architecture that separates the presentation layer from the logic and data management layer, resulting in a software program that runs without an inbuilt user interface, with its functionality exposed programmatically via APIs, event-driven architecture, or other programmatic bitstring interfaces (Faulkner and Runde 2019). The concept is encompassed by the academic notion of digital resources (Piccoli et al. 2022).
2. MACH stands for Microservices, API-first, Cloud-native, and Headless. Composability is the term used by Gartner to refer to applications that abandon the “proprietary technology stack of the ‘monolithic approach’ that [traditional] ERP vendors deliver” (Faith et al. 2020, p. 3) in favor of modular solutions enabling the foundational administrative and operational digital capabilities of the organization to support assembly and reassembly of business processes and application experiences.
3. As a testament to their potential for massively expanding the scope of programmatic orchestration, machine learning and artificial intelligence are rapidly being incorporated into traditional BPM approaches as AI-enabled BPMS (ABPMS). As with the agentic AI systems discussed here, LAMs can serve as essential enablers of the AI-augmented business process management systems articulated in the recent ABPMS research manifesto (Dumas et al. 2023).
4. The target environment of a LAM is the context or setting in which the model operates, receives sensory input, takes action, and interacts. It is the space where LAMs are designed to perform actions and execute tasks. Examples of environments are operating systems (e.g., iOS), software applications (e.g., Microsoft Word), Graphical User Interfaces (GUIs), or robotic systems (e.g., Spot by Boston Dynamics).
5. The orchestration of novel resources and the need to adapt to their evolving characteristics may require iterative fine-tuning to optimize the models for specific applications and ensure sustained alignment with the LAMs’ environments.
6. APIs are specifications in the form of routine definitions, protocols, and tools for programmatic access to application functionalities or data. APIs are the most popular approach, albeit not the only one, for implementing the interface of digital resources.
7. Note that creating digital resources requires significant foresight and effort. For example, Stripe’s digital payment capabilities stem from relentless attention to design: “We believe that payments [are] a problem rooted in code, not finance. We obsessively seek out elegant, composable abstractions that enable robust, scalable, flexible integrations” (Stripe 2020).
8. Given their structural characteristics, digital resources are visible and usable exclusively as digital abstractions – irrespective of whether the resource is internally a nonmaterial or hybrid digital object. See Piccoli et al. (2022) for a comprehensive treatment of digital resources and digital strategic initiatives.
References
Alvarez F, Jurgens J (2024) Navigating the AI frontier: a primer on the evolution and impact of AI agents. World Economic Forum
Arntz M, Gregory T, Zierahn U (2016) The risk of automation for jobs in OECD countries (No. 189; OECD social, employment and migration working papers). OECD Publishing
Awad E, Dsouza S, Kim R, Schulz J, Henrich J, Shariff A, Bonnefon J-F, Rahwan I (2018) The moral machine experiment. Nature 563(7729):59–64
Awad E, Dsouza S, Bonnefon J-F, Shariff A, Rahwan I (2020) Crowdsourcing moral machines. Commun ACM 63(3):48–55
Bartosiak ML, Modlinski A (2022) Fired by an algorithm? Exploration of conformism with biased intelligent decision support systems in the context of workplace discipline. Career Dev Int 27(6/7):601–615
Benzell S, Hersh JS, Van Alstyne MW, Lagarda G (2022) How APIs create growth by inverting the firm. SSRN scholarly paper 3432591
Benzell SG, Hersh J, Van Alstyne M (2024) How APIs create growth by inverting the firm. Manag Sci 70(10):7120–7141
Brohan A, Brown N, Carbajal J, Chebotar Y, Chen X, Choromanski K, Ding T, Driess D, Dubey A, Finn C, et al (2023) RT-2: vision-language-action models transfer web knowledge to robotic control. arXiv preprint arXiv:2307.15818
Brynjolfsson E, Frank MR, Mitchell T, Rahwan I, Rock D (2023) Quantifying the distribution of machine learning’s impact on work. Forthcoming
Christiano PF, Leike J, Brown T, Martic M, Legg S, Amodei D (2017) Deep reinforcement learning from human preferences. In: Proceedings of the 31st international conference on neural information processing systems, pp 4302–4310
Constantiou I, Joshi M, Stelmaszak M (2023) Organizations as digital enactment systems: a theory of replacement of humans by digital technologies in organizational scanning, interpretation, and learning. J Assoc Inf Syst 24(6):1770–1798
Daft RL, Weick KE (1984) Toward a model of organizations as interpretation systems. Acad Manage Rev 9:284–295
Dumas M, Fournier F, Limonad L, Marrella A, Montali M, Rehse J-R, Accorsi R, Calvanese D, De Giacomo G, Fahland D, Gal A, La Rosa M, Völzer H, Weber I (2023) AI-augmented business process management systems: a research manifesto. ACM Trans Manag Inf Syst 14(1):1–19. https://doi.org/10.1145/3576047
Durante Z, Sarkar B, Gong R, Taori R, Noda Y, Tang P, Adeli E, Lakshmikanth SK, Schulman K, Milstein A, et al (2024) An interactive agent foundation model. arXiv preprint arXiv:2402.05929
Evans PB, Wurster TS (1997) Strategy and the new economics of information. Harv Bus Rev 75(5):70–83
Feuerriegel S, Hartmann J, Janiesch C, Zschech P (2024) Generative AI. Bus Inf Syst Eng 66(1):111–126
Floridi L, Cowls J (2022) A unified framework of five principles for AI in society. In: Machine learning and the city: applications in architecture and urban design, pp 535–545. https://doi.org/10.1002/9781119815075.ch45
García B (2022) Hands-on Selenium WebDriver with Java. O’Reilly, Sebastopol
Greeven MJ, Yu H, Shan J (2021) Why companies must embrace microservices and modular thinking. MIT Sloan Manag Rev 62(4):1–6
Huang Q, Wake N, Sarkar B, Durante Z, Gong R, Taori R, Noda Y, Terzopoulos D, Kuno N, Famoti A, et al (2024) Position paper: agent AI towards a holistic intelligence. arXiv preprint arXiv:2403.00833
Jia X, Blessing D, Jiang X, Reuss M, Donat A, Lioutikov R, Neumann G (2024) Towards diverse behaviors: a benchmark for imitation learning with human demonstrations. arXiv preprint arXiv:2402.14606
Kaganer E, Gregory RW, Sarker S (2023) A process for managing digital transformation: an organizational inertia perspective. J Assoc Inf Syst 24(4):1005–1030
Kim MJ, Pertsch K, Karamcheti S, Xiao T, Balakrishna A, Nair S, Rafailov R, Foster E, Lam G, Sanketi P, et al (2024) OpenVLA: an open-source vision-language-action model. arXiv preprint arXiv:2406.09246
Kollmer T, Eckhardt A (2023) Dark patterns: conceptualization and future research directions. Bus Inf Syst Eng 65(2):201–208
Kordzadeh N, Ghasemaghaei M (2022) Algorithmic bias: review, synthesis, and future research directions. Eur J Inf Syst 31(3):388–409
Lacity M, Willcocks L (2021) Becoming strategic with intelligent automation. MIS Q Exec 20(2):169–182
Li TC, Chan YE, Levallet N (2022) How Instacart leveraged digital resources for strategic advantage. MIS Q Exec 21(3):5
Lindebaum D, Vesa M, Den Hond F (2020) Insights from “The Machine Stops” to better understand rational assumptions in algorithmic decision making and its implications for organizations. Acad Manage Rev 45(1):247–263
Ma Z, Zhang J, Liu Z, Zhang J, Tan J, Shu M, Niebles JC, Heinecke S, Wang H, Xiong C, et al (2024) TACO: learning multi-modal action models with synthetic chains-of-thought-and-action. arXiv preprint arXiv:2412.05479
Noy S, Zhang W (2023) Experimental evidence on the productivity effects of generative artificial intelligence. Science 381(6654):187–192
Oquab M, Darcet T, Moutakanni T, Vo H, Szafraniec M, Khalidov V, Fernandez P, Haziza D, Massa F, El-Nouby A, et al (2024) DINOv2: learning robust visual features without supervision. Trans Mach Learn Res. https://doi.org/10.48550/arXiv.2304.07193
Ouyang L, Wu J, Jiang X, Almeida D, Wainwright C, Mishkin P, Zhang C, Agarwal S, Slama K, Ray A, et al (2022) Training language models to follow instructions with human feedback. Adv Neural Inf Process Syst 35:27730–27744
Piccoli G, Grover V, Rodriguez J (2024) Digital transformation requires digital resource primacy: clarification and future research directions. J Strateg Inf Syst 33(2):101835
Piccoli G, Ives B (2005) IT-dependent strategic initiatives and sustained competitive advantage: a review and synthesis of the literature. MIS Q 29(4):747–776. https://doi.org/10.2307/25148708
Piccoli G, Lloyd R (2010) Strategic impacts of IT-enabled consumer power: insight from internet distribution in the US lodging industry. Inf Manag 47(7–8):333–340
Qiu Y, Gopal A, Hann I-H (2017) Logic pluralism in mobile platform ecosystems: a study of indie app developers on the iOS app store. Inf Syst Res 28(2):225–249. https://doi.org/10.1287/isre.2016.0664
Renaud K, Warkentin M, Westerman G (2023) From ChatGPT to HackGPT: meeting the cybersecurity threat of generative AI. MIT Sloan Manag Rev, Reprint #64428
Schmied T, Adler T, Patil V, Beck M, Pöppel K, Brandstetter J, Klambauer G, Pascanu R, Hochreiter S (2024) A large recurrent action model: xLSTM enables fast inference for robotics tasks. arXiv preprint arXiv:2410.22391
Schneider J, Meske C, Kuss P (2024) Foundation models: a new paradigm for artificial intelligence. Bus Inf Syst Eng 66(2):221–231
Shah D, Osiński B, Levine S, et al (2023) LM-Nav: robotic navigation with large pre-trained models of language, vision, and action. In: Conference on robot learning, pp 492–504
Täuscher K, Laudien SM (2018) Understanding platform business models: a mixed methods study of marketplaces. Eur Manage J 36(3):319–329
Van Alstyne M (2024) Free speech versus free ride: navigating the Supreme Court’s social media paradox. Commun ACM 67(11):29–31
Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser L, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30
Wade M, Hulland J (2004) The resource-based view and information systems research: review, extension, and suggestions for future research. MIS Q 28(1):107–142. https://doi.org/10.2307/25148626
Wallace E, Xiao K, Leike R, Weng L, Heidecke J, Beutel A (2024) The instruction hierarchy: training LLMs to prioritize privileged instructions. arXiv preprint arXiv:2404.13208
Wang L, Yang F, Zhang C, Lu J, Qian J, He S, Zhao P, Qiao B, Huang R, Qin S, et al (2024) Large action models: from inception to implementation. arXiv preprint arXiv:2412.10047
Yang D, Tian J, Tan X, Huang R, Liu S, Chang X, Shi J, Zhao S, Bian J, Wu X, et al (2023) UniAudio: an audio foundation model toward universal audio generation. arXiv preprint arXiv:2310.00704
Yefim N, Dennis G, Mark O, Benoit L, Massimo P (2021) Innovation insight for composable modularity of packaged business capabilities (G00441575). Gartner
You K, Zhang H, Schoop E, Weers F, Swearngin A, Nichols J, Yang Y, Gan Z (2025) Ferret-UI: grounded mobile UI understanding with multimodal LLMs. In: Proceedings of the 18th European conference on computer vision, pp 240–255
Zhang J, Lan T, Zhu M, Liu Z, Hoang T, Kokane S, Yao W, Tan J, Prabhakar A, Chen H, et al (2024) xLAM: a family of large action models to empower AI agent systems. arXiv preprint arXiv:2409.03215