The first editorial of Electronic Markets appeared in September 1991, and now, with the journal’s move to the continuous article publishing model, the present editorial is the last regular one. What both editorials have in common is an emphasis on the relevance of standards for networked business. In the first editorial, the evolution of open communication standards was recognized as an enabler of open (electronic) trading systems, which were characterized by four elements: standardized communication channels, standardized market languages, electronic market services and applications (EM, 1991). Although common communication channels and common languages are preconditions for any communication, communication between computers faces specific challenges due to the electronic systems’ limited “intelligence”. This applies to the cognitive abilities of flexibly interpreting and contextualizing the content of communicated messages as well as of enhancing vocabulary and language skills in general through learning. With the renaissance of artificial intelligence (AI) technologies, the relationship between standardization and AI therefore becomes an interesting topic for this editorial.

Standards and EDI

The ambivalence of digital communication between computer systems is well known from the era of electronic data interchange (EDI). On the one hand, the electronic exchange of business information promises compelling benefits. In this direction, Brousseau (1994) reports that “Indeed the transmission of electronic information through a telecommunications network is about 10,000 times faster than and one-sixth as expensive as the physical transmission of a paper document by the postal service. Moreover, the manual handling of a paper document by the sender and the receiver is slow, expensive, and generates many mistakes” (p. 320). On the other hand, these potentials rely on working interoperability between the sending and the receiving system(s). Defined as “the ability of two or more systems or components to exchange information and to use the information that has been exchanged” (IEEE, 1990, p. 124), interoperability rests on a high degree of formalization. In particular, the data formats of the respective systems have to be aligned to allow bijective mappings (Brousseau, 1994, p. 321). This means that data must not only be present in a structured form, but also that these structures adhere to a compatible vocabulary regarding syntax (i.e., the grammar of a message) and semantics (i.e., the meaning of data in a message) (see Reimers, 2001; Legner & Vogel, 2008). The efforts to achieve interoperability are often substantial and need to be balanced against the potentials.
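
To make the idea of a bijective mapping between two data formats concrete, the following minimal Python sketch translates a flat message between two hypothetical in-house vocabularies; all field names are invented for illustration and do not stem from any specific EDI standard.

```python
# Minimal sketch: a bijective field mapping between two hypothetical
# in-house order formats (all field names are illustrative only).
SENDER_TO_RECEIVER = {
    "ORDER_NO": "purchaseOrderId",
    "CUST_ID": "buyerPartyId",
    "ORDER_DATE": "documentDate",
}

# The inverse must be unambiguous, i.e., the mapping has to be bijective.
RECEIVER_TO_SENDER = {v: k for k, v in SENDER_TO_RECEIVER.items()}
assert len(RECEIVER_TO_SENDER) == len(SENDER_TO_RECEIVER)

def convert(message: dict, mapping: dict) -> dict:
    """Translate a flat message from one vocabulary into the other."""
    return {mapping[field]: value for field, value in message.items()}

outgoing = {"ORDER_NO": "4711", "CUST_ID": "C-042", "ORDER_DATE": "2022-12-01"}
received = convert(outgoing, SENDER_TO_RECEIVER)
# Converting back yields the original message; agreeing on the *meaning*
# of each field (semantics) still requires a standard.
assert convert(received, RECEIVER_TO_SENDER) == outgoing
```

Even this trivial example indicates why the effort grows quickly: every additional partner format multiplies the number of mappings that have to be negotiated and maintained.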

Defined as “uniform set[s] of measures, agreements, conditions, or specifications between parties” (Spivak & Brenner, 2001, p. 16), standards are important enablers to reduce costs and to achieve compatibility among information systems (as well as between humans). By creating a shared understanding and agreed-upon definitions, they confine the possible solution space for specific standardization objects. A major challenge is that multiple “objects” need to be addressed for interoperability. Following the four-layer model of Kubicek and Cimander (2009), standards have mainly emerged for technical communication (e.g., as in the ISO/OSI reference model) and syntactical document objects (e.g., with business document standards such as UN/EDIFACT). While syntactic heterogeneity may be handled with existing converters, the problems of semantic and structural heterogeneity are more difficult to tackle (see Schmidt et al., 2010). In fact, most standards have emerged for various types of business documents (see the list by Kabak & Dogac, 2010) and only a smaller number for the higher layers, i.e., semantic and organizational business-logic objects. An analysis of 34 available semantic standards revealed that they varied considerably regarding their quality (i.e., in achieving interoperability) (see Folmer et al., 2011), and only recently Ostern (2020) named semantic standardization as one research direction for blockchain-based systems. The limited availability of semantic conventions was referred to as the “organization gap” (Kubicek, 1993) and emphasizes that interoperability for communication and coordination requires comprehensive standardization efforts. Brousseau (1994, p. 333) compared this with the telephone and stressed that typical EDI standards are rather a set of rules and codification principles than a technology like the telephone. For electronic coordination to work, technical as well as organizational and even social aspects need to be standardized, since the context in which messages are intended and understood must be aligned between sender and receiver. In the previous editorial on platform culture, it was ascertained that the same message may be interpreted differently depending on the social and cultural context (Alt, 2022b). The need for multiple standards has led to the terms “e-business stack” (Janner et al., 2008), “standard BPM stack” (Hündling & Weske, 2003) or “standards stack” (Sieber & Bloom, 2018) to denote an aligned set of standards. Such stacks may also be industry-specific (e.g., for the healthcare sector, see de Mello et al., 2022) or emerge for complex scenarios (e.g., for smart cities, see Lai et al., 2020). It may therefore not come as a surprise that, in view of the myriad of standards that have developed, computer scientist Andrew S. Tanenbaum stated in his seminal work on computer networks that “the nice thing about standards is that there are so many to choose from” (Tanenbaum & Wetherall, 2011, p. 702).

Other explanations for this “standards zoo”, besides the objects, are three additional dimensions of standardization. For the same object, standards may emerge for different communities and from various standardizing bodies. Community (box 2 in Fig. 1) captures where standards are applied: standards may be limited to a single organization (e.g., the standard of one public administration) or intended to be used by multiple organizations (e.g., the standard of the automotive industry). This scope of application may also differ geographically, with national (e.g., US), international (e.g., Europe) and global (e.g., UN, ISO) standards. Often, the community corresponds with the standardizing body (box 3), i.e., the (trusted) organization in charge of the creation and development of the standard (Reimers et al., 2019; Loebbecke & Huyskens, 2008). This may be a single organization (e.g., an automotive or IT company), an industry association or consortium (e.g., RosettaNet, GS1) or a public body (e.g., ISO, UN). Depending on the mandate and the body’s authoritative power, the standards might be recommendatory or mandatory in nature. Finally, standards vary depending on the process of standardization (box 4). In the restrictive form, standards are approved as norms following a defined procedure of workgroup meetings and approvals, or they are licensed, as in the case of open source software. For example, ISO foresees defined processes when national standardization bodies from over 160 countries collaborate to agree upon global standards in technical committees along various harmonized stages of development. An important committee for AI is subcommittee 42 of ISO’s joint technical committee 1 (JTC 1/SC 42), which to date has involved 36 participating members and published 16 standards on aspects such as terminology, bias, governance and trustworthiness, with 25 additional standards currently under development (ISO, 2022). In its looser form, the standardization process is less organized and occurs as an emerging practice. Examples are de facto standards (see Belleflamme, 2002), which become an institutionalized practice via widespread adoption, such as the DUNS number for business locations.

Fig. 1 Four dimensions of standardization (based on Huber et al., 2000)

Platforms and AI

Since its early days, EDI has been closely associated with the concept of digital platforms. Following the model of middleware systems, their multilateral hub topology facilitated the exchange of electronic documents between many parties. Referred to as clearing centers, these EDI platforms offered messaging, routing and mapping services. Over the years, some clearing centers were enhanced with centralized documentation (e.g., dangerous goods databases), matching (e.g., scheduling and booking) and settlement (e.g., payments and customs) services (see Alt & Zimmermann, 2015). Numerous such platforms are still operational in many industries, for example in most major sea- and airports worldwide, but also for processing documents between actors in the medical sector. Besides document standards, platform providers had a second important role related to standardization, which pertains to harmonizing data from various sources. This activity allowed product descriptions to be included in a centralized catalog or tracking data to be consolidated from the systems of various logistics service providers. A third link to standardization concerned processes and algorithms (e.g., dynamic pricing mechanisms, see Schwind et al., 2008) that ensured adherence to defined procedures on the platform. For example, listings had to follow neutral and unbiased criteria, as did the matching logic of bids in exchanges or auctions. Finally, standardization is relevant for modularity in digital platforms when standardized functional modules increase a platform’s flexibility (Um et al., 2013).
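
As a simple illustration of this harmonization role, the sketch below (with invented supplier formats, field names and units) consolidates product records from two sources into one catalog representation.

```python
# Sketch: harmonizing product records from two hypothetical suppliers
# into a single catalog format (all names and units are illustrative).
def from_supplier_a(record: dict) -> dict:
    return {
        "gtin": record["ean"],
        "name": record["description"].strip().title(),
        "weight_kg": record["weight_g"] / 1000.0,  # grams -> kilograms
    }

def from_supplier_b(record: dict) -> dict:
    return {
        "gtin": record["gtin13"],
        "name": record["productName"].strip().title(),
        "weight_kg": record["netWeightKg"],
    }

catalog = [
    from_supplier_a({"ean": "4006381333931", "description": "ball pen blue", "weight_g": 12}),
    from_supplier_b({"gtin13": "4006381333931", "productName": "ball pen blue", "netWeightKg": 0.012}),
]
# Records from different sources now share one schema and can be
# deduplicated on the standardized identifier (here: gtin).
```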

Many aspects of standardization have been researched in prior issues of Electronic Markets. A survey of the Electronic Markets archive yielded 214 papers related to standards and standardization. As depicted in Table 1, the first ten volumes comprised the most articles, although the latest three volumes show more hits per volume. Besides elaborating on the role of standardization for (open) electronic markets, the articles reveal the variety of standards across objects as well as industries and national communities. Over time, the focus shifted towards studies on standardization in practice and electronic exchanges as well as towards standardization in combination with new technologies, such as web services, mobile communications and semantic technologies. In addition, more complex application scenarios were observed, such as those inherent in ambient assisted living, smart cities, fintech or healthcare. To develop these scenarios and suitable business models, standardized methods and techniques were proposed in the field of business model tooling, which “convey standardized procedures and processes” (Schwarz & Legner, 2020, p. 438) and may be considered as additional standardization objects (see Fig. 1).

Table 1 Papers published on standardization in Electronic Markets

Although AI has enjoyed repeated attention only in more recent contributions, suggestions to apply semantic technologies date back to the first decade of Electronic Markets. Many of these ideas are associated with the mutual relationship between digital platforms and AI. In a past editorial, digital platforms were regarded as valuable data sources for AI and, in turn, AI was conceived as a valuable tool to be applied by digital platforms (Alt, 2021). Their role as data sources results from the hub topology, which yields rich data on the activity of all participants. Extracting this data and establishing rich data spaces (e.g., Otto & Jarke, 2019) with preprocessed data has become an important prerequisite for AI. It is key for AI since raw data is converted into meaningful data (i.e., information) only in a specific context (see Ackoff, 1989). As a tool for platforms, AI may contribute to platform processes and drive the automation of (intelligent) transaction processes as well as the performance of recommendation and conversational systems. Contrary to recommendation systems, where the services and their logic have remained largely internal, conversational systems have enjoyed broad attention in virtual assistants such as Amazon’s Alexa, Apple’s Siri or Google’s Assistant. This is reflected not only in rising sales figures for smart speakers and supported devices, but also in substantial investments whose growth expectations have not been met, resulting in recent layoffs (De Avila, 2022). On the one hand, these developments suggest that appropriate business models for such AI-based services are still needed. On the other hand, the decay of platform growth might indicate increased platform competition and limits of network effects (McAfee & Oliveau, 2002).

Two views on standardization and AI

The mutual relationship between AI and standardization could positively influence a platform’s competitive position. Again, two views may be distinguished: standardization may be regarded as a prerequisite and requirement for AI solutions, and AI may be regarded as a technology that improves tasks associated with standardization (see Table 2).

Table 2 Views on standardization and AI

Standardization for AI may be described along the three stages of input, modeling and output (Thiebes et al., 2021, p. 456). On the input side, data has a dual role since input data is relevant for training AI models and input data is transformed into output data by trained models. If this data is to be processed in a syntactically and semantically correct manner, then either preprocessing is necessary to achieve a standardized representation or source systems need to deliver data in a standardized format in the first place. Since the latter is unlikely due to the syntactic and semantic variety among the systems of different organizations, a standard interior semantic representation of knowledge is regarded as an important feature of intelligent computer systems (Golenkov et al., 2020, p. 6). Based on a hierarchical system of formal ontologies, such a standard aims to achieve semantic compatibility of various types of knowledge (e.g., facts, algorithms, processes, domain models, ontologies) and to integrate existing standards on AI. As the ISO/IEC JTC 1/SC 42 standards mentioned above illustrate, conventions are not limited to the uniform representation of data in a knowledge base (e.g., in a graph structure), but also address rules for collecting and preprocessing data. By determining which data (e.g., personal data) is allowed to be collected under which circumstances (e.g., after opt-in) and how it may be used for expanding the knowledge base (i.e., via training or learning) to avoid privacy intrusion or bias, such rules are regarded as an important element for trust in AI. Being transparent in this respect may prove to be an asset in the competition among service providers.
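
A minimal sketch of such an input-side preprocessing step is given below, assuming heterogeneous source records that have to be brought into one standardized representation before training or inference; the field names, date formats and the consent rule are hypothetical.

```python
from datetime import datetime
from typing import Optional

# Sketch: preprocessing heterogeneous input records into a standardized
# representation before training or inference (field names, formats and
# the consent rule are hypothetical).
def standardize(record: dict) -> Optional[dict]:
    # Rule-based filter: only records with an explicit opt-in may be used,
    # mirroring standards that govern which data may be collected.
    if not record.get("consent", False):
        return None
    # Syntactic normalization: one canonical date format.
    raw_date = record.get("date") or record.get("Datum")
    fmt = "%d.%m.%Y" if "." in raw_date else "%Y-%m-%d"
    # Semantic normalization: one canonical field vocabulary.
    return {
        "customer_id": record.get("cust_id") or record.get("kundennr"),
        "order_date": datetime.strptime(raw_date, fmt).date().isoformat(),
    }

records = [
    {"cust_id": "C-1", "date": "2022-12-01", "consent": True},
    {"kundennr": "K-2", "Datum": "01.12.2022", "consent": False},  # dropped
]
training_data = [r for r in map(standardize, records) if r is not None]
```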

On the model side, Thiebes et al. (2021) note that “AI models are responsible for translating input data into output data” and emphasize that “AI models themselves constitute an important form of data” (p. 457). Standards could help by defining which algorithms are applied (e.g., standard vs. contextualized models, see Bawack et al., 2022) and by ensuring that their behavior is transparent and traceable. While this appears feasible in systems that use rules (e.g., the AI-Trader proposed by Geihs & Farsi, 1997) and algorithms (e.g., advisory, matching or optimization systems), standardizing the behavior of systems based on machine learning (ML) and preventing risks, such as model uncertainty or model bias, remains challenging due to the limited ability to understand the inner functioning of AI models (“model opacity”). However, transparency could be achieved regarding the construction of the model and how it was developed (Golenkov et al., 2020, p. 14). In this respect, standards could ensure that models comply with privacy regulations such as the GDPR and, thus, demonstrate that they adhere to a privacy-by-design approach. Standardized audit trails are also seen as important for trustworthy AI (Avin et al., 2021). In combination with certifications, which are validated by independent external standardization bodies, they could serve as helpful guidance, especially in networked settings with many independent providers. The same applies when AI models are offered as products on information marketplaces (see Alt & Zimmermann, 2022) and certifications (e.g., privacy seals) provide orientation for buyers. More insight in this direction is included in the present special issue on trust in AI (Meske et al., 2022), with suggestions for standards to define the quality of explanations (Herm et al., 2022), for applying legal standards such as the GDPR (Dickhaut et al., 2022), for establishing transparency about data quality standards (Michalke et al., 2022) as well as for standards to collect, process, and use personal data in networked settings (Koester et al., 2022).
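
How a standardized, machine-readable audit trail for a model could look is sketched below as a minimal, hypothetical record of a model’s provenance; the field names are invented and do not follow any particular published standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch: a minimal audit-trail entry documenting how a model was built
# (field names are hypothetical and not taken from any existing standard).
def audit_entry(model_name: str, training_data_id: str, params: dict) -> dict:
    entry = {
        "model": model_name,
        "trained_on": training_data_id,   # reference to a documented data set
        "hyperparameters": params,
        "personal_data_used": False,      # e.g., a GDPR-related assertion
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    # A hash over the entry lets an external certifier detect later tampering.
    entry["checksum"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    return entry

trail = [audit_entry("recommender-v2", "catalog-2022-12", {"epochs": 10})]
```

Such entries could accompany a model when it is offered on an information marketplace and be validated as part of a certification.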

Finally, the output side denotes the data that is generated by AI-based systems (Thiebes et al., 2021, p. 458). Important motivations for regulation in the output stage are safeguarding copyright (e.g., when algorithms “recycle” protected content), preventing the misuse of data (e.g., in the form of deepfakes) and preventing the discrimination of users (e.g., against political views). Standards addressing this category are often high-level in nature and formulated as guidelines and principles derived from ethical values (see Berente et al., 2021 and the example of the AI Liability Directive, EC, 2022). For example, certifications could ensure that textual output is original and has not been artificially produced by systems like ChatGPT. Even less formalized are social standards, which influence how output is interpreted by users with diverse backgrounds and socializations. For example, national cultures as well as the cultures of interaction on specific platforms differ substantially, and the same output may be interpreted differently. AI systems, such as recommender systems, should cater to this heterogeneity and incorporate cultural factors (or standards) in their customization algorithms (Wan et al., 2022).

AI for standardization is the second view and characterizes the use of AI technology to improve activities in defining and applying standards. An early example is the set of approaches associated with the notion of “new EDI” (Steel, 1994; Lehmann, 1996), which were driven by the idea of reducing the substantial effort involved in negotiating the design and the exchange of electronic documents in the “old EDI” world. In the “new EDI” model, AI was intended to create ontologies from existing EDI terminology, and these ontologies would be negotiated directly between the participating systems to establish semantic compatibility. After this electronic onboarding procedure, messages could then be mapped and processed. Similarly, AI technology has been applied to identify patterns in transaction messages and to propose mappings for metadata, which are then confirmed or modified in the converters. Work in this direction has utilized data and process mining techniques to extract business information as well as to identify events and process instances in order to derive interorganizational processes and to calculate performance data (Engel et al., 2011, 2016). Using the ontologies and process structures as input data to train AI models that serve to continuously improve the ontologies could then create further benefits in automating the exchange of structured business messages. Another application area is compliance, where compliance management systems could include learning skills to check whether data structures or processes in information systems comply with defined standards (e.g., for tax compliance). Recent research has shown that this logic is also applicable to decentralized systems, where AI could “provide intelligent sanity checks in smart contracts to automatically identify non-compliance” (Fatz et al., 2019, p. 567) and enable “automated referee and governance mechanisms” (Pandl et al., 2020, p. 4).
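
The mapping-proposal idea can be illustrated with a deliberately simple sketch: here, a plain string-similarity heuristic from Python’s standard library stands in for a trained model and proposes candidate correspondences between two hypothetical schemas, which a human would then confirm or correct in the converter.

```python
from difflib import SequenceMatcher

# Sketch: proposing field mappings between two hypothetical message schemas.
# A string-similarity heuristic stands in for a learned model; proposals
# below the threshold are left for manual mapping.
source_fields = ["order_number", "customer_id", "delivery_date"]
target_fields = ["purchaseOrderNo", "buyerId", "requestedDeliveryDate", "incoterms"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def propose_mappings(source, target, threshold=0.5):
    proposals = {}
    for s in source:
        best = max(target, key=lambda t: similarity(s, t))
        score = similarity(s, best)
        proposals[s] = (best if score >= threshold else None, round(score, 2))
    return proposals

for field, (candidate, score) in propose_mappings(source_fields, target_fields).items():
    print(f"{field:15} -> {candidate} (score {score})")
```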

In summary, the relationship between standardization and AI already has some history. It shows that AI relies on standardization and that AI alleviates challenges inherent in standardization. If applied on a broader scale, past approaches suggest that AI technologies could help in meeting the intricacies of the business world, where the need for companies to differentiate themselves in competition as well as country and industry specifics have driven a large heterogeneity across the four dimensions of standardization. In fact, instead of striving for an unrealistic uniformity of standards across objects and communities, AI could help to align the diversity of existing structures more efficiently and to assure the reliability of standardized objects by means of compliance checks. It may be expected that these potentials pave the way towards more complex digitalization scenarios that not only include business partners and organizations (and their enterprise systems), but also individuals with their applications (i.e., apps). Digital platforms will adopt a key role in bringing data from diverse smart devices, enterprise systems and digital services together as a basis for valuable services. While this will often apply to structured data, advances in handling unstructured content (e.g., via text mining) will enhance these scenarios and open up additional opportunities (e.g., in customer interaction). In particular, this will be the case if a comprehensive approach to standardization is undertaken that covers hard (e.g., interfaces, message structures) as well as soft (e.g., guidelines, cultural issues) standardization objects. Certainly, future special issues and research articles in Electronic Markets will contribute in this direction.

Four special issues

This last regular issue of Electronic Markets comprises four special issues with a total of 16 research papers. They may all be linked to the topic of standardization and AI. The first is the special issue on standardization for platform ecosystems, which was organized by Geerten van de Kaa, Eric Viardot and Ian P. McCarthy. After the special issues on standardization in 2001 (issue 11/4) and 2005 (issue 15/4), it is Electronic Markets’ third special issue on this topic. Its four papers illustrate the forms of standardization and platform ecosystems, which are described in the guest editors’ editorial. To position the articles, they use a framework that differentiates whether an ecosystem and/or platform dominant design is present or not (van de Kaa et al., 2022). The second special issue is titled “Smart cities and smart governance models for future cities”. It may be seen as an example of complex digitalization scenarios and denotes a field of application where standardization is highly relevant. This is confirmed by Eremia et al. (2017), who state that, “Taking into consideration the large number of domains associated with the smart city concept, standardization in this domain is a major challenge” (p. 19). The guest editors Ilja Nastjuk, Simon Trang and Elpiniki I. Papageorgiou have taken up this challenge. They have compiled three papers that discuss ways of information exchange and communication between citizens and representatives of the public sector as well as the role of AI in smart government models and in applying human-centered AI for smart cities (Nastjuk et al., 2022).

More research on advances in AI is included in the nine papers of the two special issues on AI. The first relates to the topic of trust in AI and was organized by Roman Lukyanenko, Wolfgang Maass and Veda C. Storey. In their comprehensive preface, the guest editors present an in-depth review of the literature on trust in AI and develop the “Foundational Trust Framework”. With its nine propositions, this framework allows a deeper understanding of the nature of trust in AI to be established and serves to position the three research papers included in the special issue (Lukyanenko et al., 2022). The second special issue comprises research from the minitrack “Explainable artificial intelligence (XAI)” at the Hawaii International Conference on System Sciences and is titled “Explainable and responsible artificial intelligence”. The chairs of this minitrack, Christian Meske, Babak Abedin, Mathias Klier and Fethi Rabhi, were able to accept six papers after additional reviews for Electronic Markets. These papers relate to the model stage in the first view introduced above (“Standardization for AI”) and present research that advances the understanding of black-box AI operations and discusses perspectives on how these insights could be communicated to users and other stakeholders (Meske et al., 2022).

General research articles

The special issue articles are followed by a large general research section with a total of 16 papers. The first is an overview article that closely connects to the two special issues on AI and discusses the relationship between AI and ML. Following Electronic Markets’ Fundamentals format, the authors Niklas Kühl, Max Schemmer, Marc Goutier and Gerhard Satzger review the relevant literature and propose a framework that serves to structure and categorize the two concepts. Using the example of rational agents, they show that ML is an important element in AI systems, but that AI systems are also possible without ML (e.g., in rule-based systems). This leads to a two-dimensional framework, which determines whether AI-based information systems employ ML or not and whether they are static or adaptive in their learning behavior (Kühl et al., 2022).

The second article relates to a specific aspect of AI, which is the concept of anthropomorphism. It signifies “the attribution of human characteristics to nonhuman beings or entities” and may be observed with voice assistants, chatbots, social robots and autonomous driving systems (Li & Suh, 2022). By conducting a descriptive literature review, the authors Mengjun Li and Ayoung Suh analyze a total of 55 research studies on AI-enabled technologies (AIET) and shed light on the variety of definitions as well as measurements of anthropomorphism in the literature. From their observation that most studies fall short of defining, conceptualizing and measuring anthropomorphism in the AIET context, they formulate 14 recommendations for the operationalization, the antecedents and the consequences of anthropomorphism as well as the appropriate research methods. Being aware that anthropomorphism might lead to positive as well as negative experiences depending on individual differences, the authors propose a framework that links the antecedents with the consequences of anthropomorphism depending on how anthropomorphism is conceptualized and operationalized.

One aspect of anthropomorphism pertains to the social features of chatbots, which behave in an emotional way to conform to the feelings of the human they interact with. In the case of the third paper, the authors Tao Zhang, Chao Feng, Hui Chen and Junjie Xian recognize that little research has been available on after-sales situations in which customers with negative emotions experience a failure of their product (or service) and aim to receive support in recovering from this problem. In online experiments, the authors investigate how a soothing effect may be obtained via two cuteness strategies: the chatbots adopted a whimsical strategy (e.g., entertaining or amused faces) on the one hand and a kindchen strategy (e.g., infantile behavior or baby faces) on the other. The results support the effectiveness of both strategies in soothing angry customers and suggest that the whimsical strategy is better received by male customers as well as by customers who are anxious about technology. In turn, the kindchen strategy was more suitable for female customers and customers who are less anxious about technology. Finally, the authors recommend considering such chatbots in first-level interactions with customers and forwarding (less negative) customers to human counterparts only in a second step (Zhang et al., 2022).

The fourth paper shifts the focus towards social media and privacy on these platforms. Titled “Exploring interdependent privacy – Empirical insights into users’ protection of others’ privacy on online platforms”, the authors Anjuli Franz and Alexander Benlian pursue a multi-level view on sharing personal data that is more realistic than existing bilateral views. Instead of assuming that data is only shared between an individual and a company (e.g., the social media platform), it recognizes the networked nature of these platforms. For example, information on other users is disclosed when the contacts within a user’s network are shared or when users provide information about other users. The authors explain that current data protection regulations like the GDPR fall short of addressing these problems. For this reason, they propose the introduction of specific privacy nudges that provide information on which data should be shared and how, and that require users to confirm that they have their contacts’ consent to share the data. In their experiment on Instagram, the authors find that the implementation of this opt-in nudge decreased the disclosure of others’ personal information by 62%, which leads them to recommend such measures to regulators and platform operators (Franz & Benlian, 2022).

The question of whether information systems comply with legal regulations leads to the fifth general research paper. Using the example of smart personal assistants, the authors Ernestine Dickhaut, Mahei Manhai Li, Andreas Janson and Jan Marco Leimeister investigate how knowledge from the legal profession can be brought to system developers in the technological domain. To bridge both worlds, design patterns are proposed that embody the legal regulations and make this knowledge accessible to developers. Besides guiding developers, these patterns are instruments to determine the lawfulness of IT artifacts and also allow external parties to understand the procedure and the details of complex IT artifacts. As argued by the authors, this proves especially helpful for novel technologies like smart personal assistants, where dedicated legal requirements often emerge only with a time lag due to their novelty and where legal problems, such as privacy breaches, could thus be avoided. The usefulness of the proposed approach to lawful system development is shown in a case study based on real-world legal cases and a simulation of legal disputes arising from user complaints that had to be clarified in court (Dickhaut et al., 2022).

Another novel technology that has been associated with privacy risks is at the heart of the sixth paper. In this case, connected cars are discussed as a representation of the internet of things, collecting a rich amount of car-, driving-, context- and user-related data. On the one hand, this data enables services such as driving style analytics; on the other hand, it creates important privacy risks, which users might perceive differently. Against this background, the authors Nils Koester, Patrick Cichy, David Antons and Torsten Oliver Salge aim to understand the determinants, consequences, and contingencies of these perceived risks and their influence on users’ decisions to disclose data. Based on 33 interviews and survey data from 791 car drivers in Germany, they present an overview of the negative consequences that users of connected cars are concerned about. In total, 15 car-related privacy risks are listed and clustered along seven dimensions together with the associated privacy-invasive practices. Possible measures are proposed for businesses and policymakers, among others a plea for industry standards for handling privacy data and their demonstration in the form of externally validated privacy seals (Koester et al., 2022).

One of the factors that determine privacy risks are security breaches. In this case, systems are accessed without authorization by criminals with the intent to manipulate the system’s behavior and/or to obtain data that is then used for fraudulent purposes. The financial consequences may be severe, as shown by IBM’s annual report on the cost of data breaches, which calculates the global average total cost of a data breach at 4.35 million USD (IBM, 2022). However, calculating these costs might be difficult since factors such as reputation are not directly amenable to quantification. An approach chosen by many researchers analyzes how the stock market reacts after a company has announced the occurrence of a security breach. In the seventh paper of the general research section, the authors Sepideh Ebrahimi and Kamran Eshghi present an overview of 63 of these prior studies, which together cover over 20,000 such announcements. Their meta-analysis leads to six main findings (e.g., function-related security breaches cause even larger losses than data-related security breaches), which are linked with contributions to practice and existing as well as future research (Ebrahimi & Eshghi, 2022).

Another systematic analysis of prior research is included in the eighth paper for the domain of open government data (OGD). The authors Bernd W. Wirtz, Jan C. Weyerer, Marcel Becker and Wilhelm M. Müller conduct a literature review of 169 empirical research contributions and assert that the field of OGD still lacks conceptual clarity across the diverse elements that are relevant in highly networked digital economies and ecosystems. Building on their literature analysis and prior research in the fields of open government and open data, they develop a framework for open government data that serves to establish OGD as an independent research stream. This framework features antecedents (i.e., drivers and barriers), decisions (i.e., adoption, use and implementation) and outcomes (i.e., success, performance and value as well as acceptance, satisfaction and trust) as core elements, which are framed by general conceptual development and institutional factors, such as governance and the regulatory setting. Beyond categorizing existing research, the authors conclude that the framework contributes by “showing what issues may be studied and how they are related” (Wirtz et al., 2022).

Research from the government sector is also presented in paper nine. The authors Cheng-Kui Huang, Shin-Horng Chen, Chia-Chen Hu and Ming-Ching Lee analyzed the adoption of mask-supply information platforms (MITP) during the Covid-19 pandemic in Taiwan. In an attempt to provide open data regarding face mask inventories in Taiwanese pharmacies, some 130 such systems were implemented in a situation of great urgency. The authors conducted a survey among 524 participants in Taiwan to understand the determinants of using these platforms under these conditions. Among their findings was that existing adoption factors, such as ease of use and perceived usefulness, were still valid, but needed to be complemented with perceived threat. They conclude that people are more likely to use digital platforms if they feel that these services (e.g., where to find masks) are helpful in preventing the disease. This includes additional disease-related information provided via the platform and leads the authors to assume that their observations might provide insights for health information systems in general (Huang et al., 2022).

Further research on user behavior on digital platforms and the success factors of such platforms is presented in paper ten by Simon Michalke, Lisa Lohrenz, Christoph Lattemann and Susanne Robra-Bissantz. Emphasizing that the quality and convenience of services offered by digital platforms rely on the continuous engagement of their users, they coin the notion of engagement platforms. While large digital platforms, such as the Google Play store or YouTube, are recognized as cross-industry engagement platforms, the authors target platforms for personal services, such as craftsmen, cleaning or childcare services. During their research, the authors scrutinized four main activities, which they confirmed in interviews with representatives from 14 platform companies in German-speaking countries: easing the entry, identifying mutual problems and needs, supporting value co-creation, and facilitating service innovation. In addition, eight governance mechanisms (e.g., certification, transparency about quality standards) and related self-regulatory measures (e.g., moderation of content, use of common standards) were observed that aim to prevent possible harm or negative consequences for well-being and social welfare (Michalke et al., 2022).

The study on engagement platforms mentioned that the pandemic since 2020 could have influenced user behavior, as users may have avoided physical touchpoints, which are regarded as important for personal services. Paper eleven takes up this change in user behavior by analyzing the impacts of the pandemic on the adoption of mobile banking services. Authored by Muhammad Naeem, Wilson Ozuem, Kerry Howell and Silvia Ranfagni, this research investigates the motivations and experiences of users as well as providers of mobile banking services in Pakistan. Based on a data set of 93 online reviews and 40 interviews with customers as well as focus group interviews with 15 bank managers, the authors confirmed that health risks associated with physical interactions in branch offices led many customers to use mobile banking services. Evidently, these risks were perceived as higher than the financial risks and the skepticism towards unfamiliar online solutions. A framework with five processes (i.e., material, meaning, competencies, accessibility, context and situation) structures the identified factors for the adoption of mobile banking and provides recommendations for systems developers. In particular, the authors emphasize the role of service and accessibility standards, especially for developing countries where limitations regarding the network and legal infrastructure exist (Naeem et al., 2022).

Another contribution on the use of digital services is presented by Nicole Bulawa and Frank Jacob in paper twelve. It proposes a model that highlights how value in use emerges and that differs from existing models, which conceive these activities as a sequential process. Instead, value in use is conceived as a circular process consisting of eight activities that range from an initial trigger to the termination of using the service. In between, the process is described as dynamic, which mainly results from the sequential variety emanating from the personalization of service processes and the variations that may occur during longer use of these services. To understand the individual paths, the authors apply concepts from self-regulation, which conceive the process as an interaction between movements (“locomotion”) and decisions (“assessments”) depending on several dimensions (e.g., goal prioritization, resource suitability, usage intensity). The findings of this qualitative research were derived from 13 interviews in Germany in the domain of language-learning applications and led to several suggestions for service designers on their way to becoming more involved in consumers’ lives (Bulawa & Jacob, 2022).

The thirteenth general research article focuses on a specific aspect of service usage. Titled “Users taking the blame? How service failure, recovery and robot design affect user attributions and retention”, the authors Nika Meyer, Melanie Schwede, Maik Hammerschmidt and Welf Hermann Weiger conducted two experiments with humanoid robots in medical settings. The question was motivated by the fact that although failures of service robots need to be minimized, these failures are inevitable and impact the customer relationship. In particular, the established self-serving bias posits that successful interactions with service robots are attributed to oneself, but failures committed by the robots are attributed to the firm. From the experiments, several recommendations for service robot design are derived, since different designs have different implications for how users attribute service outcomes. For example, robots with warm (i.e., friendly and trustworthy) design features should be applied if service failures are recoverable, whereas robots with competent (i.e., purposeful and intelligent) design features should be used for successful service outcomes as well as for non-recoverable service failures (Meyer et al., 2022).

Paper fourteen analyzes service quality on online knowledge platforms. The authors Qingfeng Zeng, Wei Zhuang, Qian Guo and Weiguo Fan scrutinize the performance of so-called grassroots knowledge suppliers and how their characteristics affect the payment behavior of users. In contrast to expert suppliers, who are professionals in their respective field (e.g., doctors, lawyers), grassroots suppliers are non-professionals in that domain who nevertheless possess the expertise to answer questions on Q&A platforms, such as Quora or Stack Overflow. Since these individuals are typically less well-known and lack an expert status, their answers might involve more risk and users might be inclined to pay lower prices than for advice from experts. Based on 12,419 answers from 440 suppliers, the present research analyzes factors such as reputation, experience, authentication and usefulness to determine user payment behavior. The study reveals that content contributions as well as user interactions are more important than expert status and suggests that platform providers should encourage users to contribute high-quality content and support performant suppliers with more exposure (Zeng et al., 2022).

The general research section concludes with two articles on a future technology that is attributed the potential to profoundly affect and even disrupt digital business. At the outset is another Fundamentals paper, which introduces the constituting concepts of quantum computing. Authored by Roman Rietsche, Christian Dremel, Samuel Bosch, Léa Steinacker, Miriam Meckel and Jan Marco Leimeister, it sets out by deciphering the differences between classical computers and quantum computers. It describes a quantum computing system as consisting of three layers (hardware, system software, application) and as possessing advantages when applied to three specific problem types (search and graph, algebraic, simulation). Although possible use cases already exist, assessing the business implications of quantum computing remains difficult, which led the authors to conduct interviews with 21 experts. Among the four directions for future research that were extracted from these interviews are the shaping of quantum computing ecosystems with new opportunities for collaborating actors on the three layers, as well as the broader digital representation of business practices and economic behavior (“datafication”) to render them amenable to quantum computing (Rietsche et al., 2022).

Closely associated with the Fundamentals article is the second contribution on quantum computing. It is an interview with Heike Riel from IBM Research, who is one of the leading scientists in the field of quantum computing. Her views complement the Fundamentals paper in several aspects (Alt, 2022c). Linking to the distinction between gate-based quantum computers and quantum annealers, the interview shows how IBM has made progress regarding gate-based quantum computers. Beyond the mere number of qubits, the criteria of speed, scale and quality are introduced as important challenges that have to be addressed to further increase the performance of quantum computers. In addition, the interview emphasizes the advantages of these universal quantum computers for many application fields in the scientific as well as in the business world. This pertains to complex mathematical problems, which are relevant whenever comprehensive calculations, optimizations or simulations are required. These may be found in the material and natural sciences, as well as in simulations within the financial or manufacturing industries. The interview concludes with a critical assessment of the risks and an outlook on future expectations, such as quantum computing’s impact on encryption standards.

With this view into the future, this editorial closes a period of over thirty years in which Electronic Markets was published in quarterly issues. Starting from volume 33, the journal will move to the model of continuous article publishing (CAP), which was already announced in the editorial of issue 32/2 (Alt, 2022a). As explained in that editorial, the CAP model comes with several changes that streamline processes through immediate publication in the definitive form and accommodates the general move towards online-only publications. For authors, this means that online-first publications will no longer be necessary since all article details are already final when published online. Although regularly appearing editorials will also become obsolete with the disappearance of quarterly issues, editorial contributions may still be expected in the future, albeit no longer at a regular frequency.

What has remained, and hopefully will remain, a constant asset of Electronic Markets is the engagement and active support of its community of authors, editors and reviewers. Like all the extensive issues of volume 32, this last issue rests on many shoulders: the guest editors who organized the four special issues, the editors who handled the general research papers, as well as the reviewers and authors. Many thanks go to all of them!