Abstract
Boundary resources (BRs) designed to increase complementor satisfaction are essential for value co-creation in platform ecosystems, as complementors are not subject to the hierarchical control of a platform provider. Only satisfied complementors will engage in the development of innovative complementary solutions based on a platform. Ensuring this is no easy task for platform providers facing inter-platform competition, while existing research neglects complementor satisfaction as a target variable for platform providers to navigate the (re)design of BRs. To address this problem, the paper reports on an echeloned Design Science Research (DSR) project in the Industrial Internet of Things (IIoT) domain, which resulted in a method to align the (re)design of BRs with complementor satisfaction. Service quality techniques are used to obtain complementor feedback on the perceived quality of BRs, to assess issues and their impact on complementor satisfaction, and to prioritize measures that contribute most to complementor satisfaction, ultimately leading to increased complementor engagement. The method is theoretically grounded in the Expectation‐Confirmation Theory and draws on rich empirical data from the Siemens MindSphere ecosystem. It is instantiated as a prototype and evaluated in 12 focus groups. The method helps platform providers in the IIoT domain navigate the design of rich BR portfolios despite the dynamics of platform ecosystems in the IIoT, and advances research on platform governance.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1 Introduction
Innovation platforms are powerful engines for inter-organizational value co-creation. They use an “inverted” approach, making platform boundary resources (BRs) available to third parties for complementary innovation and generativity at the application level (Parker et al. 2017; Gawer 2020; Hein et al. 2020). This model also shapes the Industrial Internet of Things (IIoT), where IIoT platforms, as a subtype of digital industrial platforms offered by Siemens, Parametric Technology Corporation (PTC), or Microsoft, rely on BRs (Pauli et al. 2021, 2025; Arnold et al. 2022). BRs constitute a participation architecture designed to enable and foster complementary innovation and generativity, while maintaining control from the platform provider’s perspective (Ghazawneh and Henfridsson 2013; Svahn et al. 2017). In practice, application programming interfaces (APIs), development tools, documentation, and community events represent exemplary BRs (Ghazawneh and Henfridsson 2013; Dal Bianco et al. 2014). The design of BRs determines the development effort and innovation output of complementors (Wulf and Blohm 2020; Soh and Grover 2022; Gong and Li 2023). For example, well-designed APIs and clear information about them reduce the effort to connect a platform to enterprise and cyber-physical systems. Purposeful social BRs, such as developer documentation and community events, make it easier to understand the platform’s added value (Dal Bianco et al. 2014), while poorly understood development tools can reduce the ability to digitize and analyze factory processes (Petrik and Herzwurm 2020).
As the examples above indicate, and as discussed in more detail in Sect. 2, poorly designed BRs reduce complementor satisfaction, which in turn reduces their engagement and willingness to innovate (Engert et al. 2022, 2023). Since complementors act outside the hierarchical control of the platform provider, there is no guarantee that they will adopt BRs. Moreover, over time, unresolved sources of dissatisfaction with BRs may even lead complementors to multihome or switch to other platforms, despite some vendor lock-in, since the IIoT platform market remains fragmented (Lueth 2019; Chen et al. 2022; Pauli et al. 2021; Mosch et al. 2023). At the same time, the challenge of designing satisfactory BRs is particularly salient in the IIoT context. Compared to platforms in business-to-consumer (B2C) domains, such as video games, platform providers design large numbers of highly specialized BRs (Petrik and Herzwurm 2019; Arnold et al. 2022) to enable architecturally complex and specialized industrial applications (Pauli et al. 2021; Arnold et al. 2022). This architectural complexity and the less standardized, more individualized nature of relationships between platform providers and complementors in the IIoT (Marheine et al. 2021) require systematic and continuous monitoring of complementor satisfaction, coupled with continuous (re)design of BRs.
Existing research highlights the importance of BRs as powerful instruments of platform governance, recognizing that their design influences complementor engagement (Hein et al. 2019; Svahn et al. 2017; Soh and Grover 2022). Research has begun to shed light on different BR archetypes and performance effects of BR design (Wulf and Blohm 2020), design decisions (Gong and Li 2023; Wulfert et al. 2022; Wulfert 2023), and deployment approaches (Hein et al. 2019; Svahn et al. 2017). However, our understanding of how to systematically measure and improve complementor satisfaction in line with BR (re)design, especially in complex business-to-business (B2B) settings such as the IIoT, remains limited. With a few exceptions (Wareham et al. 2014; Svahn et al. 2017; Wulfert 2023), existing work has predominantly focused on higher-level governance mechanisms (Ghazawneh and Henfridsson 2013; Bender 2020; Zapadka et al. 2022; Hurni et al. 2021; Engert et al. 2022; Soh and Grover 2022). Prior platform research does not investigate how platform providers can keep complementors engaged and committed to innovation on their platform. As a result, platform providers lack empirically grounded and actionable approaches to make informed decisions about the ongoing (re)design of BRs based on complementor feedback (Petrik and Herzwurm 2020a, 2020b).
Following Sandberg and Alvesson’s (2011) advocacy for critical problematization, we not only identify the gap above but also link it to the idea, to our knowledge not yet realized, of a method for the continuous (re)design of BRs in line with the measurement and improvement of complementor satisfaction. Such a method addresses the so far neglected integrated view of BRs and complementor satisfaction in platform research, especially considering the peculiarities of IIoT platforms. We formulate the following research question: How can IIoT platform providers be supported in aligning the (re)design of BRs with complementor satisfaction?
This study employs the echeloned design science research (DSR) methodology (Tuunanen et al. 2024) to create an artifact with a strong focus on practical relevance (Gregor and Hevner 2013). To address the aforementioned problem, we consider a systematic method as a purposeful artifact type, due to the ability of methods to assist decision makers in goal-directed activities (March and Smith 1995). In particular, the presented method aims to support IIoT platform providers in aligning the (re)design of BRs with complementor satisfaction. The method design is theoretically grounded in the quality-satisfaction relationship (Oliver 1980; De Ruyter et al. 1997; Spreng et al. 2009), which is why service quality techniques are used to obtain complementor feedback and measure complementor satisfaction. The design of the method also draws on rich empirical evidence from the MindSphere platform developed and operated by Siemens in the IIoT domain. The method was instantiated as a prototype, and its relevance and applicability were evaluated with practitioners from 12 platform providers in the IIoT domain. These companies are considered potential users of the method.
Using the theoretical lens of satisfaction and its relationship to quality, we provide a method for the continuous (re)design of BRs by platform providers, which helps them cope with the dynamic and heterogeneous nature of IIoT platform ecosystems. To formalize this design knowledge and make it accessible, each of the six phases of the method includes design principles, followed by operational guidelines and an exemplary instantiation of a web-based application prototype. Our theoretical contribution is a nascent design theory at the intersection of BR and platform governance research streams. We advance the existing knowledge base on complex BR portfolios (Dal Bianco et al. 2014) and their management throughout the platform lifecycle (Hein et al. 2019; Wulfert 2023). Beyond the scope of existing work, our method is unique in that it sets complementor satisfaction as a goal of platform governance and provides prescriptive knowledge on how to achieve this goal.
The paper is organized as follows. In Sect. 2, we introduce relevant theoretical constructs in an integrated research model on BR design and complementor satisfaction. Section 3 describes the research design, which follows the echeloned DSR. Section 4 presents the designed artifact, our method, which we demonstrate through a prototypical instantiation in Sect. 5. Section 6 reports the results of the qualitative evaluation of the method's relevance to IIoT platform providers, its applicability, and the limitations of its use. In Sects. 7 and 8, we discuss implications, limitations, and future research directions, and conclude the paper.
2 Background and Related Work
2.1 The Integrated Research Model of Boundary Resources Design and Complementor Satisfaction
To design our method, we draw on and integrate existing knowledge on BRs from platform research, as well as on service satisfaction and service quality from marketing research. In their seminal work, Ghazawneh and Henfridsson (2013) propose a BR model, which centers on two interacting drivers behind BR design and use: resourcing and securing. Resourcing refers to the process of designing BRs in response to external contribution opportunities. In these cases, BRs are designed to enhance the scope and diversity of a platform. Securing refers to the process of designing BRs aimed at increasing the control of the platform. To cultivate a platform ecosystem, both drivers need to be balanced by a platform provider.
We contextualize the BR model of Ghazawneh and Henfridsson (2013) in the IIoT domain and extend it with constructs and relationships that address complementors’ perceptions of BR quality, with implications for complementor satisfaction and, further, complementor engagement. In doing so, we provide an integrated research model of BRs and complementor satisfaction, especially considering the specifics of IIoT platforms. This model, shown in Fig. 1, provides the conceptual basis for the proposed method. Furthermore, it shows the intended positive effects of the method on practice: the link between complementor satisfaction and BR (re)design, which in turn enables IIoT platform providers to guide the (re)design of BRs based on measured complementor satisfaction, ultimately leading to higher perceived BR quality. In the following Sects. 2.2 and 2.3, we explain the model with all its concepts and relationships in more detail (see Appendix A in the online supplement for a tabular overview of the relevant constructs contained in Fig. 1; available via http://link.springer.com).
Fig. 1
The integrated research model of boundary resources design and complementor satisfaction
2.2 The Role of Boundary Resources in IIoT Platform Ecosystems
Driven by technological advances in sensors, microprocessors, and connectivity, manufacturing facilities can be enhanced by information and communication technologies (ICT) and integrated into global cyber-physical networks (Oberländer et al. 2018). This transformation, referred to as IIoT, unlocks various value-added capabilities, such as the integration of data from information technology (IT) and operational technology (OT) to improve the efficiency of manufacturing operations or enable new business models (Yoo et al. 2010; Pauli et al. 2021; Oberländer et al. 2018). To realize these benefits, platforms are required as overarching integration middleware, forming an essential part of multi-layered architectures (Yoo et al. 2010; Constantinides et al. 2018; Pauli et al. 2021). Providers of such IIoT platforms develop extensible code bases of a software-based system that collect and integrate data from various industrial assets and devices. These code bases and additional technical support are provided to an ecosystem of third-party organizations that act as complementors and develop industrial applications as complementary solutions (Pauli et al. 2021; Tiwana et al. 2010). In contrast to other platform-mediated domains, such as mobile operating systems (Eaton et al. 2015) or e-commerce (Wulfert et al. 2022), IIoT platforms are characterized by a lower degree of standardization and the coordination of unique resources, such as industrial assets and data distributed along the value chain. This requires a close exchange between IIoT platform providers and complementors during the development of complementary solutions (Pauli et al. 2021; Stoiber and Schönig 2022), supported by BRs as contact points between the organizational boundaries of platform providers and complementors (Petrik and Herzwurm 2020c; Pauli et al. 2021).
IIoT platform providers offer the platform core and BRs as a service to complementors, who use them to provide specialized offerings (Hein et al. 2020; Petrik and Herzwurm 2020a). Thus, BRs are at the intersection of platform architecture and platform governance. Drawing on the concept of boundary objects (BOs), which are information objects that facilitate exchange, common understanding, and coordination among actors when knowledge boundaries exist (Star and Griesemer 1989), BRs help govern distributed development. Building on Ghazawneh and Henfridsson’s (2013) conceptualization of BRs, we adapt the notion for IIoT and define BRs as the software tools and regulations that serve as the interface for the individualized relationship between the IIoT platform provider and the complementor. Accordingly, they can serve multiple purposes. For example, with the help of BRs, IIoT platform providers can not only “open up” a platform to third parties and transfer design capabilities and knowledge to complementors, but also obtain powerful platform governance instruments (Eaton et al. 2015; Hein et al. 2020). Hence, BRs help to manage various tensions related to external contributions from complementors, such as the openness-control or stability-flexibility tensions (Ghazawneh and Henfridsson 2013; Wareham et al. 2014). The manifestations of BRs enable the development of complementary solutions, their interaction with a platform core, and foster knowledge transfer between the platform provider and complementors (Dal Bianco et al. 2014; Svahn et al. 2017; Foerderer et al. 2019). Specific BRs offered in the IIoT include, for example, support for specific communication protocols (e.g., OPC UA) and support for different cloud infrastructures to reduce the effort required for connectivity between industrial equipment and the platform (Petrik and Herzwurm 2019; Pauli et al. 2021).
Ghazawneh and Henfridsson (2013) recognize that BR (re)design is the act of platform providers to develop new or modified BRs in response to perceived external contribution opportunities and control concerns. In the case of the former, BRs then increase the scope and diversity of knowledge resources by providing access to new platform resources – referred to as resourcing (Ghazawneh and Henfridsson 2013). While BRs are essential for resourcing, they are simultaneously used to increase control over a platform and its services through a process referred to as securing. Similarly, in the IIoT, the (re)design of BRs takes place within the resourcing-securing paradigm. However, empirical studies show that in the IIoT, complementors tend to reject strict external control by the platform provider, which can be a reason to decide against joining an IIoT platform ecosystem (Petrik and Herzwurm 2020c). Instead, they tend to value the opportunities that BRs provide to support the outcomes of the complementary solutions or interorganizational knowledge exchange to increase mutual value creation (Soh and Grover 2022; Svahn et al. 2017; Foerderer et al. 2019; Petrik and Herzwurm 2020a; Pauli et al. 2025).
The portfolio of BRs is not fixed but evolves over time. Given the heterogeneity of complementors and their coopetition in the IIoT (Petrik and Herzwurm 2020c; Mosch et al. 2023), this continuous (re)design of BRs is a complex endeavor. BRs should always fit the needs of complementors, as well as the governance conditions set by the platform provider (Wulf and Blohm 2020; Gawer 2020). In some cases, complementors may even start to adapt BRs to their needs, which may conflict with the platform provider’s interests or disturb the balance in the ecosystem (Ghazawneh and Henfridsson 2013; Eaton et al. 2015; Karhu et al. 2018; Saadatmand et al. 2019). As a result, the (re)design of BRs is not always conflict-free, and changes during the lifecycle can be dialectical (Eaton et al. 2015). The need to manage these tensions highlights that the design of BRs affects the engagement of complementors (Wulfert 2023; Engert et al. 2023). This relationship, mediated by perceived quality of BRs and satisfaction, and theoretically grounded in the Expectation‐Confirmation Theory, is discussed in the next section.
2.3 Increasing Complementor Engagement Through Satisfactory Design of Boundary Resources
According to the Expectation‐Confirmation Theory, satisfaction is the result of a post-purchase judgment. Specifically, customer satisfaction occurs when perceived service quality meets or exceeds prior expectations, whereas disconfirmed expectations lead to dissatisfaction (Oliver 1980; De Ruyter et al. 1997). This judgment is based on reflective processes that influence customers’ future decisions. While the confirmation of quality expectations leads to repurchase intention, the product will not be repurchased if the customer’s quality expectations are not met. Therefore, quality is recognized as the most important antecedent of satisfaction (Oliver 1980; De Ruyter et al. 1997; Spreng et al. 2009). We assume an analogy between the concepts of customer and complementor satisfaction: complementors in IIoT platform ecosystems have certain expectations of BRs, and assess them by comparing the actual quality of BRs with their expectations. This assessment can be done after or during use (Oliver 1980). Therefore, we define the perceived quality of BRs as a complementor’s assessment of the overall superiority or excellence of BRs, formed after or during use and shaped by the complementor’s initial expectations (Oliver 1980; Zeithaml et al. 1988; De Ruyter et al. 1997).
In line with the literature on the quality-satisfaction relationship (Oliver 1980; Cronin and Taylor 1992; De Ruyter et al. 1997; Spreng et al. 2009), we assume that the perceived quality of BRs serves as the primary antecedent of complementor satisfaction. Therefore, while acknowledging possible other factors that ultimately affect satisfaction, such as vendor lock-in or envelopment (Jacobides et al. 2024; Eisenmann et al. 2011), we define complementor satisfaction as the complementor’s response to the evaluation of the perceived discrepancy between initial expectations and the actual quality of BRs (Tse and Wilton 1988). When the perceived quality of BRs matches or exceeds complementors’ prior expectations, similar to research on service quality, satisfaction increases, which translates into continued complementor engagement. This engagement includes the various ways and forms of using platform BRs as post-use cognitive consequences based on complementor satisfaction (Engert et al. 2022; Oliver 1980). Although complementor engagement leads to new complementary solutions and network effects (Engert et al. 2023; Schüler and Petrik 2023), it is not guaranteed because complementors act outside the hierarchical control of the platform provider. Only satisfied complementors are engaged complementors and are thus more likely to create complements, collaborate with other ecosystem participants, provide feedback to the platform provider, and promote the platform (Engert et al. 2022, 2023). Dissatisfied complementors, who perceive the BR design as insufficient to achieve their individual goals, are likely to be discouraged from investing further resources in using BRs and other forms of engagement. Therefore, designing BRs of satisfactory quality helps platform providers to engage complementors in the platform ecosystem (Engert et al. 2022; Weiss et al. 2023).
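The expectation-disconfirmation logic underlying this quality-satisfaction relationship can be illustrated with a minimal sketch. All names, rating scales, and the binary satisfaction threshold below are hypothetical simplifications for illustration, not part of the authors' method:

```python
# Illustrative sketch of the Expectation-Confirmation logic described above.
# Hypothetical assumption: expectations and perceived BR quality are rated
# on a common numeric scale (here 1-5); the paper does not prescribe this.

def disconfirmation(expected: float, perceived: float) -> float:
    """Positive values mean expectations were met or exceeded (confirmation);
    negative values mean expectations were disconfirmed."""
    return perceived - expected

def satisfaction_label(expected: float, perceived: float) -> str:
    """Simplified binary judgment: satisfaction arises when perceived
    quality meets or exceeds prior expectations."""
    if disconfirmation(expected, perceived) >= 0:
        return "satisfied"
    return "dissatisfied"

# A complementor expected API documentation quality of 4 but perceived 3:
print(satisfaction_label(4, 3))  # dissatisfied
# Perceived quality exceeding expectations confirms them:
print(satisfaction_label(3, 4))  # satisfied
```

In the theory, this judgment then feeds forward into engagement: confirmed expectations make continued use of the BRs (and further complement development) more likely, while repeated negative disconfirmation discourages further investment.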
Figure 1 illustrates this quality-satisfaction-engagement relationship through “inherent” effects, grounded in the underlying theory. In light of these effects, we consider complementor satisfaction as a purposeful target variable for platform providers to navigate the (re)design of BRs. However, existing platform research offers no methodological support for this. Taking this into account, our method aligns the (re)design of BRs with complementor satisfaction, which is determined by the perceived BR quality, ultimately helping platform providers to cope with the dialectical tensions associated with the tuning of BRs (Eaton et al. 2015) and to survive in the inter-platform competition for the engagement of innovative complementors (Svahn et al. 2017; Karhu et al. 2024). Thus, the effects enabled by the method in Fig. 1 are the link between complementor satisfaction and BR (re)design, which in turn enables IIoT platform providers to guide the (re)design of BRs based on measured complementor satisfaction – ultimately leading to higher perceived BR quality. The interaction of both effects – the inherent effects and the effects enabled by the method presented below – completes a beneficial cycle (Petrik and Herzwurm 2019, 2020a; Weiss et al. 2023; Zapadka et al. 2022), as illustrated in Fig. 1.
3 Research Design
Our artifact is a systematic and continuous method that aims to help IIoT platform providers navigate the (re)design of BRs and align them with complementor satisfaction. In general, a method represents a set of steps to achieve goals or solve organizational problems, using constructs and models to clarify a solution space (March and Smith 1995). Methods accumulate knowledge as operational principles, representing nascent design theories and Level 2 DSR contributions (Gregor and Hevner 2013).
The method presented here is the result of a larger research project1 involving iterations and intermediate artifacts (Baskerville et al. 2018). Among existing design genres (Peffers et al. 2018), our research best aligns with the design science research methodology (DSRM), which focuses on creating artifacts relevant to organizational problems. However, the progression of our research project deviates significantly from the linear nature of DSRM. At the beginning of the research project, empirical studies were conducted to generate sufficient knowledge about the understudied but complex IIoT domain, which is not comparable to B2C domains. The ongoing accumulation of design knowledge yielded intermediate artifacts that informed the design of a method with practical value for platform providers. Therefore, in order to communicate our non-linear research process adequately and increase transparency regarding the preliminary empirical studies and design decisions made, we use the organizing logic of the concept of “echelons” (Tuunanen et al. 2024). An echelon-oriented approach decomposes a larger problem into a hierarchy of logical subproblems. Solutions to subproblems serve as intermediate results that can be developed, validated, and communicated independently. In combination, the intermediate results contribute to the overall solution and provide legitimacy for non-linear and complex DSR projects (Tuunanen et al. 2024). In line with this, echeloned DSR allowed us to communicate the refinement of the design objective as we gained sufficient domain knowledge. Furthermore, the artifact presented in this paper is the result of four design and development echelons, which are empirically grounded in the precursor studies. This indicates an accumulation of knowledge that culminated in improved artifact designs and evolved into a systematic method.
Thus, the echelons encapsulate both the precursor work and the novel design knowledge presented in this paper, streamlining the complexity of nonlinear research and increasing the transparency of knowledge accumulation in the designed method (see Fig. 2).
In terms of problem analysis, we analyzed the knowledge base on BRs in the IIoT domain with the help of additional empirical grounding through expert interviews conducted in our own preliminary research (Petrik and Herzwurm 2020c). We reasoned why complementor satisfaction is a meaningful goal for platform providers in the IIoT, and why the lack of systematic approaches to guide the (re)design of BRs in line with complementor satisfaction is a problem for them. This is because complementors in the IIoT rely on trusted partnerships with platform providers and on the resourcing nature of the BRs offered. In line with this, the initial design objective was to significantly improve complementor satisfaction in IIoT platform ecosystems through BRs. To achieve this objective, we designed a representative set of BR types in the context of IIoT (Petrik and Herzwurm 2019). This set, which culminated in the initial design echelon, fostered our analytical clarity about which BRs are offered by IIoT platform providers, and which predominantly resourcing goals these BRs support from the complementor perspective. In addition, the set of BR types provides the foundation for the subsequent echelons and is ultimately part of the resulting method. To gain a sufficient understanding of the contextualized goals of each BR and to track their evolutionary development, the identified BRs were drilled down to instances at the level of individual updates (demonstration echelon). The empirical data obtained in this and other phases of the research process (see below) are based on a comprehensive, multi-perspective case study of the Siemens MindSphere ecosystem (evaluation echelon).
Siemens is a multinational conglomerate best known for its industrial technology business. To protect its vertical pipeline business from commoditization and to respond to competition in the area of digital IoT-based services (e.g., predictive maintenance), Siemens decided to build up software capabilities to harness more value in the digital era. As a result, the platform that was launched initially enabled internal software development for enterprise IoT applications and was later opened to complementors, enabling the creation of a partner ecosystem. Over the years, Siemens developed an extensive portfolio of industrial devices and in 2015 decided to develop its own IIoT platform, MindSphere. MindSphere is an open operating system designed for diverse industrial applications, with no restrictions on specific manufacturing processes. After a closed beta phase, the platform was made available to complementors in 2018. MindSphere was chosen as a representative IIoT platform to study BR (re)design for two reasons. First, Siemens has openly communicated its intention to build an ecosystem and has made visible efforts towards offering BRs, providing rich data for analysis. In fact, the number of MindSphere complementors grew steadily during the DSR project. Siemens needed to continuously improve its ability to design BRs for complementors, moving away from previous pipeline business models. MindSphere serves as a unique case for observing the design of BRs from the ground up, as Siemens has maintained the BRs for 8 years. Second, analyst benchmarks rank MindSphere as a leading IIoT platform, successfully competing with hyperscaler platforms such as Azure IoT or AWS IoT (Gartner 2021). In line with this, MindSphere’s relevance in industrial applications continues despite technological changes such as the migration of the platform core to the industrial edge, outlasting several competitors. 
Edge computing involves moving computing, storage and networking capabilities from the cloud to locations where data is generated, such as the industrial shop floor (Cao et al. 2020). Siemens Industrial Edge is a platform to work with industrial data near industrial devices. Siemens has even mirrored its design, including most BRs, to launch an edge platform and expand its platform offering. In the process, the cloud-based platform MindSphere has evolved into the Insights Hub.
The empirical analysis of BRs provided by Siemens (Petrik and Herzwurm 2019) made us realize that the complexity of BRs in the IIoT exceeds that of B2C domains, due to their greater number and interdependencies. It helped us to derive the organizational problem inductively, revealing the absence of, and justifying the need for, a BR (re)design method to increase complementor satisfaction as a target variable. Consequently, our refined design objective is to help IIoT platform providers navigate the (re)design of BRs and align them with complementor satisfaction. To achieve this objective, during the second design echelon, we created a measurement instrument for collecting complementor feedback to identify issues in perceived BR quality and their impact on complementor satisfaction. It is based on the satisfaction measurement techniques proposed by Töpfer and Mann (2008), satisfaction studies for enterprise software as defined by Hayes (2008), and our aforementioned set of BR types. During the subsequent demonstration echelon, we developed a questionnaire to systematically obtain feedback from complementors in the IIoT, applied here in a satisfaction study with MindSphere complementors (Petrik and Herzwurm 2020a, 2020b), resulting in an additional evaluation echelon. The survey was conducted between May 31 and December 4, 2019. The sample consists of 21 surveyed companies, including four hardware complementors (e.g., component manufacturers and automation suppliers) and 17 software and analytics complementors. The satisfaction study reached approximately 7% of the total MindSphere ecosystem (see Appendix B in the online supplement for details on the complementors surveyed).
For each question, the survey referred to the 15 BR types designed by MindSphere at the time of the survey. Methodologically, the survey combined two techniques for measuring satisfaction from Service Quality Management. The Critical Incident Technique (CIT) allowed for the understanding of memorable situations (Flanagan 1954) associated with the use of the BRs that are critical to complementor satisfaction. Additional questions and Likert scales were used to measure the self-reported importance of and satisfaction with the BRs, according to the SERVIMPERF technique (Töpfer and Mann 2008). Importance provided an additional perspective to measure the effectiveness of the platform provider’s quality improvement efforts targeted at individual BRs. In combination with CIT, respondents were given the opportunity to justify their rating, so that critical incidents had a validating effect on the quantitative ratings. The complete survey can be found in Appendix C of the online supplement. After surveying 21 complementors, we obtained 328 so-called sentiment reports, which contained 379 critical incidents across 15 BR types. We use this empirical data to illustrate the application of the designed method using exemplary BRs in Sect. 4. In particular, concrete quotes and figures from the empirical dataset show how to analyze feedback obtained from complementors.
Since the conducted survey suggested the effectiveness of the applied satisfaction measurement techniques, we have taken the design of the measurement instrument and embedded it in a comprehensive, multi-phased method (extended design). This intermediate artifact (Tuunanen et al. 2024) already essentially corresponds to the final method based on the last design echelon (refined design), as described in Sect. 4. In the last design echelon, we improved the consistency of the method and further detailed some activities based on the evaluation described in Sect. 6. Finally, we derived design principles (DPs) for each of the six phases of the method. The DPs follow the conceptual scheme of Gregor et al. (2020) to formalize the operating principles and make them actionable.
We instantiated the method as a web-based application prototype to demonstrate its technical feasibility and outline how it can be integrated into a platform provider’s natural context of software-supported platform governance (demonstration). We developed two instances of the demonstration echelon type. The first prototype was developed based on the extended design echelon. Along with the improvements made to the method (refined design), we also revised the prototype. Section 5 presents the revised prototype and explains, where appropriate, how it differs from the previous instance.
The last evaluation echelon aimed at a qualitative evaluation of the method and the prototype. To achieve a practice-based evaluation of the usefulness of the designed method (Tuunanen et al. 2024), we conducted focus groups with platform providers and complementors to capture experts’ views on the artifact. We chose focus groups because qualitative, contextual feedback from naturalistic users was best suited to capture the richness of experts’ insights in a moderated dialogue; given the novelty of the method, an actual implementation in platform companies would have required an extensive project with new hires (Tremblay et al. 2010). This corresponds to an ex-post naturalistic evaluation (Pries-Heje et al. 2008; Venable et al. 2012).
Following the goals of confirmatory focus groups in DSR (Tremblay et al. 2010) and applicability checks (Rosemann and Vessey 2008), the workshops aimed to evaluate the artifact against three criteria. The relevance to practice aims to evaluate the method’s effectiveness in a specific context and its potential to address the aforementioned problem of platform providers in the IIoT, which is a core principle of DSR (Mark et al. 2007). The applicability and boundaries aim to evaluate how the method can be applied in the real context of an IIoT platform provider, and under which conditions the effectiveness of the method is reduced. The evaluation continued until the feedback from practitioners began to converge, and no substantially new considerations arose for further redesign of the method. At that point, we assumed theoretical saturation, which led to a stopping condition. A total of 59 practitioners from 12 organizations participated in the evaluation (see Appendix D in the online supplement for details on the evaluation participants).
Prior to the workshops, participants received a summary of the method, which had been pre-tested by four academics with expertise in digital platforms. During the focus groups, we presented the concept of BRs, our problem statement, and the method on slides and then guided the practitioners through the phases of the method in the prototype. All statements made by the practitioners were recorded. These recordings were transcribed and subjected to qualitative content analysis. The transcripts were first open-coded to capture the depth of the experts’ statements. Connections between related statements were then established (axial coding) and assigned to the three previously mentioned evaluation criteria (selective coding).
4 Results
Our method aims to support IIoT platform providers in aligning the (re)design of BRs with complementor satisfaction. To this end, service quality techniques are used to obtain complementor feedback on the perceived quality of BRs, in order to assess the issues and their impact on complementor satisfaction. The method consists of six phases, each with specific activities. To provide prescriptive knowledge for other design endeavors, each phase is further formulated as a DP (Gregor et al. 2020). We begin by introducing each phase and the corresponding DP, grounding them in the relevant literature. We then illustrate the application of the DP with empirical examples from the precursory empirical studies. Consequently, real data help to understand how the method should be applied. Figure 3 provides an overview of the phases and the steps involved, while the following subsections describe them in more detail.
Fig. 3
Method for aligning (re)design of BRs with complementor satisfaction
4.1 Establish and Maintain Continuous Feedback Loops with Complementors
DP1:Establish and maintain continuous feedback loops with complementors, preferably lead complementors, to collect the necessary data to guide the evolution of the BR portfolio.
The first phase of the method aims to establish and maintain continuous feedback loops with complementors to collect the necessary data to guide the evolution of the BR portfolio. BRs are constantly evolving in complementors’ perception and change in importance and satisfaction potential over time. For example, BRs that are considered attractive platform attributes become commodities over time and do not generate the same satisfaction potential in the future (Petrik and Herzwurm 2020a). To cope with such dynamics, the platform provider should seek contact with complementors. Engaged complementors are eager to cooperate with platform providers (Engert et al. 2022, 2023). In particular, knowledge exchange and platform co-development are two recognized manifestations of cooperative engagement among complementors. Complementors evidently monitor platform performance and are willing to inform platform providers about perceived quality issues (e.g., bugs or glitches) that negatively impact their development efforts for complementary solutions. Complaints, considered as post-use behavior (Oliver 1980), encompass both perceived quality assessments and satisfaction with the BRs (Petrik and Herzwurm 2020a, 2020b). Therefore, by capturing such feedback from complementors, platform providers can reduce BR design effort (Engert et al. 2022) and make more informed decisions in managing complex BR portfolios with interdependencies and numerous trade-offs in the design of BRs (Wareham et al. 2014; Hurni et al. 2022). Feedback loops also enable platform providers to better understand the sentiments of complementors, as they work closely with end customers and customer use cases through co-creation activities (Ceccagnoli et al. 2012; Petrik and Herzwurm 2020a).
In particular, involving lead complementors in feedback loops strengthens knowledge exchange. Analogous to the involvement of lead users in product design (Von Hippel et al. 1986), lead complementors are particularly well-integrated complementors who can actively help platform providers improve the design of BRs (Weiss et al. 2023). Like lead users, lead complementors face needs that will become common in an ecosystem, but they do so long before the bulk of market participants encounter them. Existing research recognizes that lead complementors possess tacit knowledge about efficient software development practices or BR design. Significant improvements can be achieved for both complementors and platform providers if a platform provider captures such knowledge (Sarker et al. 2012; Weiss et al. 2023). Lead complementors can be periodically surveyed about the challenges they face with individual BRs in implementing IIoT applications to ensure continuous monitoring of complementor satisfaction. Such an exchange goes beyond information exchange with online communities (Kauschinger et al. 2021). Therefore, such relationships can be particularly useful in the early stages of the platform lifecycle, as they create long-term opportunities to benefit from complementor knowledge. This is particularly valuable for industrial incumbents, who may lack the capabilities required for managing complementary development, as they are accustomed to pipeline business models (Svahn et al. 2017; Weiss et al. 2020; Marheine et al. 2021).
4.2 Collect Meaningful Feedback from Complementors
DP2:Adapt and use the critical incident technique in feedback loops, combining it with other quantitative and qualitative satisfaction measurement techniques from service quality research to collect meaningful feedback from complementors.
Various techniques for measuring satisfaction exist in the field of service quality and can, in principle, be applied to BRs, since BRs are offered as a service to complementors together with a platform. Although existing platform literature related to complementor satisfaction (Engert et al. 2022, 2023) provides little practical guidance for measuring satisfaction, the Critical Incident Technique (CIT) (Flanagan 1954) has proven effective in our research (Petrik and Herzwurm 2020a). The CIT is a robust qualitative technique that helps measure satisfaction based on memorable incidents that complementors recall from their BR use ex-post (e.g., after using specific APIs) to understand the current state of complementors’ satisfaction with the quality of the BR. To achieve this, critical incidents (CIs) contain satisfaction-critical, action-guiding information, outperforming other satisfaction measurement techniques (Gremler 2004; Hayes 2008). Previous studies have shown that even smaller numbers of CIs provide valid insights into the determinants of satisfaction (Flanagan 1954).
CIT can be enriched with other techniques from service quality research. The prerequisite is that the individual items of a satisfaction survey refer to the individual BRs. We recommend combining the SERVIMPERF technique, which assesses satisfaction and importance separately (Töpfer and Mann 2008), with CIT to provide additional quantification (see Sect. 4.5 for a detailed explanation). In addition, open-ended questions can be asked about BRs that complementors find lacking. Although BRs can usually be discovered by reviewing competitor platforms or developer portals (Petrik and Herzwurm 2019), complementor feedback provides valuable context regarding the purpose of BRs as well as experiences with these BRs.
Table 1 illustrates how meaningful complementor feedback can be captured using CIT, SERVIMPERF, and open-ended questions. As shown, the satisfaction survey items refer to individual BRs. Using APIs as an example, the table includes illustrative incidents reported by different complementors (see Appendix B in the online supplement for the list of complementors), as well as SERVIMPERF ratings for BRs and requests for new BRs. The quotes indicate that CIs provide effective statements about perceived quality problems. Complaints about critical quality problems help platform providers detect quality issues and identify the most urgent areas for action (Hayes 2008; Oderkerken-Schröder et al. 2000). In the next phase, we describe how this can be achieved through a systematic analysis of CIs (Table 1).
Table 1
A complementor satisfaction survey based on CIT, SERVIMPERF, and open questions

CIT: Can you remember a positive and/or negative event/incident related to the use of the APIs?

Illustrative excerpt from a stated critical incident | Open coding | Satisfaction item
“…Slow ratelimits when uploading. It takes, for instance, over an hour to upload 130 Megabyte. So it [API for handling timeseries data] is hardly optimized for timeseries” – #9 (Software developer from a software company) | Negative incident and experienced issues with upload of timeseries data | Time and performance
“…Complicated creation of assets and types. It must be identical for all devices and applications. Therefore, it is difficult and bears a great potential for error.” – #11 (Team lead from an automation (PLC) supplying company) | Negative incident experienced due to manual and complicated configuration, which is error-prone due to missing functionalities | User error protection, functional completeness, and accessibility

SERVIMPERF:
How important are APIs for the implementation of your IIoT project? – #9 (1 | 2 | 3 | 4 | 5 | N/A)
How satisfied are you with this resource? – #9 (1 | 2 | 3 | 4 | 5 | N/A)
How important are APIs for the implementation of your IIoT project? – #11 (1 | 2 | 3 | 4 | 5 | N/A)
How satisfied are you with this resource? – #11 (1 | 2 | 3 | 4 | 5 | N/A)

Open question: Which resources, in your opinion, are currently lacking but could help you with your application?

Illustrative statement from an open question | Open coding | Boundary resource
“Clear definition of what MindSphere actually is – without buzzwords like ‘IoT OS’…” – #11 (Team lead from an automation (PLC) supplying company; partnership length: 3 months) | Clear definition of the IIoT platform with its capabilities, because an automation supplier is unsure what he can do with the platform | –
4.3 Analyze the Collected Feedback from Complementors
DP3:Derive quality dimensions from critical incidents, organize them in quality models, and assess their current impact on complementor satisfaction in order to analyze the collected feedback from complementors in a systematic, evidence-based manner.
Feedback analysis requires two steps for each BR, which are explained below. The examples refer to Fig. 4. It illustrates the coding process for deriving QDs from CIs, organizing them into quality models, and assessing their current impact on complementor satisfaction. First, the data from the feedback loops with different complementors needs to be coded to determine whether the reported incidents are critical (i.e., to discard descriptive statements, generic statements, or personal opinions). During this step, insightful CIs related to multiple quality determinants may also be identified. In addition, the sentiments, i.e., whether the incident was positive or negative, must be determined. For example, complementor #5 reported the CI “… Last transmitted values of variables of an edge device can only be queried via a workaround. Feature Request is open …”, which was coded as “Workaround required” with a negative sentiment, because the complementor expects a different behavior of the BR and is still waiting for a solution.
Fig. 4
Illustration of the coding process used to create quality models for each BR
Second, axial coding needs to be applied to consolidate and group similar CIs into categories, each requiring at least one CI. Because these groups aggregate related CIs, they can be considered quality dimensions (QDs). The categories represent sub-attributes of BRs related to specific characteristics or functionalities (Oliver 1980; Hayes 2008). Since QDs are derived from the CIs and represent characteristics perceived by complementors, their degree of fulfillment (i.e., perceived quality) is judged against the complementor’s expectations. Consistent with the definition of satisfaction (Oliver 1980; Hill and Alexander 2000), QDs determine the perceived quality of each BR. Existing studies show that complementors perceive many QDs, especially for commonly used BRs (Petrik and Herzwurm 2020a). Accordingly, complementor feedback can help identify a significant source of sub-attributes that are critical to satisfaction. Standards, such as the ISO 25010 for system and software quality (ISO/IEC 2011), can help platform providers label and define QDs for technical BRs. For example, the three CIs from complementors #5, #13, and #6 in Fig. 4 were grouped into the category “Functional completeness”, following ISO 25010.
The two steps above should be performed for each BR because BRs in the IIoT are heterogeneous and not comparable. They aim to support different aspects of platform-based innovation (Petrik and Herzwurm 2019). As a result, each BR requires its own quality model, which allows a differentiated analysis of the QDs for each individual BR. Such quality models can have a hierarchical structure, which is in line with most software quality models (Hayes 2008; Miguel et al. 2015). In addition to ISO 25010, many other standards and guidelines can be used to inductively derive QDs and form quality models for BRs, such as guidelines for organizing corporate hackathons and understanding participants’ expectations (Nolte et al. 2018). If the standards or guidelines do not align well with the CIs, the QDs can be labeled inductively based on logical reasoning by comparing the purpose of each BR and the reported CI. For example, feedback on the events organized by Siemens was collected from various complementors. Positive CIs mentioned the added value of advanced information announced at the events and learning about the technical innovations of the platform. Following this inductive labeling, we conclude that the platform provider aims to share exclusive information with complementors at events. Therefore, we grouped the corresponding CIs into the QD “Exclusive information”.
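The two analysis steps (open coding of CIs with sentiments, then axial coding into QDs) can be sketched as a small data pipeline. This is a minimal illustration, not the authors' tooling: the record structure, the abbreviated incident codes, and the code-to-QD mapping are reconstructions from the examples above, and the sentiments of complementors #13 and #6 are assumed for illustration.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class CriticalIncident:
    complementor: str        # anonymized ID, e.g. "#5"
    boundary_resource: str   # the BR the incident refers to
    sentiment: str           # "positive" or "negative"
    code: str                # open code assigned in step 1

# Step 1: open coding -- keep only genuinely critical incidents and
# record their sentiment (illustrative, abbreviated records)
incidents = [
    CriticalIncident("#5", "APIs", "negative", "Workaround required"),
    CriticalIncident("#13", "APIs", "negative", "Missing query feature"),
    CriticalIncident("#6", "APIs", "negative", "Incomplete asset model"),
]

# Step 2: axial coding -- consolidate related open codes into quality
# dimensions (QDs), labeled against ISO 25010 where it fits
QD_MAPPING = {
    "Workaround required": "Functional completeness",
    "Missing query feature": "Functional completeness",
    "Incomplete asset model": "Functional completeness",
}

quality_model = defaultdict(list)  # one quality model per BR
for ci in incidents:
    qd = QD_MAPPING.get(ci.code, ci.code)  # fall back to an inductive label
    quality_model[(ci.boundary_resource, qd)].append(ci)

# The three CIs from complementors #5, #13, and #6 end up in one QD
assert len(quality_model[("APIs", "Functional completeness")]) == 3
```

In practice, the mapping from open codes to QDs is the product of manual axial coding; the dictionary above only fixes its result for the example.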
4.4 Derive Quality Improvement Measures
DP4:Apply a prioritization approach to focus primarily on the measures that have the greatest impact on satisfaction to derive improvement measures from the quality models for each BR.
To focus on the quality improvement measures that have the greatest impact on satisfaction, we propose the prioritization approach illustrated in Fig. 5. Presented as a decision tree, it is applied to each QD of a BR. The goal is to reduce catalogs of QDs, which can be extensive, to hotspots and to break down longlists of measures into manageable shortlists. This helps to identify and eliminate the most significant sources of dissatisfaction while tracking the possible decline of the satisfactory ones. In the following, we explain the prioritization approach using the empirical dataset from a satisfaction survey of MindSphere ecosystem complementors. The survey used CIT and yielded 328 sentiment reports (Petrik and Herzwurm 2020a). This feedback covered the 15 BR types designed by Siemens, and their analysis resulted in a total of 379 CIs, as some reports included more than one CI.
Fig. 5
Prioritization approach to focus BR (re)design measures
The prioritization is based on the statistical analysis of the CIs grouped into QDs, which guides the prioritization of improvement measures based on their impact on satisfaction (Johnston 1995). The first step in prioritization is to quantify the qualitative feedback and sentiments associated with each QD. This involves a systematic cataloging of all the BRs offered by a platform provider and their QDs, followed by ranking them by the frequency of the associated CIs and their sentiments – i.e., positive or negative. In the example of the QD “Functional completeness” in Fig. 5, according to the analyzed feedback from complementors, 7 positive and 14 negative critical incidents occurred.
The different sentiments about the CIs related to a specific QD indicate that a QD can cause satisfaction, dissatisfaction, or both, depending on its fulfillment. Accordingly, the QDs derived for each BR are then classified as “Satisfier”, “Dissatisfier” or “Critical”. A QD is classified as “Satisfier” if only positive CIs have been reported. If only negative CIs have been reported, the QD is classified as “Dissatisfier”. A QD is classified as “Critical” when both positive and negative CIs have occurred. This tripartite classification takes into account the modern understanding of non-linearity in satisfaction (Johnston 1995; Johnston and Heineke 1998). In the example of the QD “Functional completeness”, “Critical” applies.
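The tripartite classification can be expressed as a small helper function (a sketch; the function name and the fallback for QDs without CIs are ours):

```python
def classify_qd(positive: int, negative: int) -> str:
    """Classify a quality dimension by the sentiments of its critical incidents."""
    if positive > 0 and negative > 0:
        return "Critical"      # mixed sentiments: can both satisfy and dissatisfy
    if positive > 0:
        return "Satisfier"     # only positive CIs reported
    if negative > 0:
        return "Dissatisfier"  # only negative CIs reported
    return "Unclassified"      # no CIs yet (case not covered in the text)

# "Functional completeness" with 7 positive and 14 negative CIs is "Critical"
assert classify_qd(7, 14) == "Critical"
assert classify_qd(1, 0) == "Satisfier"
assert classify_qd(0, 3) == "Dissatisfier"
```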
The prioritization approach proposed here combines the classification just described with the significance of each QD to complementor satisfaction (Johnston 1995). The latter is determined in three steps. First, the number of CIs related to a QD must be divided by the total number of CIs collected, resulting in relative weightings for each QD:
Let \(C{I}_{i+}\) represent the positive critical incidents related to quality dimension i
Let \(C{I}_{i-}\) represent the negative critical incidents related to quality dimension i
\(C{I}_{i}=C{I}_{i+}+ C{I}_{i-}\) represents the sum of all critical incidents for a quality dimension i
\({CI}_{total}={\sum }_{j}{CI}_{j}\) represents the overall sum of all collected critical incidents
Then the resulting \({W}_{i}=\frac{{CI}_{i}}{{CI}_{total}}\) gives the relative weighting for a quality dimension i
Second, we suggest calculating the arithmetic mean of the number of CIs per QD and dividing it by the total sum of all CIs collected to obtain a threshold value that serves as a benchmark for assessing the significance of each QD:
Let \(\overline{CI}\) represent the arithmetic mean of the number of critical incidents per QD
The resulting \(C{I}_{avg}= \frac{\overline{CI}}{{CI}_{total}}\) gives the threshold value
Third, the relative weighting is compared with this threshold value:
If \({W}_{i}>C{I}_{avg}\), the quality dimension i is considered significant for complementor satisfaction
If \({W}_{i}<C{I}_{avg}\), the quality dimension i is considered non-significant for complementor satisfaction
According to the empirical dataset from Petrik and Herzwurm (2020a), each QD comprises an average of 3.29 CIs. Divided by the total of 379 CIs, this calculation yields a threshold of 0.0087 (approximately 0.01 or 1%). Any QD with a calculated relative weighting above 1% is considered significant to complementor satisfaction, while those below this threshold are deemed insignificant. Given the large number of QDs derived from active complementor ecosystems, such low significance values are common (Hayes 2008). Accordingly, the more CIs are associated with a particular QD that is deemed significant, the greater its impact on satisfaction. As shown in Fig. 5 and Table 2, “Functional completeness” is classified as critical and also significant with a relative weighting of 0.055 that is above the threshold of 0.01. In addition, Table 2 shows an excerpt of other cataloged QDs for APIs, ranked by their calculated significance and current satisfaction impact. “Functional completeness” is at the top of the ranked catalog (a complete catalog of QDs is available in Appendix E in the online supplement, and a complete dataset with the CIs collected and assigned to QDs is available in the online supplement) (Table 2).
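The weighting and threshold calculation can be reproduced with the figures reported for the MindSphere dataset (379 CIs in total, an average of 3.29 CIs per QD); the function names are ours:

```python
CI_TOTAL = 379          # all critical incidents collected
MEAN_CIS_PER_QD = 3.29  # arithmetic mean of CIs per quality dimension

# Threshold CI_avg = mean CIs per QD / total CIs (~0.0087, roughly 1%)
threshold = MEAN_CIS_PER_QD / CI_TOTAL

def relative_weighting(positive: int, negative: int) -> float:
    """W_i = CI_i / CI_total, with CI_i = CI_i+ + CI_i-."""
    return (positive + negative) / CI_TOTAL

def is_significant(w_i: float) -> bool:
    return w_i > threshold

# "Functional completeness" (7 positive, 14 negative CIs) is significant
w_fc = relative_weighting(7, 14)
assert round(threshold, 4) == 0.0087
assert round(w_fc, 3) == 0.055 and is_significant(w_fc)

# "User error protection" (0 positive, 3 negative CIs) falls below the threshold
assert not is_significant(relative_weighting(0, 3))
```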
Table 2
Cataloged and sorted exemplary quality dimensions of APIs

Boundary resource | Quality dimension (# of related critical incidents) | Positive incidents | Negative incidents | Sum of incidents | Current satisfaction impact | Relative weighting | Significance
APIs | Functional completeness | 7 | 14 | 21 | Critical | 0.055 | Yes
APIs | API Documentation | 5 | 12 | 17 | Critical | 0.045 | Yes
… | … | … | … | … | … | … | …
APIs | User error protection | 0 | 3 | 3 | Dissatisfier | 0.007 | No
… | … | … | … | … | … | … | …
APIs | Support | 1 | 0 | 1 | Satisfier | 0.002 | No
The classification of QDs as “Satisfier”, “Dissatisfier”, or “Critical”, combined with their significance (yes/no), guides the prioritization and definition of quality improvement measures:
Significant “Criticals” and significant “Dissatisfiers” require immediate quality improvement measures with high prioritization. In the example of the QD “Functional completeness” (“Critical”), the quality problem is addressed by the measure “Improve data processing engine for time series” (see Fig. 5).
Significant “Satisfiers” received only positive feedback and, therefore, do not require action, but should still be monitored to preemptively detect any decline in quality.
Non-significant QDs, regardless of whether “Satisfier”, “Dissatisfier”, or “Critical”, can be relegated in priority but should be monitored. However, it is also advisable to assess whether unreported quality issues might be affecting complementors. To do this, it is recommended to monitor these dimensions and consult lead complementors about potential problems within the QDs of these BRs (Weiss et al. 2023).
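The decision tree can be condensed into a small dispatch function (a sketch of the prioritization logic above; the function name and the action strings are ours):

```python
def prioritize(classification: str, significant: bool) -> str:
    """Map a QD's classification and significance to a recommended action."""
    if significant and classification in ("Critical", "Dissatisfier"):
        return "define improvement measure (high priority)"
    if significant and classification == "Satisfier":
        return "no action; monitor for quality decline"
    # non-significant QDs, regardless of classification
    return "deprioritize; monitor and consult lead complementors"

# "Functional completeness": significant "Critical" -> immediate measure
assert prioritize("Critical", True) == "define improvement measure (high priority)"
assert prioritize("Satisfier", True) == "no action; monitor for quality decline"
assert prioritize("Dissatisfier", False).startswith("deprioritize")
```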
4.5 Increase the Impact of Quality Improvement Measures
DP5:Complete the assessment of quality dimensions with SERVIMPERF and distinguish between different types of complementors to increase the impact of quality improvement measures on overall complementor satisfaction.
For many BRs, the prioritization described above may not always be clear. For example, different BRs may receive similar amounts of feedback and have identical calculated values for significance and dissatisfaction – as is the case with “Standardization” and “Availability” in the satisfaction survey of MindSphere ecosystem complementors by Petrik and Herzwurm (2020a) (see Table 3). In such cases, complementor feedback captured using SERVIMPERF (see Table 1 in Sect. 4.2) is incorporated into the prioritization, providing additional metrics. SERVIMPERF uses two distinct components to measure quality perceptions: satisfaction and importance. The separate collection of data on these two components for each BR allows an analysis of the relationship between the complementor’s perception of BR quality and the importance of the BR for their use cases. To refine the prioritization of improvement measures, the sums of the values for the two components, collected using five-point ordinal scales in Table 1, can be subtracted (known as delta analysis). For better visualization, satisfaction and importance can be plotted on a coordinate plane. This allows an Importance-Performance Analysis of the BRs (Töpfer and Mann 2008). The difference between absolute importance and satisfaction in the MindSphere ecosystem survey was 430 for APIs and 150 for the Cloud Foundry container management service. A larger gap between importance and satisfaction suggests that quality measures for APIs should be prioritized. Therefore, as Table 3 shows, the standardization of APIs is prioritized over the issues with Cloud Foundry availability, even though both QDs are significant “Criticals” with the same relative weighting of 0.011. This analysis helps to increase the impact of quality improvement measures by prioritizing those BRs that are qualitatively underperforming despite their high importance to complementors (Table 3).
Table 3
Exemplary ambiguity between two QDs of different BRs

Importance rank | SERVIMPERF delta | Boundary resource | Quality dimension (# of related critical incidents) | Positive incidents | Negative incidents | Sum | Current satisfaction impact | Relative weighting | Significance
#1 | 430 | APIs | Standardization | 3 | 1 | 4 | Critical | 0.011 | Yes
… | … | … | … | … | … | … | … | … | …
#5 | 150 | Cloud Foundry | Availability | 3 | 1 | 4 | Critical | 0.011 | Yes
… | … | … | … | … | … | … | … | … | …
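The delta analysis can be sketched as follows. The per-BR importance and satisfaction sums are hypothetical values chosen only to reproduce the deltas reported in the text (430 for APIs, 150 for Cloud Foundry); how the survey aggregates individual five-point ratings into these sums is our simplification.

```python
# Delta analysis: subtract the summed satisfaction ratings from the summed
# importance ratings per BR; a larger gap signals higher (re)design priority.
def servimperf_delta(importance_sum: int, satisfaction_sum: int) -> int:
    return importance_sum - satisfaction_sum

deltas = {
    "APIs": servimperf_delta(980, 550),           # hypothetical sums, delta 430
    "Cloud Foundry": servimperf_delta(600, 450),  # hypothetical sums, delta 150
}

# Tie-break between equally weighted QDs: the larger delta comes first
ranked = sorted(deltas, key=deltas.get, reverse=True)
assert ranked[0] == "APIs"
assert deltas["APIs"] == 430 and deltas["Cloud Foundry"] == 150
```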
Furthermore, the feedback collected on specific BRs may vary significantly between different complementors. Complementors differ in their perception and valuation of BRs based on their experience, knowledge, or goals (Zapadka et al. 2022; Weiss et al. 2020; Petrik and Herzwurm 2020c). To illustrate this, Table 4 provides quotes that shed light on how two different complementor types perceive the partner events organized by Siemens and the different QDs expressed by these two exemplary complementor types. Statements [#4, #5] in Table 4 indicate the existence of dissatisfied automation providers because their platform usage goals – native integration of the platform connectors into their own hardware – were not taken into account, and the focus was exclusively on app development. Conversely, there are software companies for whom the technical depth of the events was insufficient [#3]; they were simply used to a different level of information from platform natives. In addition, inexperienced complementors appreciated the additional support opportunities offered at the events [#2], and software start-ups without a reputation perceived the opportunity to acquire their first customers with the help of the event very positively [#1]. We conclude that different types of complementors expect BRs to fulfill different goals, and that experience or reputation in the domain may influence these goals. For the platform provider, this results in conflicting objectives that need to be taken into account when defining measures for the (re)design of BRs (Table 4).
Table 4
The differences in the satisfaction measurement between different complementor types

Boundary resource | Quality dimension | Complementor type | Exemplary critical incident | Sentiment | Code
Events | Networking and matchmaking | Software developing start-up | #1: “We won an Open Space together with [anonymized] and presented at the Hannover industrial fair … That was community.” | Positive | First business opportunity
Events | Support | Data analytics provider | #2: “Support by an experienced MindSphere Developer is indispensable to get results in a short time. It shortened our time.” | Positive | Possibility for individual complementor care
Events | Availability of relevant information | Software developing company | #3: “At the MindSphere Meet-Ups half of the people were from Siemens itself and it was extremely superficial what was talked about. So it wasn’t like Google or Apple where you have deep dive techtalks going on.” | Negative | Insufficient technical depth of information
Events | Availability of relevant information | Automation provider | #4: “For us it is not very relevant because we do not want to implement our own MindSphere solutions in the near future.” | Negative | Non-consideration of automation use cases
Events | Availability of relevant information | Automation provider | #5: “Also the Developer Meet-Ups are of little use to us because it's not about connectivity, but about MindSphere solutions.” | Negative | Non-consideration of automation use cases
Events | – | Software developing company | #6: “Partly you also learned about technical innovations but mainly networking.” | Positive | Exclusive information
Differentiating satisfaction measurement across different complementor types helps to better assess complementor-related conflicts in BR design (Hurni et al. 2021). Complementor types can be distinguished based on their role in the ecosystem, such as automation providers or software companies, as well as complementors’ competencies, integration level, or partner status. Like lead complementors (Weiss et al. 2023), multihoming complementors can provide more in-depth feedback than inexperienced complementors. Given that complementor roles and their BR needs may change over time (Heimburg and Wiesche 2022; Petrik and Herzwurm 2020a), additional consideration of complementor types helps track complementors’ changing needs. This helps platform providers make complementors feel that the community is being listened to, influence them positively through efficient communication, and eventually even resolve conflicts between complementors (Sarker et al. 2012; Kauschinger et al. 2021; Weiss et al. 2023).
4.6 (Re)design Boundary Resources
DP6: Offer a broad BR portfolio to compete in the IIoT platform market, due to the architectural complexity of IIoT solutions and the changing satisfaction potential of BRs and complementor needs.
When operating an open platform (Karhu et al. 2018), it is essential for the platform provider to manage BRs as a portfolio, as BRs undergo a lifecycle after the initial design (Wulfert 2023). BR portfolio thinking rests on the understanding that BRs are not static but evolve during the platform lifecycle (Wulfert 2023). Two triggers initiate this evolution:
1.
During the platform life cycle, various quality issues arise with BRs (Petrik and Herzwurm 2020a, 2020b), and the importance or satisfaction impact of BRs on complementors for IIoT application development may decrease.
2.
The need for new BRs may arise, as complementor roles are unstable and complementor needs change over time (Heimburg and Wiesche 2022).
The first trigger is the main focus of the method presented so far. However, in order not to overlook the potential needs of complementors for new BRs, the complementor satisfaction survey from Phase 2 (see Table 1 in Sect. 4.2) also includes an open-ended question about BRs that complementors consider to be currently missing but helpful. The answers to this question can also be used by platform providers to design new BRs and to differentiate themselves from competitors. For example, there are different value-creation strategies among IIoT platform providers (Mosch et al. 2023). Accordingly, their BR portfolios differ from each other (cf. Petrik and Herzwurm 2019 for Siemens MindSphere; Petrik et al. 2021 for AWS IoT). To expand their BR portfolios, platform providers can use the set of BR types identified by Petrik and Herzwurm (2019) as an orientation and the model developed by Dal Bianco and colleagues (2014) to understand that BRs should be offered as a portfolio that supports complementor needs. Following Dal Bianco et al. (2014), it is recommended to conceptualize different classes of BRs, such as application BRs, development BRs, and social BRs, rather than focusing on a single BR. These classes can be extended to include different manifestations of BRs, providing an extensible portfolio structure.
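The extensible portfolio structure described above can be sketched as a simple data model. This is a minimal illustration only: the class names follow Dal Bianco et al. (2014), while the concrete BR examples are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class BRClass(Enum):
    """BR classes following Dal Bianco et al. (2014)."""
    APPLICATION = "application"   # e.g., APIs, SDKs
    DEVELOPMENT = "development"   # e.g., documentation, development tools
    SOCIAL = "social"             # e.g., meet-ups, developer forums

@dataclass
class BoundaryResource:
    name: str
    br_class: BRClass

@dataclass
class BRPortfolio:
    """Portfolio of BRs grouped by class; new manifestations can be added over time."""
    resources: list = field(default_factory=list)

    def add(self, br: BoundaryResource) -> None:
        self.resources.append(br)

    def by_class(self, br_class: BRClass) -> list:
        return [br for br in self.resources if br.br_class == br_class]

# Illustrative portfolio (BR names are hypothetical examples)
portfolio = BRPortfolio()
portfolio.add(BoundaryResource("REST API", BRClass.APPLICATION))
portfolio.add(BoundaryResource("SDK documentation", BRClass.DEVELOPMENT))
portfolio.add(BoundaryResource("Developer meet-up", BRClass.SOCIAL))
```

The point of the structure is that a portfolio is queried and extended per class rather than managed as a single monolithic BR.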
5 Demonstration: Prototype of a Web-based Application for the Method
We have instantiated the method from Sect. 4 as a prototype to demonstrate technical feasibility and to outline how the method can be integrated into a platform provider’s natural context of platform governance supported by enterprise software. We developed two instances. The initial prototype was based on the extended design echelon; as we refined the method (refined design), we also revised the prototype.2 Instead of combining multiple charts and metrics into a single dashboard, the revised prototype distributes them across multiple dashboards for clarity. The landing page links to three functional areas: (1) an online questionnaire to collect feedback from complementors; (2) dashboards for platform providers; (3) a dynamic report for platform providers. Figures 6, 7, and 8 illustrate the three functional areas, linking the instantiation to the DPs from Sect. 4.
Fig. 6
Functional area 1 of the prototype: online questionnaire to collect feedback from complementors
The first functional area is an online questionnaire which complementors can openly access to report quality issues. Alternatively, platform managers can distribute the questionnaire to complementors on a regular basis. Both scenarios help to establish and maintain feedback loops with complementors (DP1). The implemented questionnaire allows complementors to select specific BRs from a representative set of 15 BR types (DP6) and provide feedback in the form of positive or negative critical incidents (DP2). In addition, quantitative ratings of BRs are provided using five-point ordinal scales for importance and satisfaction (DP2). Finally, the questionnaire provides the opportunity to request additional BRs (DP6). Since different types of complementors in the IIoT may value attributes of BRs differently, potentially leading to conflicts, as described in Sect. 4.5, the questionnaire also collects mandatory information about complementors (DP5). The form can be flexibly adapted to include new BR designs, synchronizing feedback loops with the expansion of the BR portfolio (DP1, DP6).
The online questionnaire is connected to a database management system (DBMS) that stores and processes the feedback data. The DBMS is the backend for functional areas 2) and 3), which are intended for internal use within the platform-providing organization. The prototype allows for customizable workflows to control information, visualize data, and automatically calculate metrics, which is similar to the functionality of modern computer-aided quality (CAQ) systems. In addition to the ability to view feedback through dashboards, notification workflows send dashboard snapshots to departments involved in the quality management of BRs.
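The feedback record that the questionnaire persists in the DBMS can be sketched as follows. The field names and the storage choice (an in-memory SQLite table) are assumptions for illustration; the paper does not specify the prototype's actual schema or DBMS.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class CriticalIncident:
    """One questionnaire submission (field names are hypothetical)."""
    company: str
    complementor_type: str   # mandatory complementor information (DP5)
    br: str                  # affected BR from the set of 15 BR types (DP6)
    description: str         # positive or negative critical incident (DP2)
    sentiment: str           # "positive" | "negative"
    importance: int          # five-point ordinal scale (DP2)
    satisfaction: int        # five-point ordinal scale (DP2)

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE feedback (
    company TEXT, complementor_type TEXT, br TEXT, description TEXT,
    sentiment TEXT, importance INTEGER, satisfaction INTEGER)""")

# Illustrative submission; company and BR names are invented
ci = CriticalIncident("ACME GmbH", "Software developing company", "Developer meet-up",
                      "Talks lacked technical depth.", "negative", 4, 2)
conn.execute("INSERT INTO feedback VALUES (?,?,?,?,?,?,?)",
             (ci.company, ci.complementor_type, ci.br, ci.description,
              ci.sentiment, ci.importance, ci.satisfaction))
rows = conn.execute("SELECT br, sentiment FROM feedback").fetchall()
```

Dashboards and notification workflows would then read from this table, which is why both qualitative (CI text) and quantitative (ordinal ratings) fields sit in one record.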
The second functional area is a set of seven dashboards that visualize feedback data in near real-time. Dashboard 1 contains four visual elements: (1) a stacked bar chart with the number of positive and negative sentiment reports per BR; (2) a stacked bar chart with the number of positive and negative CIs per BR; (3) a short list of the most recently collected CIs, including company name, complementor type, affected BR, incident description, importance, satisfaction; (4) a doughnut chart showing which complementors have recently experienced and reported problems. This functionality aims to analyze the collected feedback in a systematic way (DP3).
Dashboards 2 and 3 help to build the quality models for each BR, integrating different QDs derived and enriched by CIs (DP3). Dashboard 2 displays the coded QDs as lists. Heatmaps in the lists show the amount of feedback per QD. They help to determine the relative importance and, thus, the criticality of the QDs. Two lists are implemented. The first list contains those QDs that are rated as significant to complementor satisfaction (as described in Sect. 4.4). This list shows only “Criticals” and “Dissatisfiers”, sorted by the number of “Dissatisfiers” to help platform managers pin down and reflect on the areas for action. The second list shows only “Dissatisfiers” (regardless of current significance), enabling comparison of different influences on satisfaction and facilitating targeted action. Dashboard 3 displays tables containing all CIs and the QDs derived from them for two sample BRs. Various other tables can be implemented here to analyze the content of the CIs at the level of individual BRs, code them, and assign them to QDs. This helps platform managers to derive concrete improvement measures from the quality models (DP4).
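The two QD lists on Dashboard 2 can be approximated in a few lines. The actual significance calculation is defined in Sect. 4.4 and is not reproduced here; as a simplification, this sketch treats a QD as a "Dissatisfier" when its negative CIs outnumber its positive ones, and as significant above a hypothetical feedback threshold. All coded data is illustrative.

```python
from collections import Counter

# (QD, sentiment) pairs coded from CIs (illustrative data)
coded_cis = [
    ("Insufficient technical depth of information", "negative"),
    ("Insufficient technical depth of information", "negative"),
    ("Non-consideration of automation use cases", "negative"),
    ("Availability of relevant information", "positive"),
]

neg = Counter(qd for qd, s in coded_cis if s == "negative")
pos = Counter(qd for qd, s in coded_cis if s == "positive")

# Simplified rule (assumption): "Dissatisfier" if negative feedback dominates
dissatisfiers = {qd: n for qd, n in neg.items() if n > pos.get(qd, 0)}

SIGNIFICANCE_THRESHOLD = 2  # hypothetical cut-off standing in for Sect. 4.4

# List 1: significant QDs only, sorted by the amount of negative feedback
list1 = sorted((qd for qd, n in dissatisfiers.items() if n >= SIGNIFICANCE_THRESHOLD),
               key=lambda qd: -dissatisfiers[qd])

# List 2: all "Dissatisfiers" regardless of current significance
list2 = sorted(dissatisfiers, key=lambda qd: -dissatisfiers[qd])
```

The heatmap coloring on the dashboard would simply map these per-QD counts to a color scale, so the counting logic above is the core of the visualization.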
Dashboard 4 visualizes four sample quality models as heat maps, using different box sizes to indicate the varying impact of each QD on satisfaction. Additional quantitative feedback from SERVIMPERF is converted into four distinct metrics in Dashboard 5 to increase the impact of quality improvement measures on overall complementor satisfaction (DP5). Two bar graphs show the collected relative importance scores and average satisfaction scores for each BR, and a ranking is generated based on the delta analysis (see Sect. 4.5). Finally, a coordinate plane (relative importance and average satisfaction; low, high) helps to identify the BRs that are most important to complementors yet perform poorly.
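The delta ranking and the importance–satisfaction plane of Dashboard 5 can be sketched as follows. The scores, BR names, and the midpoint cut-off are illustrative assumptions; the exact delta analysis is specified in Sect. 4.5.

```python
# Average importance and satisfaction per BR on five-point scales (illustrative data)
scores = {
    "API documentation": {"importance": 4.6, "satisfaction": 2.8},
    "Developer meet-up": {"importance": 3.1, "satisfaction": 3.9},
    "SDK":               {"importance": 4.2, "satisfaction": 4.0},
}

# Delta analysis: rank BRs by the gap between satisfaction and importance
# (most negative delta = most urgent)
ranking = sorted(scores, key=lambda br: scores[br]["satisfaction"] - scores[br]["importance"])

MIDPOINT = 3.0  # hypothetical cut-off separating "low" from "high"

def quadrant(br: str) -> str:
    """Position of a BR on the (importance, satisfaction) coordinate plane."""
    imp = "high" if scores[br]["importance"] >= MIDPOINT else "low"
    sat = "high" if scores[br]["satisfaction"] >= MIDPOINT else "low"
    return f"importance={imp}, satisfaction={sat}"

# BRs that are highly important to complementors yet perform poorly deserve priority
priority = [br for br in ranking
            if scores[br]["importance"] >= MIDPOINT and scores[br]["satisfaction"] < MIDPOINT]
```

Under these sample scores, the hypothetical "API documentation" BR lands in the high-importance/low-satisfaction quadrant and thus tops the action list.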
Dashboard 6 summarizes both positive and negative CIs for specific QDs of individual BRs in a tabular format, allowing a comparison of feedback from different types of complementors (DP5). This helps to increase the impact of quality measures, e.g., by focusing on the most frustrated complementors, or by addressing the broader ecosystem rather than the interests of a few individual complementors. In this way, conflicts of interest between complementor types are minimized. In addition, Dashboard 7 summarizes requests for new BRs and provides complementor type information to better understand the context of the request (DP6).
The third functional area generates a dynamic report that provides a more detailed view of the most urgent areas for action for the entire BR portfolio. It is a tabular overview of the QDs with the most frequently reported quality issues. Compared to the dashboards of the second functional area, these are already prioritized and summarized using calculations from Sect. 4.4 so that only significant “Criticals” and “Dissatisfiers” are included. For prioritization between QDs with the same significance, the BR importance ranking is implemented. In addition, the dynamic report of functional area 3 includes two extra columns that indicate whether quality measures have already been defined for the most urgent quality issues and whether the responsible units already implement the measures. Traffic light colors indicate the status of the definition of quality measures and the processing of the measures.
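A minimal sketch of how the dynamic report's two status columns could be mapped to traffic-light colors. The status fields, QD names, and color rules are assumptions for illustration, not the prototype's actual implementation.

```python
# Status of quality measures per urgent QD (illustrative rows; QD names invented)
report_rows = [
    {"qd": "Insufficient technical depth of information",
     "measure_defined": True,  "measure_in_progress": True},
    {"qd": "Non-consideration of automation use cases",
     "measure_defined": True,  "measure_in_progress": False},
    {"qd": "Missing connectivity examples",
     "measure_defined": False, "measure_in_progress": False},
]

def traffic_light(defined: bool, in_progress: bool) -> str:
    """Green: measure defined and being processed; yellow: defined only; red: neither."""
    if defined and in_progress:
        return "green"
    if defined:
        return "yellow"
    return "red"

for row in report_rows:
    row["status"] = traffic_light(row["measure_defined"], row["measure_in_progress"])
```

This keeps the report actionable: a red row signals an urgent quality issue for which no measure has even been defined yet.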
6 Evaluation
We conducted focus groups with platform providers and complementors to evaluate the artifact. A total of 59 practitioners from 12 platform-providing organizations participated until the stopping condition was reached. The evaluation allowed us to draw conclusions about the method’s relevance to practice, as well as its applicability and boundaries in the real context of IIoT. Thus, the evaluation was mainly aimed at assessing the artifact’s usefulness. The following quotes are illustrative excerpts from the focus groups, with the numbers in brackets referring to the company identifiers in Appendix D of the online supplement.
Relevance. The experts confirmed the practical relevance of the method. They highlighted several aspects that indicate that the method brings benefits and solves relevant problems in the practice of complementor management. First, the experts confirmed that complementors prefer those platforms deemed potent enough to design BRs of satisfactory quality, stating that: “… they [complementors] go where it is technically easiest to implement. That means partners, and that is the most important thing, they also have alternatives … They sell what their developers want to work with and what they are most effective with. This is also their profit margin. With this approach, we have a quick focus on BR to quickly achieve added value.” (#3). Thus, the experts declared that BR design helps to engage complementors with unique capabilities required for creating IIoT complements (Sarker et al. 2012; Pauli et al. 2021). One group of experts recognized their own BRs and commended the informed approach to expanding the resource portfolio, stating: “We were also at the Hanover trade fair with our partners, for example. Considering this experience, I found your evaluation in the 15 resource areas quite insightful, because these are really things I felt connected to and recognized in our company.” (#8).
The experts also recognized the potential of the method to strengthen collaboration with complementors. According to them, the method helps to achieve not only alignment on the BR portfolio based on feedback loops with complementors, but also internal alignment between the “back office” and the “forefront” in the platform organization:
“There are no mechanisms for the APIs themselves to give feedback to developers of a specific API. And with the events, there are feedback mechanisms everywhere, but often events like [event name] are organized by an organization, which then ensures that the feedback is collected within this event channel so that the next event can be set up better. But I don't think you can say that we have an overarching view where we look at how well which things work and where we perhaps need to put a little more in to be able to compare things better. … I think, when I see the whole thing (method), and I had been in partner management for years before, that we partly do things intuitively and not as structured as shown in your presentation.” (#3)
“Once you understand the BR concept, it's also about conveying a feeling that partners share in developing BR. You have to convey a feeling that the keystone and the partners are working together. …” (#11)
“So, of course, there are immensely many approaches out there, but I understand that some [platform competitors] are really only at the beginning with some things. And they exchanged too little with the partners at the beginning. …” (#12)
The experts emphasized that the method helps to get a holistic picture and to make informed decisions about quality measures at the BR level by narrowing down the areas for action in the BR (re)design despite the complexity of a BR portfolio. It seems that despite the technical possibilities of monitoring BRs in the experts’ companies, the data are not systematically used to navigate the quality issues of complex BR portfolios during their lifecycle, which would be desirable:
“…I found this (approach) very exciting … where we should focus and prioritize [the platform development].” (#12)
“Results deliver great insights on what you should do once the people are on the platform onboarded.” (#2)
“The approach gives you an overall picture because we're working a lot on individual construction sites [..] rather intuitively.” (#10)
“… you need to be clear about which resources you want to prioritize in order to then take the right measures to determine where they stand for our partners/customers and how much we need to take care of them.” (#12)
“…features are tried out and then dropped and not used … We also have user-based tracking. That means we also really look at which features are used and how much … and draw conclusions from this, but not so much about which BRs we need to improve …” (#3)
In fact, only one platform provider admitted relying on software support to monitor BRs and integrate community feedback into BR (re)design. However, this particular software consisted of multiple fragmented tools and only allowed monitoring of a developer forum and support tickets. Since our method is supported by an integrated prototype and provides means for active feedback collection, we conclude that the method and its instantiation are an improvement over the state of practice: “There is automated monitoring of all platform services (APIs, Cloud Foundry, Cloud Infrastructure, etc.). There is automated sentiment analysis for the forums, resource usage is tracked, reported tickets are evaluated etc. The functionality is covered by many tools, not a central tool you can buy.” (#1).
Applicability and Boundaries. The experts confirmed the applicability of the method in practice and pointed out important conditions for its effectiveness. In particular, partner managers appreciated the techniques for collecting and processing feedback to prioritize BR quality measures. They also emphasized that, in addition to our prototype implementation, the method could be implemented using standard tools available on the market and widely used in practice:
“We actually have the Qualtrics tool, where you could do surveys directly with the users. And I think your survey itself would be a great template for our tool.” (#3)
“Moreover, I think a partner questionnaire is very exciting for evaluating certain aspects with partners and finding strengths and weaknesses [in our BR portfolio]. It's a pragmatic way.” (#11)
“There is automated monitoring of all platform services (APIs, Cloud Foundry, Cloud Infrastructure, etc.). There is automated sentiment analysis for the forums, resource usage is tracked, reported tickets are evaluated etc.…” (#1)
One group of experts particularly appreciated the ability to distinguish between complementor types when prioritizing quality measures, stating: “As soon as you get to the platform and end up with a multi-sided market as a result, each group thinks differently. … You have to approach it in a very differentiated and strategic way.” (#1).
However, effective use of the method requires organizational changes. These include the creation of cross-functional (i.e., boundary spanning) roles and the implementation of stabilized processes that align the continuous (re)design of BRs with partner management and promote internal knowledge sharing. As a result, implementing the method in a platform organization was seen as relatively challenging due to the effort required for continuous data collection and analysis, as well as the investment in software and BRs. The experts explained:
“You need an organizational change in order for it to really take effect across departments.” (#8)
“The other is the internal feedback loop. I think a big company like SAP has a separate department for each different type of resource and [these departments] are mainly interested in feedback on the same resource. They're not interested in comparing different resources, nor they are working on it, for example, to get the developer portal, training department, developer ecosystem to pull together to see how best to use them together. … we simply want to acquire thousands and tens of thousands of customers relatively quickly on the basis of one ecosystem and one platform, and we need to apply this much more to experience management.” (#3)
“I think the approach is helpful and exciting, but it's difficult to keep it up to date. … how do you keep it [BR portfolio] up to date? It's time-consuming.” (#5)
The experts also shared that the applicability of our method depends on the platform provider’s approach to market penetration. Some platform providers have moved away from a complementor-oriented approach and have shifted their focus from BRs to full coverage of their own vertical applications for the sake of higher profits. In addition, complementors with an industrial (i.e., heavy asset) background value the available support, discount programs, and ease of industrial asset integration more than a satisfactory designed BR portfolio:
“[Our platform company] is currently on the track to sell more vertical solutions.” (#2)
“… you can sell an application much faster and much better than a technology that is used to implement some use case at the end.” (#8)
“You usually sell it by telling that it has Azure support and Azure discount programs or it can give you a good discount if you can bundle it with other asset management tools or if you are running [company name] machines so our software packages inside the platform will work seamlessly with your machines. …. And industrial companies, because they operate with very low margins, they do not tend to have large software developer capacity and so on compared to a new tech company. They are very slow to adopt this new paradigm.” (#2)
In line with Hein et al. (2019), we see a boundary for the effectiveness of the method when the platform provider’s market penetration approach focuses on the demand side and neglects complementors, for example when a platform provider mainly targets machine-operating end customers. Such platform users are typically not interested in BRs for complement development, but rather value seamless platform integration, which is consistent with the observation that industrial end-users tend to form direct strategic alliances with platform providers, bypassing complementors (Hein et al. 2019).
In addition, industrial customers of complementors (i.e., the demand side of the IIoT ecosystem) may have technical dependencies on enterprise software from a particular vendor. For example, if the enterprise software of an industrial end customer is dominated by a single software vendor, such as SAP or PTC, its native connectivity may compensate for the quality shortcomings of other BRs, indicating that IIoT platforms do not necessarily compete at the BR level in certain scenarios:
“… unless you have a major customer on the hook, who says I just have [platform name] as my (internal) standard …” (#9)
“… the reality is that our deals are then still determined by the fact that there's the end customer who already has a [platform name] and who prefers an affiliate [platform name partner]. …” (#3)
7 Discussion
7.1 Theoretical Implications
Using the theoretical lens of satisfaction, its relationship to quality, and integrating the empirical context of the IIoT domain, we provide a method to support continuous BR (re)design. To formalize the design knowledge and make it accessible, we derive DPs for each of the six phases of the method, formulated according to the conceptual scheme of Gregor et al. (2020). Each DP is followed by operational guidelines and an exemplary instantiation (demonstrator). Our theoretical contribution is a nascent design theory (Baskerville et al. 2018) at the intersection of the BR and platform governance research streams, as shown in Table 5. We extend existing knowledge on complex BR portfolios (Dal Bianco et al. 2014) and their management throughout the platform lifecycle (Hein et al. 2019; Wulfert 2023) (Table 5).
Table 5
Components of our nascent design theory for continuous BR (re)design
Component
Description
1. Purpose and scope
The aim is to provide prescriptive guidance to IIoT platform providers on how to align BR (re)design and complementor satisfaction by systematically measuring satisfaction with BRs and, based on the feedback, navigating actions to improve perceived BR quality
2. Constructs
IIoT platform provider, complementors, BRs, BR (re)design, Sourcing, Securing, Perceived Quality of BRs, Complementor Satisfaction, Complementor Engagement, Inherent effect, Effect enabled by the method. See Fig. 1 and Sect. 2, respectively
3. Principles of form and function
The artifact is a systematic method consisting of six phases. To formalize the design knowledge and make it accessible, each phase is backed by a DP, followed by operational guidelines and an exemplary instantiation (demonstrator). See Sects. 4 and 5
4. Artifact mutability
We use previously developed empirical knowledge as an illustrative example to demonstrate how the method works and can be applied. Thus, each phase is determined by a DP and instantiations via empirical data. However, other instantiations are possible for each phase, for each individual BR or complementor type mentioned
5. Testable propositions
To evaluate the artifact, we conducted focus groups with platform providers and complementors. In total, 59 practitioners from 12 organizations participated in the evaluation. Our conclusions from the evaluation lead to testable propositions about the relevance of the method to practice, and its applicability and boundaries in the real context of an IIoT platform provider
6. Justificatory knowledge
The method draws on the Expectation-Confirmation Theory from service satisfaction and service quality research, which posits perceived quality as an important antecedent of satisfaction, and links it to the construct of complementor engagement from platform research
7. Principles of implementation
The method description, as well as the description of the demonstrator and the evaluation, contain some information about the implementation of our method. For example, boundary spanners within the platform organization, as well as a strategic approach to evolving the platform with complementors, are required. However, a detailed description of how to implement the design theory in specific contexts remains to be done
8. Expository instantiation
An example instantiation is provided by the demonstrator in Sect. 5. It is a web-based application that implements a module for capturing complementor feedback on BRs and a module for processing the feedback through dashboards and a dynamic report (see Sect. 5)
The artifact creates a beneficial cycle by measuring complementor satisfaction through the latter’s feedback and (re)designing BRs by improving the perceived quality of BRs to increase complementor satisfaction and, ultimately, their engagement. Therefore, we interpret our method as a novel artifact that operates at a meta-level compared to individual BRs, as opposed to an additional BR. It helps platform providers overcome knowledge boundaries (Foerderer et al. 2019) regarding the design of BRs and their impact on complementor satisfaction and dissatisfaction. Thus, beyond the scope of existing work, our method is unique in that it sets complementor satisfaction as a goal of platform governance and provides guidance on how to achieve this goal. The design knowledge we propose suggests considering complementor satisfaction as an additional dimension of platform ecosystem dynamics. In particular, we highlight the dynamic nature of the satisfaction potential of BRs that emerges when complementor satisfaction with the quality of BRs is regularly measured and processed for platform ecosystem governance. Our work supports and extends the findings of Eaton et al. (2015) by recognizing that BRs require tuning, which should be driven according to their ever-changing impact on complementor satisfaction (Eaton et al. 2015; Engert et al. 2022), which in turn depends on the perceived quality of BRs. Thus, our work highlights the link between BR quality, complementor needs, and satisfaction, which is valuable for understanding how to launch and sustain platform ecosystems. In doing so, we advance knowledge on platform governance towards incorporating the satisfaction perspective (Wareham et al. 2014; Hurni et al. 2021). Against this backdrop, for example, novel platform launch strategies may emerge, as these (Stummer et al. 2018) have so far paid little attention to the satisfaction of ecosystem actors.
With respect to the DSR paradigm, our study demonstrates how the organizational logic of echelons can be used to structure and communicate research projects consisting of multiple sub-studies and intermediate artifacts. Our study can serve as a blueprint for transparent communication of a DSR project that de facto did not follow the linear DSRM process, but started with an uncertain outcome and first accumulated a knowledge base to sufficiently understand the problem and iteratively adjust the design goal. These knowledge accumulation and learning processes are expressed in the macro-level iterations in Fig. 2 in Sect. 2 on research design, which by their nature cannot be planned. They led to multiple sub-studies and intermediate artifacts at multiple levels: Objective, design, demonstration, and evaluation. In particular, this demonstrates the legitimizing effect of echeloned DSR for DSR projects that are subject to natural disorder and cannot be subordinated to the linear DSRM approach. This legitimizing effect is similar to the legitimizing papers on the representation of DSR (Gregor and Hevner 2013) or the memorandum on design-oriented information systems research (Österle et al. 2011).
We also make minor contributions to two other research areas. First, we complement recent findings on the benefits of lead complementor involvement (Weiss et al. 2023) by contributing a tool set (e.g., CIT) to establish new ways of collaborating with lead complementors. Second, through our empirical context of the Siemens MindSphere ecosystem, our work supports research on the complex path from industry incumbents to platform providers (Svahn et al. 2017; Weiss et al. 2020; Marheine et al. 2021).
7.2 Practical Implications
For IIoT platform providers, providing BRs of satisfactory quality is a wicked problem due to the large number and variety of BRs in the IIoT domain, the architectural complexity of IIoT applications, and differing BR quality perceptions among diverse complementor types (Petrik and Herzwurm 2020a; Pauli et al. 2021; Arnold et al. 2022). This necessitates a more individualized relationship between IIoT platform providers and complementors (Marheine et al. 2021; Hurni et al. 2021), which requires a close exchange during the development of complements (Pauli et al. 2021; Stoiber and Schönig 2022). To address this practical problem, the method presented here guides platform providers in the IIoT to continuously (re)design BRs based on their impact on complementor satisfaction. The method consists of concrete steps with feasible, ready-to-use techniques such as CIT and SERVIMPERF. As demonstrated by the web-based application prototype, the method can be seamlessly integrated into the natural context of a platform provider’s software-based platform governance. The positive effects enabled by the designed method are in line with the reasoning of Foerderer et al. (2019) and Weiss et al. (2023), who recognize that platform providers should invest in collaborative exchanges with complementors, especially lead complementors, in addition to investing in platform functionality and the (re)design of BR portfolios.
Based on the regular collection and analysis of complementor feedback, the method helps to understand problems experienced by complementors in specific situations and to condense them into QDs. This provides an opportunity to gain new insights into the optimization of both technical and non-technical BRs, highlighting factors that are important to complementors but may not have been considered by platform providers. It also highlights those BRs that underperform qualitatively despite their high relevance to complementors. Overall, the method guides platform managers in dealing with the multi-criteria decision-making involved in prioritizing the BR quality improvement actions that are most relevant to complementor satisfaction. Thus, the method addresses the problem caused by the lack of systematic approaches to manage quality at the BR level (Schüler and Petrik 2021), extending existing API management practices (Melville and Kohli 2021) toward holistic monitoring of diverse BR portfolios (Dal Bianco et al. 2014). Platform managers, acting as facilitators, can coordinate insights with other departments responsible for the design of specific BRs to address BR quality deficiencies. In this way, platform providers can achieve better alignment between partner management and development, and move from one-time assessments of complementors to ongoing monitoring that can detect early shifts in complementor needs. As a result, the method helps platform providers manage the tensions of continuously evolving BR portfolios, not only preventing complementors from deserting a platform but encouraging them to maximize their engagement.
7.3 Limitations and Future Work
Due to certain design choices, our artifact is subject to some limitations, which opens the way for new research opportunities.
The evaluation provided valuable feedback on the relevance and usefulness of the method in the IIoT domain, for which platform providers it is best suited, and where its limitations lie. However, certain aspects remained open, such as possible variations of the method to suit different types of complementors or BR lifecycle stages. Furthermore, although BRs are used in contexts other than the IIoT (Svahn et al. 2017; Wulfert 2023), we do not claim the applicability of our method beyond the boundaries of the IIoT. Platforms outside the IIoT may differ in the design of BRs and other influential parameters, such as competition with other ecosystem participants, which may affect complementor satisfaction (Engert et al. 2023). Future research could address the above open aspects and also evaluate the applicability of our method beyond the IIoT to increase the method’s generalizability.
Future (action) research could implement the method with its accompanying prototype in the operations of some platform providers in the field. Longitudinal multiple case studies would allow some variables of the method to be manipulated in the field and its performance to be assessed over time. For example, a modification of the method could incorporate and monitor the practices of collaborative BR development between platform providers and lead complementors, while exploring cross-firm BR design routines and their pitfalls (O’Mahony and Karp 2020; Weiss et al. 2023). In line with this, the prototype could serve as a blueprint for the subsequent development of computer-aided quality management for BR. In this way, a single point of truth for all BRs could be established for decision makers in platform companies, advancing structured ecosystem management in practice and research (Jacobides et al. 2018).
Given the implications of signaling theory, the effectiveness of our method may depend on the communication capabilities of platform providers. This is particularly relevant in the IIoT context, where information asymmetries exist and complementors are reluctant to provide feedback to platform providers (Chase and Murtha 2019; Pauli et al. 2021). Establishing regular feedback loops and linking them to BR (re)design could be connected to the capability view of the firm, potentially influencing the success of the artifact. Future platform research could apply the theory of IS capabilities (Tan et al. 2015) or the theory of dynamic capabilities, both of which have been identified as relevant to the evolution of an innovation platform (Haki et al. 2024), and focus specifically on BR (re)design.
8 Conclusion
Creating and maintaining complementor engagement is not a trivial task for platform providers facing inter-platform competition. Using the empirical context of the IIoT domain, we design, demonstrate, and evaluate a method to align the (re)design of BRs with complementor satisfaction, ultimately leading to increased complementor engagement. To this end, service quality techniques are used to obtain complementor feedback on the perceived quality of BRs, assess the issues and their impact on complementor satisfaction, and prioritize quality measures that contribute most to complementor satisfaction. The method is theoretically grounded in the Expectation-Confirmation Theory (Oliver 1980; De Ruyter et al. 1997) and draws on rich empirical data from the Siemens MindSphere ecosystem. Using echeloned DSR, this study accumulates design knowledge from multiple studies and iterations. The method is instantiated as a prototypical web application to help platform providers collect and process feedback through an online questionnaire, dashboard visualizations, and automatically generated reports. The practical relevance and application boundaries of the method were evaluated in twelve focus groups with naturalistic users. This research aids platform providers by presenting a holistic and iterative approach that helps align BR (re)design with complementor satisfaction and shows with empirical data how to apply it. It also helps platform scholars by enriching design-oriented research on BRs with a new perspective on platform governance research.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The research project is the first author’s doctoral thesis. Preliminary empirical results from the design science project’s early stages were presented at conferences and are cited here where appropriate.
At the time of publication, the revised prototype is available as a web-based application at the following URL: https://br-satisfaction-management.de
Arnold L, Jöhnk J, Vogt F, Urbach N (2022) IIoT platforms’ architectural features – a taxonomy and five prevalent archetypes. Electron Mark 32:927–944. https://doi.org/10.1007/s12525-021-00520-0
Baskerville R, Baiyere A, Gregor S, Hevner A, Rossi M (2018) Design science research contributions: finding a balance between artifact and theory. J Assoc Inf Syst 19(5):358–376. https://doi.org/10.17705/1jais.00495
Bender B (2020) The impact of integration on application success and customer satisfaction in mobile device platforms. Bus Inf Syst Eng 62(6):515–533. https://doi.org/10.1007/s12599-020-00629-0
Ceccagnoli M, Forman C, Huang P, Wu DJ (2012) Cocreation of value in a platform ecosystem: the case of enterprise software. MIS Q 36(1):263–290. https://doi.org/10.2307/41410417
Constantinides P, Henfridsson O, Parker GG (2018) Introduction – platforms and infrastructures in the digital age. Inf Syst Res 29(2):381–400. https://doi.org/10.1287/isre.2018.0794
Dal Bianco V, Myllärniemi V, Komssi M, Raatikainen M (2014) The role of platform boundary resources in software ecosystems: a case study. In: Proceedings of the 11th IEEE/IFIP conference on software architecture, Sydney, pp 11–20
De Ruyter K, Bloemer J, Peeters P (1997) Merging service quality and service satisfaction; an empirical test of an integrative model. J Econ Psychol 18:387–406. https://doi.org/10.1016/S0167-4870(97)00014-7
Engert M, Evers J, Hein A, Krcmar H (2022) The engagement of complementors and the role of platform boundary resources in e-commerce platform ecosystems. Inf Syst Front 24:2007–2025. https://doi.org/10.1007/s10796-021-10236-3
Engert M, Evers J, Hein A, Krcmar H (2023) Sustaining complementor engagement in digital platform ecosystems: antecedents, behaviours and engagement trajectories. Inf Syst J. https://doi.org/10.1111/isj.12438
Ghazawneh A, Henfridsson O (2010) Governing third-party development through platform boundary resources. In: Proceedings of the 31st ICIS, St. Louis
Ghazawneh A, Henfridsson O (2013) Balancing platform control and external contribution in third-party development: the boundary resources model. Inf Syst J 23(2):173–192. https://doi.org/10.1111/j.1365-2575.2012.00406.x
Gregor S, Chandra Kruse L, Seidel S (2020) Research perspectives: the anatomy of a design principle. J Assoc Inf Syst 21(6):1622–1652. https://doi.org/10.17705/1jais.00649
Haki K, Blaschke M, Aier S, Winter R, Tilson R (2024) Dynamic capabilities for transitioning from product platform ecosystem to innovation platform ecosystem. Eur J Inf Syst 33(2):181–199. https://doi.org/10.1080/0960085X.2022.2136542
Hayes BE (2008) Measuring customer satisfaction and loyalty. Survey design, use, and statistical analysis methods. Quality Press, Milwaukee
Heimburg V, Wiesche M (2022) Relations between actors in digital platform ecosystems: a literature review. In: Proceedings of the 30th ECIS, Timisoara
Hein A, Weking J, Schreieck M, Wiesche M, Böhm M, Krcmar H (2019) Value co-creation practices in business-to-business platform ecosystems. Electron Mark 29(3):503–518. https://doi.org/10.1007/s12525-019-00337-y
Hill N, Alexander J (2000) Handbook of customer satisfaction and loyalty measurement. Gower
Hurni T, Huber TL, Dibbern J, Krancher O (2021) Complementor dedication in platform ecosystems: rule adequacy and the moderating role of flexible and benevolent practices. Eur J Inf Syst 30(3):237–260. https://doi.org/10.1080/0960085X.2020.1779621
ISO/IEC 25010 (2011) Systems and software engineering – systems and software quality requirements and evaluation (SQuaRE) – system and software quality models, Geneva
Jacobides MG, Cennamo C, Gawer A (2024) Externalities and complementarities in platforms and ecosystems: from structural solutions to endogenous failures. Res Policy 53(1):104906. https://doi.org/10.1016/j.respol.2023.104906
Karhu K, Gustafsson R, Lyytinen K (2018) Exploiting and defending open digital platforms with boundary resources: android’s five platform forks. Inf Syst Res 29(2):479–497. https://doi.org/10.1287/isre.2018.0786
Karhu K, Heiskala M, Ritala P, Thomas LD (2024) Positive, negative, and amplified network externalities in platform markets. Acad Manag Perspect 38(3):349. https://doi.org/10.5465/amp.2023.0119
Kauschinger M, Schreieck M, Böhm M, Krcmar H (2021) Knowledge sharing in digital platform ecosystems – A textual analysis of SAP’s developer community. In: Wirtschaftsinformatik 2021 proceedings
Marheine C, Engel C, Back A (2021) How an incumbent telecoms operator became an IoT ecosystem orchestrator. MIS Q Exec 20(4):297–314. https://doi.org/10.17705/2msqe.00055
Mark G, Lyytinen K, Bergman M (2007) Boundary objects in design: an ecological view of design artifacts. J Assoc Inf Syst 8(11):546–568. https://doi.org/10.17705/1jais.00144
Melville NP, Kohli R (2021) Models for API value generation. MIS Q Exec 20(2):151
Miguel JP, Mauricio D, Rodríguez G (2015) A review of software quality models for the evaluation of software products. Int J Softw Eng Appl 5(6):31–54. https://doi.org/10.5121/ijsea.2014.5603
Mosch P, Majocco P, Obermaier R (2023) Contrasting value creation strategies of industrial-IoT-platforms – a multiple case study. Int J Prod Econ 263:108937. https://doi.org/10.1016/j.ijpe.2023.108937
Nolte A, Pe-Than EPP, Filippova A, Bird C, Scallen S, Herbsleb JD (2018) You hacked and now what? Exploring outcomes of a corporate hackathon. In: Proceedings of the ACM on human-computer interaction 2 (CSCW). https://doi.org/10.1145/3274398
O’Mahony S, Karp R (2020) From proprietary to collective governance: how do platform participation strategies evolve? Strat Manag J 43(3):530–562. https://doi.org/10.1002/smj.3150
Oberländer AM, Röglinger M, Rosemann M, Kees A (2018) Conceptualizing business-to-thing interactions – a sociomaterial perspective on the Internet of Things. Eur J Inf Syst 27(4):486–502. https://doi.org/10.1080/0960085X.2017.1387714
Oliver RL (1980) A cognitive model of the antecedents and consequences of satisfaction decisions. J Mark Res 17(4):460–469. https://doi.org/10.2307/3150499
Österle H, Becker J, Frank U, Hess T, Karagiannis D, Krcmar H, Loos P, Mertens P, Oberweis A, Sinz EJ (2011) Memorandum on design-oriented information systems research. Eur J Inf Syst 20(1):7–10. https://doi.org/10.1057/ejis.2010.55
Parker G, van Alstyne M, Jiang X (2017) Platform ecosystems: how developers invert the firm. MIS Q 41(1):255–266
Pauli T, Marx E, Fielt E, Marheine C, Matzner M (2025) Inhibitors and enablers of leverage in digital industrial platform ecosystems. Bus Inf Syst Eng. https://doi.org/10.1007/s12599-024-00896-1
Peffers K, Tuunanen T, Niehaves B (2018) Design science research genres: introduction to the special issue on exemplars and criteria for applicable design science research. Eur J Inf Syst 27(2):129–139. https://doi.org/10.1080/0960085X.2018.1458066
Petrik D, Schüler F, Springer V, Fiebich M, Kretzschmar K, Herzwurm G (2021) IoT-Plattformökosystemanalyse am Beispiel von Amazon Web Services IoT. Wirtschaftsinform Manag 13(2):100–109. https://doi.org/10.1365/s35764-021-00327-w
Petrik D, Herzwurm G (2019) IIoT ecosystem development through boundary resources: a Siemens MindSphere case study. In: Proceedings of the 2nd IWSIB, Tallinn. https://doi.org/10.1145/3340481.3342730
Petrik D, Herzwurm G (2020a) Boundary resources for IIoT platforms – a complementor satisfaction study. In: Proceedings of the 41st ICIS, Hyderabad
Petrik D, Herzwurm G (2020b) Complementor satisfaction with boundary resources in IIoT ecosystems. In: Proceedings of the 23rd international conference on business information systems. Colorado Springs, pp 351–366. https://doi.org/10.1007/978-3-030-53337-3_26
Petrik D, Herzwurm G (2020c) Towards the IIoT ecosystem development – understanding the stakeholder perspective. In: Proceedings of the 28th ECIS, Marrakesh
Pries-Heje J, Baskerville R, Venable JR (2008) Strategies for design science research evaluation. In: Proceedings of the 16th ECIS, Galway
Rosemann M, Vessey I (2008) Toward improving the relevance of information systems research to practice: the role of applicability checks. MIS Q 32(1):1–22. https://doi.org/10.2307/25148826
Sarker S, Sarker S, Sahaym A, Bjorn-Andersen N (2012) Exploring value cocreation in relationships between an ERP vendor and its partners: a revelatory case study. MIS Q 36(1):317–338. https://doi.org/10.2307/41410419CrossRef
Schüler F, Petrik D (2023) Measuring network effects of digital industrial platforms: towards a balanced platform performance management. Inf Syst E-Bus Manag 21:863–911. https://doi.org/10.1007/s10257-023-00655-x
Star SL, Griesemer JR (1989) Institutional ecology, ‘translations’ and boundary objects: amateurs and professionals in Berkeley’s museum of vertebrate zoology, 1907–39. Soc Stud Sci 19(3):387–420
Stoiber C, Schönig S (2022) Improving business processes with the Internet of Things – a taxonomy of IIoT applications. In: Proceedings of the 30th ECIS, Timisoara
Svahn F, Mathiassen L, Lindgren R (2017) Embracing digital innovation in incumbent firms: how Volvo Cars managed competing concerns. MIS Q 41(1):239–253
Tan B, Pan SL, Lu X, Huang L (2015) The role of IS capabilities in the development of multi-sided platforms: the digital ecosystem strategy of Alibaba.com. J Assoc Inf Syst 16(4):248–280. https://doi.org/10.17705/1jais.00393
Tiwana A, Konsynski B, Bush AA (2010) Research commentary – platform evolution: coevolution of platform architecture, governance, and environmental dynamics. Inf Syst Res 21(4):675–687. https://doi.org/10.1287/isre.1100.0323
Töpfer A, Mann A (2008) Kundenzufriedenheit als Basis für Unternehmenserfolg. In: Töpfer A (ed) Handbuch Kundenmanagement. Springer, Cham, pp 37–80
Tremblay MC, Hevner AR, Berndt DJ (2010) Focus groups for artifact refinement and evaluation in design research. Commun Assoc Inf Syst 26(1):599–618. https://doi.org/10.17705/1CAIS.02627
Tuunanen T, Winter R, vom Brocke J (2024) Dealing with complexity in design science research – a methodology using design echelons. MIS Q 48(2):427–458. https://doi.org/10.25300/MISQ/2023/16700
Venable J, Pries-Heje J, Baskerville R (2012) A comprehensive framework for evaluation in design science research. In: Peffers K et al (eds) Design science research in information systems. Advances in theory and practice. Springer, Heidelberg, pp 423–438
Weiss N, Wiesche M, Schreieck M, Krcmar H (2020) Learning to be a platform owner: how BMW enhances app development for cars. IEEE Trans Eng Manag 69(6):4019–4035. https://doi.org/10.1109/TEM.2020.3017051
Weiss N, Schreieck M, Wiesche M, Krcmar H (2023) Lead complementor involvement in the design of platform boundary resources: a case study of BMW’s onboard apps. Inf Syst J 33(6):1279–1311. https://doi.org/10.1111/isj.12449
Wulf J, Blohm I (2020) Fostering value creation with digital platforms: a unified theory of the application programming interface design. J Manag Inf Syst 37(1):251–281. https://doi.org/10.1080/07421222.2019.1705514
Wulfert T, Woroch R, Strobel G, Seufert S, Möller F (2022) Developing design principles to standardize e-commerce ecosystems: a systematic literature review and multi-case study of boundary resources. Electron Mark 32(4):1813–1842. https://doi.org/10.1007/s12525-022-00558-8
Yoo Y, Henfridsson O, Lyytinen K (2010) Research commentary – the new organizing logic of digital innovation: an agenda for information systems research. Inf Syst Res 21(4):724–735. https://doi.org/10.1287/isre.1100.0322
Zapadka P, Hanelt A, Firk S (2022) Digital at the edge – antecedents and performance effects of boundary resource deployment. J Strateg Inf Syst 31:101708. https://doi.org/10.1016/j.jsis.2022.101708
Zeithaml VA, Berry LL, Parasuraman A (1988) Communication and control processes in the delivery of service quality. J Mark 52(2):35–48