Published in: Business & Information Systems Engineering 3/2022

Open Access | Research Paper | 1 July 2022

When Self-Humanization Leads to Algorithm Aversion

What Users Want from Decision Support Systems on Prosocial Microlending Platforms

Authors: Pascal Oliver Heßler, Jella Pfeiffer, Sebastian Hafenbrädl


Abstract

Decision support systems are increasingly being adopted by various digital platforms. However, prior research has shown that certain contexts can induce algorithm aversion, leading people to reject their decision support. This paper investigates how and why the context in which users are making decisions (for-profit versus prosocial microlending decisions) affects their degree of algorithm aversion and ultimately their preference for more human-like (versus computer-like) decision support systems. The study proposes that contexts vary in their affordances for self-humanization. Specifically, people perceive prosocial decisions as more relevant to self-humanization than for-profit contexts, and, in consequence, they ascribe more importance to empathy and autonomy while making decisions in prosocial contexts. This increased importance of empathy and autonomy leads to a higher degree of algorithm aversion. At the same time, it also leads to a stronger preference for human-like decision support, which could therefore serve as a remedy for an algorithm aversion induced by the need for self-humanization. The results from an online experiment support the theorizing. The paper discusses both theoretical and design implications, especially for the potential of anthropomorphized conversational agents on platforms for prosocial decision-making.
Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s12599-022-00754-y.
Accepted after one revision by the editors of the special issue.

1 Introduction

Decision support systems are becoming faster, smarter, and more powerful by the minute, and thus it is for good reason that they can be found on just about any successful internet platform in the form of recommendation systems, conversational agents, or interactive decision aids (Aggarwal 2016; Jung et al. 2018; Maedche et al. 2019; Pfeiffer et al. 2014). However, as these decision support systems spread to more and more domains of life, the question arises as to what extent users are willing to use them in every context. Indeed, while algorithms are being rapidly adopted in some contexts, prior research has shown that people are often algorithm averse (Castelo et al. 2019; Dietvorst et al. 2015) and that they might prefer the support of another human (Dietvorst et al. 2015; Sinha and Swearingen 2001; Yeomans et al. 2019)—for instance, if they perceive a task to be more subjective and thus requiring intuition as well as personal interpretation (Castelo et al. 2019; Inbar et al. 2010). In a related stream of research, Seeger et al. (2021) proposed that some tasks are more human-like, meaning that the support system is substituting for a human interaction partner and that this might affect users’ expectations of the system’s design. Overall, given the huge potential of decision support systems to facilitate and improve decision-making, there is continued interest in the question about context-specific reasons for algorithm aversion, both theoretically, as such reasons are closely tied to a deep understanding of its underlying mechanisms, and practically, with an eye toward building context-specific remedies to overcome this bias.
In this paper, we address this question by building on the theoretical framework of self-humanization as a particularly suitable conceptual lens for explaining contextual differences in algorithm aversion. The central tenet of this framework is that people want to be seen by others, and to see themselves, as fully human (Haslam et al. 2005). In order to feel human, people place great importance on using abilities that have been called human nature attributes. They believe these attributes cannot be shared with machines—think, for example, of emotional responsiveness, interpersonal warmth, agency, cognitive openness, and depth (Haslam 2006). The main thesis of this paper is thus that in decision contexts where people see such human nature attributes as particularly important, they become algorithm averse and would prefer to be supported by a human (as humans have these attributes, but algorithms cannot possess them (Haslam 2006)). Specifically, we propose that the underlying reasons that make people averse to algorithms in contexts they deem relevant for self-humanization are two facets of self-humanization: the importance of empathy and autonomy.
In addition to this theoretical contribution, we consider the practical implications of our research model and propose decision support systems that imitate human-like characteristics as a remedy for humanization-induced algorithm aversion. We call such decision support systems human-like decision support. Imagine anthropomorphized conversational agents who, using natural language, emulate human-to-human communication (Maedche et al. 2019; Schuetzler et al. 2014; Seeger et al. 2021), or consider the applications of neurophysiological measurements for making communication between humans and computers emotionally richer (Picard 2003; Zheng and Lu 2015). Or contemplate the attempts made to compel black box artificial intelligence algorithms to explain their decisions to the user (Adadi and Berrada 2018; Barredo Arrieta et al. 2020). Such decision support systems are not only rising in popularity; they also prompt the user to ascribe human nature attributes to them.
One decision context for which human nature attributes are seen as particularly important is that of prosocial decisions, defined as decisions to benefit others. People decide to help others in need, volunteer for good causes, and give money to charities. Prior research has shown that two factors stemming from these human nature attributes are particularly relevant for prosocial decisions: empathy and autonomy. Indeed, rather than trying to rationally find the option that produces the maximal benefit to others (or, more generally, the maximal welfare gain) or delegating their decision to an algorithm that could approximate such rationality, people often prefer to actively and autonomously choose options aligned with their own subjective preferences (Berman et al. 2018). They aim to select options that feel right (i.e., that give them a warm glow (Andreoni 1990; Dunn et al. 2014)) and that allow them to experience empathy with the beneficiary (Galak et al. 2011; Loewenstein and Small 2007). Despite these rather peculiar characteristics of the prosocial decision context, people increasingly use digital platforms to engage in prosocial behavior (e.g., Galak et al. 2011). To the best of our knowledge, no previous research has used the lens of self-humanization to illuminate the context of prosocial decisions with the aim of exploring how it explains algorithm aversion and developing domain-specific remedies.
From a research design perspective, prosocial decisions are also a particularly well-suited context for studying self-humanization and algorithm aversion. One type of prosocial platform, prosocial microlending, has a for-profit counterpart in the form of regular for-profit microlending platforms. On both types of platforms, users select entrepreneurs, with the main difference that on one platform, the users receive no interest payments and follow prosocial motives (Galak et al. 2011; Haas et al. 2014), whereas on the other, they want to make money (Haas et al. 2014). Comparing for-profit with prosocial microlending decisions allows us to change the context (and thereby the relevance of self-humanization) while keeping most elements of the decision process constant and thus to isolate the effect of the decision context as thoroughly as possible. Specifically, in an online experiment, we manipulated the relevance of self-humanization by randomly assigning participants to make decisions either on a for-profit or on a prosocial microlending platform.
Our experiment provides evidence supporting our research model and thus the hypothesized causal relationship between factors that have rarely been studied together and are important for understanding contextual differences in algorithm aversion. We thereby make three main contributions: First, we build on the theoretical lens of self-humanization to understand how differences between decision contexts (and with them different types of platforms) affect algorithm aversion. Second, we propose and test the two main mechanisms of how self-humanization drives this context-specific algorithm aversion: the importance the user gives to autonomy and the importance the user gives to empathy. Third, we explore the practical implications for how this context-specific algorithm aversion can be remedied: by making the decision support system appear more human-like. This solution obviously has direct implications for the designers of decision support systems in different contexts. Creating a decision support system based on these ideas carries the promise of not only satisfying users’ desire to feel more human, but also of reinforcing prosocial behavior. Overall, our results strongly support the idea that decision support systems cannot merely be copied and pasted between contexts, but need to be thoroughly adapted to users’ preferences and expectations to prevent and overcome algorithm aversion.

2 Theory

2.1 Algorithm Aversion

Algorithms have long been proposed as a means to overcome the cognitive limitations of humans (Burton et al. 2020; Dawes 1979; Meehl 1954). Indeed, several studies in different contexts have shown that algorithms can and do outperform humans, for example, in forecasting tasks (Grove et al. 2000) and supply chain distribution (Validi et al. 2015). While some form of algorithm appreciation seems to exist in some domains (Logg et al. 2019; Prahl and van Swol 2017), in many contexts, people seem to be intuitively averse to using them, a phenomenon that was termed “algorithm aversion” by Dietvorst et al. (2015).
One initial focus of this research was users’ high expectations concerning the performance of algorithms: they expect them to be perfect. Consequently, people quickly lose trust in algorithms once they see them err (Dietvorst et al. 2015, 2018). Of course, predicting the future perfectly is inherently difficult; thus, even extremely well-crafted algorithms will err from time to time (Dietvorst et al. 2015; Prahl and van Swol 2017). However, there are also cases in which people did not observe the algorithm at work and thus could not learn about algorithmic failures (e.g., Longoni et al. 2019), and yet they still felt algorithm aversion. Taking the breadth of the phenomenon into account, we follow Jussupow et al. (2020) and define algorithm aversion as the “biased assessment of an algorithm which manifests in negative behaviors and attitudes towards the algorithm compared to a human agent” (p. 4).
Algorithm aversion is an umbrella term, and there are several different reasons underlying this biased assessment. In recent literature reviews, these different causes have been discussed and categorized (see, for example, Burton et al. (2020) and Jussupow et al. (2020)). Let us give a few examples: We already mentioned the expectation that an algorithm should work perfectly (Dietvorst et al. 2015, 2018), which fits into the larger category of users’ beliefs about what an algorithm is capable of, and which might be driven by a user’s domain-specific expertise—with experts often showing higher degrees of algorithm aversion. Relatedly, humans make decisions differently from the way computers do (cognitive compatibility), for instance, by using heuristics, which are simple decision strategies that ignore part of the available information (Hafenbrädl et al. 2016; Hoffrage et al. 2018). Most of the time humans act in a world of uncertainty where not all possible consequences (and their probabilities) of a decision are known or knowable (Neth and Gigerenzer 2015), whereas typically algorithms optimize under risk, which means they implicitly assume that they know all outcomes and probabilities (divergent rationalities). Moreover, the category of decision autonomy describes the feeling of being in control, which could be diminished if one cannot understand how an algorithm actually makes decisions, or if one cannot influence and control how an algorithm makes the decision. As these examples illustrate, there are many different categories of causes for algorithm aversion; it comes in many flavors, forms, and functions. Some of these causes are driven not only by features of the algorithms themselves, but also by features of the context in which the algorithms are used (Castelo et al. 2019).

2.2 Self-Humanization

One theoretical dimension that seems particularly relevant for explaining contextual differences in algorithm aversion stems from the theoretical framework of self-humanization (Haslam 2006). Haslam et al. (2005) proposed that people want to be seen by others, and to see themselves, as fully human—to the point where they see themselves as more human than others. There are two distinct senses of humanness that contribute to being seen as fully human (Haslam 2006). First, uniquely human attributes distinguish humans from animals, although despite being labeled uniquely human (e.g., cognitive capabilities, like logic, and rationality), they can be shared with machines. Second, and more importantly for explaining algorithm aversion, human nature attributes comprise attributes that people (across cultures) believe cannot be shared with machines (although potentially with animals), and thus, by extension, with algorithms and decision support systems (Haslam et al. 2008; Kahn et al. 2006). In his review paper, Haslam (2006) proposed five categories of human nature attributes: emotional responsiveness, interpersonal warmth, agency, cognitive openness, and depth. Prior research has found that people assess themselves (relative to others) to more strongly embody human nature attributes, especially openness, warmth, and emotionality (Haslam et al. 2005). Moreover, they also want to see themselves, and be seen by others, as possessing human nature attributes, which are perceived as more important and more deeply rooted in the individual, relative to uniquely human attributes (Bain et al. 2006; Haslam et al. 2000, 2004).
Contexts differ widely in how relevant they are to, how diagnostic they are of, and what affordances they offer for embodying human nature attributes. For instance, contexts that prompt people to focus on and prioritize making money have been found to decrease self-humanization, particularly in terms of human nature attributes (e.g., Ruttan and Lucas 2018). In contrast, prosocial decisions, such as helping a friend in need or donating to charity, place humanization front and center; the very act of engaging in prosocial decisions in the first place is unique to human nature. Making prosocial decisions allows people to embody their own human nature attributes and, in consequence, to be seen as and to feel more human.
More generally, following Ruttan and Lucas (2018), who built on Schwartz’s circumplex model of human goals and values (Schwartz 1992, 2013), human nature attributes can be mapped onto self-transcendence values (values that promote the welfare of others, benevolence, interconnectedness, and emotionality). The fact that these self-transcendent values form an antagonistic relationship with self-enhancement values (values that promote the self, such as wealth or power) can explain the negative relationship between money prioritization and human nature attributes. In other words, different contexts activate different goals (self-transcendence versus self-enhancement), and self-transcendent goals map onto human nature attributes, while self-enhancement goals suppress human nature attributes. We propose that this activation and suppression of human nature attributes, which stand in fundamental tension with the use of machines, algorithms, and computerized decision support systems, is a driver of contextual differences in algorithm aversion.

2.3 Overcoming Algorithm Aversion with Human-like Decision Support

In sum, the theoretical lens of self-humanization highlights that in contexts that people deem relevant for their self-humanization, there is a fundamental tension between human nature attributes on the one hand and using machines, algorithms, and computerized decision support systems on the other. Yet, when it comes to decisions on digital platforms, the sheer number of possibilities on many platforms can be overwhelming, and users long for ways to reduce the decision effort (e.g., Häubl and Trifts 2000). In principle, this renders the superior capabilities of algorithms to screen and integrate large amounts of information very attractive. The question arises whether it is possible, and if so, how, to make algorithm-based decision support more palatable to decision-makers in contexts in which they experience self-humanization-driven algorithm aversion.
The theoretical lens of self-humanization not only allows for understanding the underlying reasons for algorithm aversion in such contexts but also points to a potential solution: create the impression that the decision support is more human-like (and less machine-like). The intuitive classification of ways to support decisions in human-like and computer-like decision support stems from Seeger et al. (2021) and their concept of human-like versus computer-like tasks in the context of conversational agents. Human-like decision tasks are tasks in which a conversational agent is substituted for a human interaction partner (Lankton et al. 2015). These are tasks that are typical for a human (Seeger et al. 2021). We adapted the definition of human-like versus computer-like tasks from Seeger et al. (2021) and tailored it to decision support systems: human-like decision support refers to a decision support system that has characteristics that are typical for a human (e.g., possessing human nature attributes). As there is not necessarily a clear separation between these types of decision support systems, they can be placed on a continuum (Lankton et al. 2015).
There are multiple ways to dress up a decision support system to make it come across as more human-like. One prominent approach relies on anthropomorphization, which literally means humanizing (Epley et al. 2007), for example, through the use of social cues (Gnewuch et al. 2017; Seeger et al. 2021). A further way to create more human-like decision support systems might be to let computers simulate emotions, which is a burgeoning research area in computer science (e.g., affective computing). Decision support systems could try to give the user the impression that the computer has feelings by letting the computer detect emotions in both users and loan recipients with algorithms (Swangnetr and Kaber 2013). For example, the decision support system could try to infer emotions from the recipient’s picture (Garcia-Garcia et al. 2017) or from text using sentiment analysis (Yadollahi et al. 2017), and a virtual agent might even be able to assume different facial expressions (Gordon et al. 2019).1
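To make this concrete, the following minimal sketch (ours, not part of the original study) shows how a decision support system might derive a coarse affective signal from a loan applicant's project description using NLTK's off-the-shelf VADER sentiment analyzer; the function name, example text, and thresholds are hypothetical.

```python
# Illustrative sketch: deriving a coarse affective cue from a project description
# with NLTK's VADER sentiment analyzer. The thresholds and example text below are
# hypothetical; the paper does not prescribe a specific implementation.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

def affective_cue(description: str) -> str:
    """Map a project description to a coarse emotional-tone label."""
    compound = SentimentIntensityAnalyzer().polarity_scores(description)["compound"]
    if compound >= 0.05:          # commonly used VADER cut-offs
        return "hopeful / positive framing"
    if compound <= -0.05:
        return "distressed / negative framing"
    return "neutral framing"

# Hypothetical profile text
print(affective_cue("After the flood destroyed her stall, Amina hopes to rebuild "
                    "her small grocery business and support her family."))
```

A fuller system along the lines sketched in the literature cited above might combine such textual cues with image-based emotion recognition and present them through an anthropomorphized agent rather than as a raw score.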

2.4 For-Profit versus Prosocial Decision Contexts

One straightforward operationalization of such contextual differences in the relevance of human nature attributes is to compare and contrast for-profit with prosocial decision-making contexts. For-profit decisions are defined as decisions people make to make money (e.g., interest payments from entrepreneurs), whereas prosocial decisions are defined as decisions that people make for the benefit of others (Eisenberg and Miller 1987).
We chose the domain of microlending decisions because there are for-profit and prosocial versions of microlending platforms, which creates a natural comparison that allows us to experimentally manipulate the context of a digital platform as cleanly as possible. Microlending itself is a relatively new financial instrument for providing entrepreneurs with small loans when traditional sources of financing may be unobtainable for them, for instance, due to their lack of collateral (Allison et al. 2013; Bruton et al. 2011). Peer-to-peer online platforms feature an emerging form of microlending that allows individuals to select entrepreneurs on the basis of the information contained in investment profiles, for instance, on for-profit microlending platforms like Prosper, FundedByMe, and Wisefund and on prosocial platforms like KIVA, GoFundMe, and Lend for Peace.
Because of for-profit decisions’ strong focus on making money (Haas et al. 2014), the challenge of decision-making in such a context amounts to making good inferences about which loans will likely be paid back on time or even be paid back at all (Moss et al. 2015). In prosocial microlending, in contrast, lenders want to help someone in need (e.g., small business owners in developing countries) by lending them money interest-free (e.g., Allison et al. 2015; Galak et al. 2011). Prior research has found that in prosocial contexts, people do not think (or at least they act as if they do not think) that options can be objectively ranked (Berman et al. 2018), and thus they believe that there is no objectively best option that would likely have the largest positive impact on social welfare overall (Caviola et al. 2020). Consequently, people prefer to base their decisions on more subjective factors—which are typically related to and driven by the abilities of human nature described above (e.g., their experience of empathy). Relying on their human nature capability to connect with the beneficiary in prosocial microlending thus provides an ideal contrast to the clearly defined criteria that lend themselves to rational optimization by machines in for-profit microlending (Bruton et al. 2011; Moss et al. 2015).

2.5 Human Nature Attributes, Empathy, and Autonomy

What are the implications of the particular relevance of human nature attributes for explaining contextual differences in algorithm aversion? The first factor stemming from these human nature attributes that is particularly relevant for prosocial decision-making is empathy, which is defined as the ability to take the emotional perspective of someone else—feeling as others—and includes the feeling of sympathy—feeling for others (Batson 2014; Cuff et al. 2016; Davis 1983; Loewenstein and Small 2007). Emotions in general, and empathy in particular, play a crucial role when making prosocial decisions (Barasch et al. 2014; Berman et al. 2018; Caviola et al. 2020). For instance, in the process of scrutinizing potential recipients of prosocial lending—that is, browsing through a list of entrepreneurs in need—people will emotionally react to photos and individual stories and often ultimately make their decisions based on this empathic reaction (Barasch et al. 2014; Eisenberg and Miller 1987; Herzenstein et al. 2011). Prompting people to adopt a more deliberative information processing approach (thus reducing their reliance on empathy) has been found to lower donations for recipients (Dickert et al. 2011; Small et al. 2007). Galak et al. (2011) provided evidence that in prosocial decisions, because similarity reduces social distance and facilitates empathy, people spend more money to help those who are more similar to themselves. More broadly, feelings of empathy and sympathy (as well as emotions such as fear, guilt, pity—cf. Sargeant et al. (2006) and regret—Martinez et al. (2011)) feature prominently among the factors that influence how much people are willing to give (Galak et al. 2011; Hamilton and Sherman 1996; Pavey et al. 2012).
The second factor stemming from these human nature attributes that is particularly relevant for prosocial decision-making is autonomy. In short, to perceive a decision as reflecting their human nature attributes, people would have to be in the driver’s seat, making the decision themselves. Most definitions of autonomy have notions of free choice and self-determination in common (André et al. 2018; Christman 2020; Deci and Ryan 2000; Ryan and Connell 1989; Wertenbroch et al. 2020). For example, Janiesch et al. (2019) posited: “In general, autonomy describes an entity’s or agent’s ability to act independently and self-determined” (p. 164). Longstanding research traditions in psychology have established autonomy as a fundamental human need (Christman 2020; Deci and Ryan 2000). For instance, in self-determination theory (Deci and Ryan 1985, 2000), the autonomy a person experiences while engaging in a task is a central driver of the intrinsic motivation for performing that task.
In for-profit microlending decisions, however, such autonomy might be less desired, as people have less to gain from seeing themselves as being good at maximizing profits than from seeing themselves as being good in terms of possessing human nature attributes—and ultimately as good human beings. At the same time, people have more to lose from having full autonomy (instead of giving up autonomy to their decision support system) in for-profit decisions. As people believe that there are objectively right and wrong choices, selecting the right recipients will allow them to maximize their profits, while selecting others could lead to substantial losses and feelings of regret. In consequence, people might be willing to give up autonomy to others with a higher domain knowledge when they are pursuing a clear, objective goal in their decisions, as, for instance, in for-profit microlending. Yet, giving away autonomy to somebody or something else might undermine the perception that they personally (and thus autonomously) selected the option and thereby ultimately prevent the option from feeling right. Just as building a piece of furniture with one’s own hands positively affects how much one likes the furniture (Norton et al. 2012), making a prosocial decision with one’s own mind might also positively affect how much one feels connected to the recipient.

3 Hypotheses Development

Empathy and autonomy, the two factors whose perceived importance is shaped by the context and, specifically, by contextual differences in the context’s relevance for self-humanization (as described above), can be easily mapped onto the five categories of human nature attributes proposed in Haslam’s (2006) review paper: emotional responsiveness, interpersonal warmth, cognitive openness, agency, and depth. First, without empathy, perceiving what someone else feels is difficult, which closely links empathy with emotional responsiveness and interpersonal warmth. Coldness—the antagonist of interpersonal warmth—stands in direct opposition to empathy. Second, without cognitive openness and agency, decision makers cannot make autonomous decisions; these attributes are preconditions for autonomy. Third, the last category, depth, can again be linked to empathy: without feeling as others feel, how can one achieve a deep understanding of their situation? It is thus not surprising that empathy is seen as one of the most important ways to prevent and overcome dehumanization (Halpern and Weinstein 2004). Furthermore, autonomy is often withdrawn when someone is dehumanized by others (Haslam 2006), which emphasizes the importance of autonomy.
Another reason why these two factors, the importance of empathy and the importance of autonomy, are central for understanding the contextual differences in the relevance of human nature attributes is the transcendent, moral, and altruistic motives such contexts activate (Batson 1990; Eisenberg and Miller 1987). Moral decisions are often seen as deeply grounded in emotions (Gray et al. 2017; Haidt 2001) and in empathy in particular (Decety and Cowell 2014; Shaw et al. 1994). Prior research has also emphasized the close relationship between the ability to make moral judgments and autonomy—the ability to freely choose actions (Monroe et al. 2017; Nahmias et al. 2014).

3.1 Empathy

The first factor stemming from human nature attributes, empathy, is by its very nature not an objective criterion, as different people can have different empathic reactions to the same potential beneficiary of a prosocial decision (Cuff et al. 2016). As in previous research (Dickert et al. 2011; Pavey et al. 2012), we do not use empathy as a measure of individual differences, capacities, or abilities but rather focus on the context-specific importance people grant to this feeling toward others. People can, in consequence, perceive this feeling as more or less relevant for making decisions—which is why it is particularly important for making prosocial decisions (and generally less important for making for-profit decisions). Of course, this is not to say that empathy does not play any role at all in for-profit microlending decisions. For instance, it might allow decision makers to increase the accuracy of their inferences about the likelihood that loans will be paid back (Moss et al. 2015) if they empathically understand the borrowers’ motivations and emotions. However, in general, building on the idea that prosocial decisions are particularly relevant for self-humanization, we expect the context of prosocial microlending, compared to the context of for-profit microlending, to render decision makers’ feelings of empathy more important for making decisions.
H1
Users on prosocial microlending platforms place a higher importance on their empathy with the loan recipients than users on for-profit microlending platforms do.
We expect that the increased importance of empathy for prosocial microlending decisions (compared to for-profit microlending decisions) will ultimately translate into higher levels of algorithm aversion in these contexts, for three main reasons: First, to the extent that people see their own capacity to feel emotions and specifically to feel empathy with the beneficiary as being relevant to making a decision, they will prefer to receive decision support from other actors who also have this capacity. Users might attribute the capacity to feel emotions to other humans but not to computers, because they might be aware of the fact that computers cannot have feelings (Kahn et al. 2006). Additionally, people tend to seek social or parasocial relationships with the source of advice (Önkal et al. 2009; Prahl and van Swol 2017), which works much better when they are getting advice from a human. People want to feel empathy not only toward the recipient of their prosocial loan but also toward the advisor who supports them in the decision process. However, as empathy is a part of human nature that computers cannot possess (Castelo et al. 2019; Haslam 2006), the computer’s perceived lack of empathy for both the loan recipients and the platform’s users could drive the users’ algorithm aversion (Jussupow et al. 2020).
A second reason for increased algorithm aversion relates to the antecedents of algorithm aversion introduced above: cognitive incompatibility and divergent rationalities (Burton et al. 2020) as well as capability (Jussupow et al. 2020). Because empathy is not a capability that is ascribed to computers (Castelo et al. 2019) and yet is perceived to be of great importance for prosocial decision tasks, the user can neither map the algorithm’s decision processes onto the task requirements nor fully understand and “translate” them. Relatedly, the importance of empathy (which is hard to quantify) makes other, more objective criteria that might allow for computing probabilities and inferring objective rankings relatively less relevant. For example, Small et al. (2007) showed that in charity domains, people base their decisions on affective reactions, which are not based on objective criteria (see also Slovic et al. 2006). In sum, decisions based on empathy might appear more unstructured and adapted to a world of uncertainty (where expected value calculations are by definition impossible), which creates a mismatch with algorithmic approaches that are usually based on the optimization of quantified criteria (only possible in a world of risk) (Burton et al. 2020). This mismatch, in turn, leads to algorithm aversion.
Third, research on the transparency of algorithms and the understandability of artificial intelligence (e.g., Rader et al. 2018; Shin and Park 2019) has shown that computer decision aids often appear to people as a black box—people do not understand how and why the decision aid has arrived at its recommendation. Especially in situations where empathy is perceived as important and the users are skeptical whether computers are capable of empathy, they will develop questions about how the algorithm works. For example, they may ask themselves: Does the algorithm include the personal story in its calculation? Would, or could, an algorithm incorporate my personal interests? Because they cannot look into the black box, it becomes difficult, if not impossible, to judge whether an algorithm is capable of taking into account or at least approximating subjective feelings like empathy. If people do not believe that algorithms can incorporate the importance they place on empathy, they will not follow the algorithm’s recommendations—they become algorithm averse.
H2
The more important empathy is for users, the higher their algorithm aversion.

3.2 Autonomy

The second factor stemming from human nature attributes, autonomy, can be understood in a very general way as the ability to make a decision freely and in a self-determined way. While people in general prefer more autonomy over less autonomy, the importance of autonomy differs across contexts (Deci and Ryan 2000). The distinction between giving and giving in, as two types of motivation for prosocial behavior (Andreoni et al. 2017; Cain et al. 2014; Dana et al. 2007), further illuminates such differences. Giving refers to prosocial behavior in which someone engages with full autonomy, willingly, and in the absence of any situational pressure. Giving in refers to reluctant prosocial behavior, in which someone engages, for instance, in response to concerns about reputation or social obligation. When people have the opportunity to avoid a situation in which they would be compelled to give in, they usually take it (Cain et al. 2014). Think, for instance, about a shopping center with two exits, in one of which a homeless person is sitting and begging for money. Research has found that more people choose the other exit, avoiding the situation. At the same time, people voluntarily sign up for fundraisers, volunteer in soup kitchens, and browse on prosocial microlending platforms. A key factor in distinguishing these two types of drivers of prosocial behavior is people’s perceived autonomy. People want to freely decide to help others rather than feeling compelled to do so because only a free, autonomous decision is relevant for self-humanization. They want to feel the warm glow because they made the decision. The same prosocial behavior would not feel as fulfilling if people performed it reluctantly, due to situational pressures.
The feeling that one has to make a loan in for-profit contexts because the opportunity for profit is too good to miss out on (i.e., situational pressures) is much less psychologically meaningful than freely and autonomously wanting to make a loan. Of course, some people enjoy the feeling of mastering the process of selecting highly profitable loans, but making such decisions autonomously does not reflect on their degree of self-humanization—of feeling fully human. Taking these considerations together, we hypothesize:
H3
Users on prosocial microlending platforms place a higher importance on their autonomy while making decisions than users on for-profit microlending platforms do.
This context-dependent desire people have to think and feel that they are making autonomous decisions is easily undermined when algorithms and decision support systems come into play (André et al. 2018; Calvo et al. 2020; Wertenbroch et al. 2020). The mere existence of the “human in the loop” discourse (Parasuraman et al. 2000; Sirajum Munir et al. 2013) underlines this point, although often, humans want more than to merely be in the loop. For instance, autonomy can be undermined by recommendations based on past preferences, which make the opinions and preferences of the individual persons unnaturally stable (André et al. 2018), by withholding alternative information or ill-fitting recommendations (Wertenbroch et al. 2020), and by interactive decision aids that restrict the user by adding constraints to the decision process (Pfeiffer et al. 2014). When people ascribe high importance to autonomy, they might be even more sensitive to those restrictions and more or less subtle influences by algorithms.
Additional evidence for the relationship between autonomy and algorithm aversion comes from Jussupow et al. (2020), although they use the related term agency, following Komiak and Benbasat (2006). They compared different types of algorithms, which they term performative and advisory algorithms. Performative algorithms decide or act completely autonomously, without a human’s involvement, leaving to the user only the option to delegate a task to the algorithm or not. Advisory algorithms follow a “human-in-the-loop” approach: the user always makes the final decision (see Bonaccio and Dalal 2006), giving them the autonomy to rely on or to ignore the algorithm’s advice. Prior research has indeed found that people are more averse to performative algorithms (Palmeira and Spassova 2015) and that they consider algorithmic advice more carefully when it comes from advisory algorithms (Jussupow et al. 2020), supporting the idea that they prefer to keep their autonomy and dislike losing control to the algorithm (Burton et al. 2020). In sum, there is a natural tension between the user’s autonomy and the system’s autonomy. If users want to use the algorithms and computerized decision support systems, they have to give at least some autonomy to them.
At the same time, placing a high importance on autonomy does not lead to an aversion to advice in general to the same extent that it leads to the more specific aversion to algorithmic advice. When it comes to decision support, people face the following conundrum: On the one hand, decision support could help them manage and extract the relevant information. This is especially important when people are pursuing a clear, objective goal by their decisions, such as in for-profit microlending, when they might select recipients with the best-fitting interest rate. On the other hand, decision support systems could undermine the perception that they personally (and thus autonomously) have selected the option. Algorithmic advice is especially likely to undermine this perceived autonomy if users see the algorithm as a black box and thus cannot understand how the algorithm’s advice was computed or if the assumed process the algorithm is following differs widely from the process that users would follow themselves (for instance, by placing less importance on empathy, as machines are assumed to be incapable of feeling empathy). Formally, we state this hypothesis as follows:
H4
The more important autonomy is for the users, the higher their algorithm aversion.

3.3 Human-like Decision Support as a Remedy to Dehumanization-induced Algorithm Aversion

When it comes to human advice, people have a much easier time navigating the conundrum mentioned above and balancing their own autonomy and the autonomy given over to the other agent. Extensive research on advice taking (and often advice rejecting, Logg et al. 2019) has demonstrated how well people in their social environments are attuned to navigating and negotiating this conundrum. When interacting with other humans, they can gain much information through social cues and interpersonal connections (Huang and Lin 2011; Joinson 2001; Moon 2000) without giving up independence and autonomy. When dealing with algorithms and computer systems, they may not only lack many of the social cues that underlie this ability but also lack the confidence in accepting and rejecting advice that comes from their extensive experience with human advice givers. Consequently, letting the user perceive the algorithm more as a human than a machine (Epley et al. 2007), by letting the algorithm have or at least imitate human-like characteristics (e.g., emotions), would be a way out of this conundrum.
As research involving the computers as social actors paradigm (CASA) has already demonstrated, humans often show social responses to computers that are comparable to those they show to humans when interacting with them (Nass et al. 1994; Nass and Moon 2000). Human-like decision support taps into this already-existing perception, building on people’s tendency to seek a social or parasocial relationship with the source of advice in certain contexts (Prahl and van Swol 2017). In other words, human-like decision support aims not only at communicating effectively but also at building a form of human connection to the user. By clothing the algorithmic decision support in human likeness, system designers could ultimately protect users from self-dehumanization in contexts in which they feel a tension between possessing human nature attributes and using (computerized) decision support systems.
The logic behind our next hypotheses is thus that people with high algorithm aversion do not want to use algorithms in a computer-like fashion. However, when algorithms come across as more human-like decision support, many obstacles to using algorithms may disappear or at least become less noticeable. Thus, we formally state the following:
H5
The higher the users’ algorithm aversion, the more they prefer human-like decision support.
We have already alluded to the role of autonomy and empathy in driving people’s algorithm aversion and ultimately in driving their preference for human-like decision support. Building on that, we expect the type of platform to have an overall effect on the acceptance of human-like decision support.
H6
Users prefer more human-like (and less computer-like) decision support on prosocial microlending platforms than on for-profit microlending platforms.
Figure 1 depicts the theoretical framework that we develop in this section.

4 Method

4.1 Independent Variables and Experimental Design and Procedure

We randomly assigned participants to either a prosocial or a for-profit microlending condition in a between-subjects experimental design (see Appendix A for the experimental stimuli; the appendices are available via http://link.springer.com). Participants in the prosocial (for-profit) condition read an explanation of what prosocial (for-profit) peer-to-peer microlending is and saw three examples of what projects could look like (see online appendix Figs. 1, 2). Next to a project description and an abstract picture, the examples contained information on the loan amount, risk rating, and whether the entrepreneurs had repaid their former loans on time. In the for-profit condition, the only additional information shown was the loan’s interest rate.
Because we were not able to derive clear expectations about effect sizes from prior research, we aimed at a sample size that would allow us to detect a small effect. At the same time, we aimed at a sample size large enough to detect simple mediations in order to gain deeper insights into the relationships in the research model. To do so, we relied on the power analysis of Fritz and MacKinnon (2007), which treats a standardized beta of 0.14 as a small effect. Detecting such an effect with a bias-corrected bootstrap at a power of 80% requires a sample size of 462 observations.
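For illustration, the required sample size can be approximated with a Monte Carlo simulation; the sketch below is ours (not the authors' procedure), assumes standardized a- and b-paths of 0.14, and uses a simpler percentile bootstrap with reduced iteration counts, so its power estimate will fall slightly below the 80% that Fritz and MacKinnon (2007) report for the bias-corrected bootstrap at n = 462.

```python
# Monte Carlo sketch of a power analysis for a simple mediation (X -> M -> Y),
# assuming standardized paths a = b = 0.14 and n = 462, as in the text.
# Uses a percentile bootstrap and small iteration counts for speed, so it only
# approximates the bias-corrected figures from Fritz and MacKinnon (2007).
import numpy as np

rng = np.random.default_rng(42)
a = b = 0.14                 # standardized path coefficients (small effect)
n = 462                      # sample size to evaluate
n_sim, n_boot = 200, 500     # kept small so the sketch runs in a few minutes

def indirect_effect(x, m, y):
    a_hat = np.polyfit(x, m, 1)[0]                         # slope of M on X
    design = np.column_stack([np.ones_like(x), x, m])
    b_hat = np.linalg.lstsq(design, y, rcond=None)[0][2]   # slope of Y on M given X
    return a_hat * b_hat

hits = 0
for _ in range(n_sim):
    x = rng.standard_normal(n)
    m = a * x + np.sqrt(1 - a**2) * rng.standard_normal(n)
    y = b * m + np.sqrt(1 - b**2) * rng.standard_normal(n)
    boots = [indirect_effect(x[idx], m[idx], y[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    hits += (lo > 0) or (hi < 0)                           # CI excludes zero

print(f"Approximate power: {hits / n_sim:.2f}")            # expected somewhat below 0.80
```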
We recruited our participants via Amazon Mechanical Turk (MTurk). To ensure that our participants read the provided examples and introduction carefully, we added four comprehension questions at the beginning of the experiment (Goodman et al. 2013). Participants who failed these questions were automatically excluded. In total, we ended up with 615 US-based participants. Each participant was paid $2 for completing the 15-min experiment. After we eliminated participants who tried to complete the experiment multiple times (n = 127), who had already participated in a pilot test of this experiment (n = 2), whose origin was not in the US (n = 2), or who made incorrect statements (e.g., an incorrect worker ID; n = 6), 478 participants remained in the final sample (40% female, 58% male, 1% other, 1% who chose not to provide gender information; mean age = 39.88, SD = 11.12).

4.2 Operationalization of the Dependent Variable

After reading the explanation of the respective experimental conditions, participants answered questions to measure the dependent and control variables. All items can be found in Appendix A.
As a manipulation check for our operationalization of the relevance of self-humanizing (i.e., prosocial versus for-profit context), we used a self-humanization scale based on human nature attributes from the scale of Ruttan and Lucas (2018) at the beginning of the questionnaire. A simple t-test confirmed that participants in the prosocial condition indeed found the prosocial context to be more relevant for self-humanization (n = 242, mean = 5.2, SD = 0.98) than participants in the for-profit condition found the for-profit context (n = 236, mean = 4.4, SD = 1.18, t(476) = 8.14, p < 0.001, d = 0.74).
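The reported test can be reproduced from the summary statistics alone; the following sketch (ours, using SciPy and the group means and standard deviations from Table 1) recomputes the t statistic and Cohen's d, matching the reported values up to rounding of the summary statistics.

```python
# Recomputing the manipulation-check t-test and Cohen's d from the reported
# summary statistics (prosocial: n = 242, M = 5.19, SD = 0.98; for-profit:
# n = 236, M = 4.39, SD = 1.18). Illustrative sketch, not the authors' code.
import numpy as np
from scipy import stats

n1, m1, sd1 = 242, 5.19, 0.98   # prosocial condition
n2, m2, sd2 = 236, 4.39, 1.18   # for-profit condition

t, p = stats.ttest_ind_from_stats(m1, sd1, n1, m2, sd2, n2, equal_var=True)

# Cohen's d based on the pooled standard deviation
sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
d = (m1 - m2) / sd_pooled

print(f"t({n1 + n2 - 2}) = {t:.2f}, p = {p:.2g}, d = {d:.2f}")
# Close to the reported t(476) = 8.14, p < 0.001, d = 0.74 (differences stem from rounding)
```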
To measure the importance of empathy, we adopted two scales from the interpersonal reactivity index of Davis (1980, 1983). From these two scales (perspective taking and empathic concern), we created five 7-point Likert items, which we rephrased to fit our focus on the importance of empathy.
To measure the importance of autonomy, we developed a measure based on definitions of autonomy (Christman 2020; Janiesch et al. 2019; Wertenbroch et al. 2020). Additionally, we consulted the need for autonomy scale and the scale for autonomous motivation (Deci and Ryan 2000; Gagné 2003), which we rephrased to fit our focus on the importance of autonomy.
To measure algorithm aversion, we let our participants indicate on a 7-point Likert scale whether they would choose a human supporter or computerized decision support. This question, adopted from Longoni et al. (2019), is one way to measure algorithm aversion via the user’s choice (Jussupow et al. 2020). We also measured algorithm aversion with one of the alternative approaches outlined by Jussupow et al. (2020), which relies on the user’s evaluation and includes items on trust, appropriateness, and authenticity. If the algorithm is evaluated less favorably than the human on these scales, this is an indicator of algorithm aversion.
Our scale for human-like decision support is anchored in the theoretical basis of dehumanization and, more precisely, in the human nature attributes. We adapted the items from Ruttan and Lucas (2018) and asked the participants about a list of human nature attributes that a decision support system should be capable of.
As controls, we asked participants about their basic demographics, such as age, gender, and where they currently live. In addition, we added some exploratory questions about the importance of different filters (such as loan amount left, risk rating, etc.).
As discussed above, there are many different causes of algorithm aversion. To rule them out as potential confounds, we added several questions to measure them. First, to measure expectations and expertise, we asked about the frequency with which users had previously used such a microlending platform. Second, to gather information about domain knowledge, we added a single question regarding experience with computerized decision support. Third, we added two scales from Bigman and Gray (2018) about the computer’s experiential capability and the computer’s capability to think, reason, and plan. Fourth, to measure incentivization through social norms (e.g., information about another user’s application of the algorithm; see Burton et al. (2020)), we also added a single question (all questions are listed in Appendix A). We did not control for three additional categories specified by Jussupow et al. (2020) and Burton et al. (2020)—performance, social distance, and human involvement—because they are of little relevance to our context.
For each multi-item latent construct, we calculated one standardized factor based on the associated items. For the latent constructs, we examined the convergent and discriminant validity of the measurement instruments. The Cronbach’s alphas and composite reliabilities (CR) were greater than the suggested threshold of 0.70, and the values of the average variance extracted (AVE) were above the suggested minimum of 0.50 (see Table 1 in Appendix B), except for the importance of autonomy scale. Its six items achieved only an AVE of 0.38 and a Cronbach’s alpha of 0.67, which suggested a potential issue with convergent validity. A deeper analysis revealed that two questions loaded poorly on the construct, which was the reason for the low AVE. After removing these two items (3 and 6), we achieved an AVE of 0.49, which is in the acceptable range. However, removing these two items lowered Cronbach’s alpha (to 0.64), which was expected because Cronbach’s alpha is also driven by the number of items combined in the measurement scale. In favor of the higher convergent validity, we decided to base our analyses on the construct without these two items, but as a robustness check, we verified that the results remained robust when we tested the hypotheses with the complete six-item scale.
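For reference, the reliability and convergent-validity statistics used here follow standard formulas; the sketch below (ours) computes Cronbach's alpha from raw item scores and CR and AVE from standardized factor loadings, with purely hypothetical data in place of the study's responses.

```python
# Standard formulas for Cronbach's alpha, composite reliability (CR), and
# average variance extracted (AVE). The item scores and loadings below are
# hypothetical placeholders, not the study's data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of raw scores."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - sum_item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    """loadings: standardized factor loadings of one construct."""
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings**2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    return (loadings**2).mean()

# Hypothetical data: 100 respondents answering a 4-item scale driven by one latent factor
rng = np.random.default_rng(0)
latent = rng.standard_normal(100)
items = latent[:, None] + 0.8 * rng.standard_normal((100, 4))
loadings = np.array([0.72, 0.68, 0.75, 0.70])   # hypothetical standardized loadings

print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
print(f"CR  = {composite_reliability(loadings):.2f}")
print(f"AVE = {average_variance_extracted(loadings):.2f}")
```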
To test the discriminant validity, we assessed the factor loadings and cross-loadings (Gefen and Straub 2005). All of the factors loaded higher on the assigned theoretical construct than on any other factor. An additional criterion for establishing discriminant validity demands that the square root of the AVE be larger than any correlation with another construct (Fornell and Larcker 1981). This criterion was also satisfied (see Table 1 in Appendix B). Finally, the HTMT criterion was below the threshold of 0.85 (Henseler et al. 2015). In sum, we concluded that our measures exhibited an adequate level of convergent and discriminant validity.

5 Results

Table 1 summarizes the descriptive statistics, and Table 2, along with Fig. 2, depicts the results of our statistical analyses. As Table 1 shows, participants on average rated all focal constructs, measured on 7-point Likert scales, higher in the prosocial condition than in the for-profit condition.
Table 1
Descriptive statistics

Variable                                  Prosocial condition (N = 242)   For-profit condition (N = 236)
                                          Mean     SD                     Mean     SD
Relevance of self-humanizing              5.19     0.98                   4.39     1.18
Importance of autonomy                    5.23     1.00                   4.83     1.06
Importance of empathy                     4.98     1.13                   3.88     1.50
Algorithm aversion                        4.52     1.89                   4.21     1.96
Preferred human-like decision support     4.40     1.42                   3.72     1.49

Age: mean = 39.88, SD = 11.12 (N = 478)
Table 2
Empirical results

Hypothesis and path           β      SE     p / CI          Supported?
H1 (a1)                       0.87   0.11   p < 0.001       yes
H2 (b1)                       0.28   0.07   p < 0.001       yes
H3 (a2)                       0.33   0.09   p < 0.001       yes
H4 (b3)                       0.33   0.08   p < 0.001       yes
Indirect effect (a1 × b1)     0.24   0.07   [0.11; 0.39]
Indirect effect (a2 × b3)     0.11   0.04   [0.04; 0.21]
H5 (c1)                       0.22   0.03   p < 0.001       yes
H6 (d1)                       0.67   0.13   p < 0.001       yes

The experimental condition was dummy-coded, with 0 = for-profit and 1 = prosocial. For indirect effects, we used bootstrapped bias-corrected confidence intervals (CI) with 5000 resamples, following the recommendation of Preacher and Hayes (2004, 2008).
To test our hypotheses H1–H5, we used the seemingly unrelated regression (SUREG) framework, as it allowed us to test our hypotheses while including the control variables in the model and as it is suitable for binary independent variables.2 For all analyses, we controlled for age and gender, and when testing H2 and H4 (influence on algorithm aversion), we also controlled for the already mentioned causes of algorithm aversion: perceived domain knowledge, experience with computerized decision support, incentivization through social norms, and the perceived capability of a computer. Table 3 in Appendix B contains more in-depth information on the control variables. Fig. 2, mentioned above, illustrates our empirical model, with the dotted rectangle marking the SUREG model.
Our results support H1: participants placed a significantly higher importance on their empathy with the loan recipient in the prosocial experimental condition than in the for-profit experimental condition (β = 0.87; SE = 0.11; p < 0.001). In addition, H2 is supported, which means that a higher importance of empathy leads to higher algorithm aversion (β = 0.28; SE = 0.07; p < 0.001). In other words, participants in the prosocial condition reported a 0.87 higher importance of empathy on a 7-point Likert scale, and with a 1-point increase in this importance, participants reported a 0.28 higher algorithm aversion.
In addition, H3 is supported by our results: participants placed a significantly higher importance on their autonomy while making the decision in the prosocial condition than while making the decision in the for-profit condition (β = 0.33; SE = 0.09; p < 0.001). Furthermore, our model also supports H4, which means that higher importance of autonomy (β = 0.33; SE = 0.08; p < 0.001) leads to higher algorithm aversion.
To test the relationships postulated in H1 to H4 in detail, we ran a parallel mediation model, allowing the experimental condition to affect algorithm aversion through two mediators, empathy and autonomy. We also included the experimental condition as a direct effect on algorithm aversion in the model (see path b2 in Fig. 2). This direct path b2 was not significant (see Table 2: β = 0.11; SE = 0.17; p = 0.57). Both indirect paths are significant (95% CI of empathy: [0.11; 0.39] and autonomy [0.04; 0.21]). Because of the significant indirect effects in combination with the non-significant direct effect, our mediation model can be classified as an indirect-only model (Zhao et al. 2010), often also described as full mediation.
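To make the analysis pipeline transparent, the following sketch (ours, not the authors' code) estimates such a parallel mediation with equation-by-equation OLS and a percentile bootstrap for the two indirect effects on synthetic data; the paper itself estimates the system with SUREG, includes control variables, and uses bias-corrected intervals, so this is only a simplified analogue.

```python
# Simplified sketch of the parallel mediation: condition (X) -> importance of
# empathy (M1) and autonomy (M2) -> algorithm aversion (Y), with a percentile
# bootstrap for the indirect effects a1*b1 and a2*b3. Data are synthetic; the
# paper uses SUREG with controls and bias-corrected confidence intervals.
import numpy as np

rng = np.random.default_rng(1)
n = 478
x = rng.integers(0, 2, n).astype(float)        # 0 = for-profit, 1 = prosocial
m1 = 0.9 * x + rng.standard_normal(n)          # synthetic importance of empathy
m2 = 0.3 * x + rng.standard_normal(n)          # synthetic importance of autonomy
y = 0.3 * m1 + 0.3 * m2 + 0.1 * x + rng.standard_normal(n)

def ols(design: np.ndarray, outcome: np.ndarray) -> np.ndarray:
    return np.linalg.lstsq(design, outcome, rcond=None)[0]

def indirect_effects(x, m1, m2, y):
    ones = np.ones_like(x)
    a1 = ols(np.column_stack([ones, x]), m1)[1]          # X -> M1
    a2 = ols(np.column_stack([ones, x]), m2)[1]          # X -> M2
    coefs = ols(np.column_stack([ones, x, m1, m2]), y)   # Y on X, M1, M2
    return a1 * coefs[2], a2 * coefs[3]                  # a1*b1, a2*b3

boot = []
for _ in range(5000):                                    # 5000 resamples, as in the paper
    idx = rng.integers(0, n, n)
    boot.append(indirect_effects(x[idx], m1[idx], m2[idx], y[idx]))
boot = np.array(boot)

for label, column in zip(["empathy", "autonomy"], boot.T):
    lo, hi = np.percentile(column, [2.5, 97.5])
    print(f"Indirect effect via {label}: 95% CI [{lo:.2f}; {hi:.2f}]")
```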
Finally, yet importantly, our model also estimates the effect of algorithm aversion on the preference for human-like decision support. As can be seen in Table 2, the result is significant (β = 0.22; SE = 0.03; p < 0.001) and positive, supporting H5.
In order to test our last hypothesis (H6) about the total effect of the experimental condition on human-like decision support, we estimated a simple OLS. As hypothesized, in prosocial decisions (compared to for-profit contexts), human-like support is preferred (β = 0.67; SE = 0.13; p < 0.001). In other words, people reported a 0.67-point higher preference for human-like decision support in the prosocial condition than in the for-profit condition. In summary, our model supports all of our hypotheses.
To explore the robustness of our results, we ran the following three robustness checks, which all followed the same base specification outlined in Fig. 2. As a first robustness check, we estimated our model without any control variables (results in Table 2 in Appendix B). This led to results consistent with our model reported above, with one meaningful difference: the importance of empathy no longer had a statistically significant effect on algorithm aversion (β = 0.03; SE = 0.7; p = 0.68, see Table 2). In consequence, the indirect effect through empathy was also no longer significant. As algorithm aversion is generally conceived of as a multi-determined phenomenon, not adjusting for other known mechanisms (perceived domain knowledge, experience with computerized decision support, incentivization through social norms, and the perceived capability of a computer) might lead to noisy and biased results (i.e., omitted variable bias). That being said, future research into the intricacies of the relationship between algorithm aversion, the importance of empathy, and the control variables would be needed to illuminate this discrepancy between the different models more thoroughly.
As a second robustness check, we reran our models while additionally including the two omitted items from the importance of autonomy scale. As a third check, we used the alternative algorithm aversion scale based on the evaluation instead of the choice. The robustness checks suggest that our results are robust with regard to these different specifications and the inclusion of these items.
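For transparency, the first robustness check amounts to re-estimating the outcome equation with and without the control variables and comparing the key coefficients. The sketch below illustrates that pattern under the same hypothetical column names as above; it is not the authors' code.

# Hedged sketch of the "no controls" robustness check: compare the empathy and
# autonomy coefficients across the full and the reduced specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("experiment_data.csv")
full = smf.ols("aversion ~ condition + empathy + autonomy + domain_knowledge"
               " + dss_experience + social_norms + computer_capability", df).fit()
reduced = smf.ols("aversion ~ condition + empathy + autonomy", df).fit()
print(full.params[["empathy", "autonomy"]])
print(reduced.params[["empathy", "autonomy"]])  # check stability of the key paths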

6 Discussion

Contexts vary in their affordances for self-humanization. While prosocial contexts are highly relevant for and diagnostic of self-humanization (and human nature attributes in particular), for-profit contexts, by comparison, suppress self-humanization goals. In this paper, we theorize that these differences across contexts lead people to place more importance on empathy and autonomy in prosocial contexts (compared to for-profit contexts) and thereby ultimately induce context-specific algorithm aversion. Human-like decision support holds the promise of remedying this self-humanization-driven algorithm aversion. The results from our experiment lend support to our hypotheses.
First, our experiment shows that decision contexts influence the relevance of self-humanizing (self-humanization was higher in the prosocial than in the for-profit context). The idea that self-humanization is affected by the decision context of digital platforms is, to the best of our knowledge, new and has further implications. Understanding the mechanism of self-humanizing might help us understand what users want from decision support systems and, therefore, how they should be designed. One potential direction for future research would be to broaden the scope of different decision contexts and to investigate the context-specific implications for algorithm aversion and the type of decision support people prefer. The five categories of human nature attributes (emotional responsiveness, interpersonal warmth, cognitive openness, agency, and depth) might carry considerable context-specific implications. For example, there might be domains in which cognitive openness is particularly important for decisions, potentially connecting to research on computational creativity in artificial intelligence (Bentley and Corne 2002; Colton et al. 2012). Moreover, while we concentrated on human nature attributes, uniquely human attributes might also play an interesting role in the design of decision support systems. For instance, they might tighten the connection between humans and algorithms, because these attributes can be shared with machines. In particular, when users consider attributes such as logic and rationality to be important criteria, algorithm aversion might decrease, potentially enabling the acceptance of different kinds of decision support systems.
Second, our experimental results provide further evidence for the idea that empathy is a major factor when it comes to prosocial behavior, and especially that this is also the case for prosocial decisions on digital platforms (H1). We thereby expand on existing research, which has demonstrated the importance of empathy in (non-digital) prosocial behavior (Batson et al. 1987; Davis 2015; Loewenstein and Small 2007; Small and Cryder 2016). Moreover, our results lend support to the idea that autonomy is particularly important for prosocial behavior (H3), which is consistent with prior research—for instance, with the results from Weinstein and Ryan (2010), Gagné (2003), and Pavey et al. (2012). We can conclude that participants want not only to feel empathy for a beneficiary, but also to choose and decide freely in favor of a specific beneficiary.
Third, our experimental results support the proposition that empathy (H2) and autonomy (H4) lead to higher algorithm aversion. We thereby contribute to the burgeoning research stream on the antecedents for algorithm aversion (Burton et al. 2020; Jussupow et al. 2020). In particular, we find that when empathy is seen as an important capability for performing a task, humans as advisors have clear advantages over computers because feeling empathy is a human nature attribute. This finding is obviously related to existing causes of algorithm aversion, such as cognitive incompatibility and divergent rationalities between computers and humans. Furthermore, we find support for the argument that interacting with a human instead of a computer might help users control the process and balance their own autonomy and the autonomy given to the other agent (human or computer).
Fourth, our results provide evidence that algorithm aversion has direct implications for the preferred type of support system. More concretely, a higher algorithm aversion generates the desire to have more human-like decision support, which builds on and connects to the work of Castelo et al. (2019), who already showed that human-likeness could enhance the use of algorithms in more subjective tasks, and to the work of Seeger et al. (2021), who discussed human-like versus computer-like tasks. The question of how to achieve human-like decision support remains highly relevant for the field.
Computerized agents might be seen as missing some human nature attributes, as argued earlier, such as the ability to experience (moral) authenticity (Bigman and Gray 2018; Jago 2019) or empathy. Research on human-computer interaction has already recognized this issue (Picard 2003) and points to potential ways to overcome those deficits, for example, the use of biosignals such as EEG (Song et al. 2020), eye-tracking (Bradley et al. 2008; Pfeiffer et al. 2020), or facial expressions (Li and Deng 2020), which allow the system to detect the user's feelings and take them into account in its suggestions. Another possibility is, as mentioned previously, the anthropomorphization of the decision support system, which is itself a new and growing research field. An anthropomorphized conversational agent could emulate human-to-human communication, for example by using natural language (Schuetzler et al. 2014). The use of natural language is only one of many ways of creating more human-like decision support (Gnewuch et al. 2017). The literature on anthropomorphized conversational agents suggests different cues (Seeger et al. 2021), such as human identity cues (e.g., visual representation (Qiu and Benbasat 2009)), verbal cues (e.g., emotional expressions (de Visser et al. 2016)), context-sensitive responses (Knijnenburg and Willemsen 2016), and non-verbal cues (e.g., emoticons and response delays (Gnewuch et al. 2018)). Yet, even without going to the great lengths of simulating a complete human conversation, developing a deeper understanding of how self-humanization goals drive people to prefer human-like decision support systems is a fruitful starting point for designing and fine-tuning various sustainable decision support systems.
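To make this design space concrete, the following toy sketch shows how some of the cues named above (a verbal emotional expression, an emoticon, and a human-like response delay) could be layered onto a plain recommendation message. It is a hypothetical illustration under our own assumptions, not a validated agent design, and all names and thresholds are invented.

# Toy illustration of layering anthropomorphization cues onto a recommendation.
import time
from dataclasses import dataclass

@dataclass
class CueConfig:
    verbal_empathy: bool = True      # verbal cue: emotional expression
    emoticon: bool = True            # non-verbal cue: emoticon
    delay_per_char: float = 0.03     # non-verbal cue: typing/response delay in seconds

def humanize_reply(recommendation: str, borrower_name: str, cfg: CueConfig) -> str:
    parts = []
    if cfg.verbal_empathy:
        parts.append(f"I can see why {borrower_name}'s story matters to you.")
    parts.append(recommendation)
    reply = " ".join(parts)
    if cfg.emoticon:
        reply += " :)"
    # Simulate a human-like response delay proportional to message length (capped).
    time.sleep(min(3.0, cfg.delay_per_char * len(reply)))
    return reply

print(humanize_reply("You might consider supporting the bakery loan you viewed earlier.",
                     "Amina", CueConfig()))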
Finally, we show not only that users prefer human-like decision support more strongly in prosocial contexts than in for-profit contexts, but also that users value human-like decision support in both contexts (t-tests comparing the mean values against the scale average of 3.5 on the human-like decision support scale support this finding; prosocial mean = 4.40, SD = 1.42; t(241) = 9.90, p < 0.001; for-profit mean = 3.72, SD = 1.50; t(235) = 2.30, p = 0.01). As described at the very beginning of this paper, human-like decision support is on the rise, and the observation that these support systems are preferred in both types of microlending is thus quite revealing. We have to point out, though, that in both types of microlending, a human is the receiver of the loan. On platforms where humans are not "part" of the choice set from which people choose, for example when the alternatives are share-trading options, we would expect human-like attributes to be of less importance.
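The comparison reported in parentheses can be reproduced with one-sample t-tests against the scale value of 3.5. The sketch below uses simulated placeholder data with the reported means and standard deviations purely for illustration; it does not contain the original participant data.

# One-sample t-tests against the scale value of 3.5 (simulated placeholder data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
prosocial = rng.normal(4.40, 1.42, 242)   # placeholder for the 242 prosocial scores
for_profit = rng.normal(3.72, 1.50, 236)  # placeholder for the 236 for-profit scores

for label, scores in [("prosocial", prosocial), ("for-profit", for_profit)]:
    t, p = stats.ttest_1samp(scores, popmean=3.5)
    print(f"{label}: mean={scores.mean():.2f}, t({len(scores) - 1})={t:.2f}, p={p:.4f}")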

7 Contributions, Limitations, and Future Research

Our results have implications for both theory and practice. We contribute to theory by highlighting self-humanizing as an important theoretical lens for understanding contextual differences in algorithm aversion. Contexts differ in their affordances for self-humanization, and the two mechanisms outlined in our framework, the importance of empathy and the importance of autonomy, connect these contextual differences with users' degree of algorithm aversion and, ultimately, their preference for human-like decision support. To the best of our knowledge, autonomy and empathy have not been considered in parallel before in the field of digital platforms, although our results indicate that they should be considered when theorizing about user behavior on prosocial platforms. They complement other frequently studied factors, such as ease of use, perceived usefulness, and enjoyment (Dwivedi et al. 2015; Gefen and Straub 2000; Pavlou 2003), and future research is needed to investigate the interaction between these factors and empathy as well as autonomy.
Our research also has additional practical implications for designing sustainable decision support systems. At the current stage of technological development, it would be possible to create a conversational agent that is able to fulfill the user’s need for empathy and autonomy while lowering algorithm aversion through means like anthropomorphizing, the use of facial expressions, and emotion detection. It is even possible that such a system would not only help the user with a one-time usage of a platform, but also reinforce future prosocial actions (Penner 2002) and thereby increase the overall welfare in the world.
One limitation of the current research is that our experiment did not use actual users of microlending platforms but MTurkers as participants, although MTurk samples might be more representative than student samples (Chandler et al. 2014). Moreover, several comprehension checks were implemented to mitigate concerns about speedy and low-effort responses. Another limitation is that the usual caveats of using mediation models for cross-sectional data apply, and we encourage future research to replicate our results to confirm their robustness. An ecologically valid field experiment that moves beyond hypothetical questionnaire responses to consequential lending decisions would be particularly desirable.
Yet another limitation lies in our measurement scale for the importance of autonomy, which could be improved in terms of convergent validity. All items should be analyzed carefully, and a new, more extensive and reliable scale should be developed based on the definition of autonomy. Finally, our robustness checks suggest that the relationship between the importance of empathy and algorithm aversion could be more complicated. Future research should further explore the interplay between empathy, algorithm aversion, and its other antecedents proposed by prior research.
The theoretical framework of self-humanization might provide guidance not only on how to design a decision support system with attributes that are typical for a human (i.e., human-like), but also on other aspects of decision support, for example, the point in time when the support is provided. A decision support system that provides support right at the beginning of a decision process might decrease self-humanization because, by restricting user autonomy early on, it might not leave users room to fully feel like a human. In contrast, a system that steps in later in the process could give users the opportunity to fulfill their self-humanization needs first, for example, once they have had the chance to develop emotional responsiveness to the alternatives being decided upon, or to create a feeling of interpersonal warmth or agency, without being interrupted or undermined by a technical support system. We propose that future research should investigate the influence of the timing of decision support on self-humanization and its implications for the design of decision support systems.
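As a thought experiment, the timing idea could be operationalized by a simple rule that withholds recommendations until the user has engaged with several alternatives. The sketch below is a hypothetical illustration of such a rule, with an arbitrary threshold and invented names, rather than a tested design.

# Hypothetical sketch of a "step in later" rule: the support system stays silent
# until the user has viewed a minimum number of loan profiles, as a crude proxy
# for having had room to develop empathy and exercise autonomy first.
from dataclasses import dataclass, field

@dataclass
class LateSupportPolicy:
    min_profiles_viewed: int = 5              # illustrative threshold, not validated
    viewed: set = field(default_factory=set)

    def register_view(self, profile_id: str) -> None:
        self.viewed.add(profile_id)

    def may_offer_support(self) -> bool:
        return len(self.viewed) >= self.min_profiles_viewed

policy = LateSupportPolicy()
for pid in ["p1", "p2", "p3"]:
    policy.register_view(pid)
print(policy.may_offer_support())  # False: the agent keeps holding back its advice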
There is much research on decision support systems, such as recommender systems and interactive decision aids, but very little on the interplay between those systems and the decision context in which the user is acting. When, and which type of, decision support should we use? How does the decision context affect the relevance of self-humanization? In turn, how important are different factors to users, such as their empathy and autonomy, and how should decision support systems interact with users? As we believe this paper illustrates, much can be gained by bridging multiple research fields and by integrating insights from psychology about why and when people act prosocially and lend money into the research field of Information Systems. The very existence of algorithm aversion might suggest that many IT artifacts are developed with a focus not on the human but rather on instrumental objectives, such as economic goals. This focus, while often taken for granted and not explicitly acknowledged, can lead to dehumanization (Moore and Piwek 2017). By introducing self-humanization as a theoretical framework, our paper highlights the importance of humanistic values in Information Systems research and facilitates their integration. We thus contribute to the recent call for a stronger sociotechnical perspective in Information Systems (e.g., Sarker et al. 2019).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Appendices

Supplementary Information

Below is the link to the electronic supplementary material.
Footnotes
1
For practical examples, see the AI Companion from Luka (https://replika.ai/) or Kuki AI from Pandora (https://www.kuki.ai/).
 
2
All calculations were performed with the software STATA/SE 16.1.
 
References
Adadi A, Berrada M (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access 6:52138–52160
Allison TH, McKenny AF, Short JC (2013) The effect of entrepreneurial rhetoric on microlending investment: an examination of the warm-glow effect. J Bus Ventur 28:690–707
Allison T, Davis B, Short J, Webb J (2015) Crowdfunding in a prosocial microlending environment: examining the role of intrinsic versus extrinsic cues. Entrep Theory Pract 39:53–73
André Q, Carmon Z, Wertenbroch K, Crum A, Frank D, Goldstein W, Huber J, van Boven L, Weber B, Yang H (2018) Consumer choice and autonomy in the age of artificial intelligence and big data. Cust Need Solut 5:28–37
Andreoni J (1990) Impure altruism and donations to public goods: a theory of warm-glow giving. Econ Theory 100:464
Andreoni J, Rao JM, Trachtman H (2017) Avoiding the ask: a field experiment on altruism, empathy, and charitable giving. J Polit Econ 125:625–653
Bain PG, Kashima Y, Haslam N (2006) Conceptual beliefs about human values and their implications: human nature beliefs predict value importance, value trade-offs, and responses to value-laden rhetoric. J Pers Soc Psychol 91:351–367
Barasch A, Levine EE, Berman JZ, Small DA (2014) Selfish or selfless? On the signal value of emotion in altruistic behavior. J Pers Soc Psychol 107:393–413
Barredo Arrieta A, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A, Garcia S, Gil-Lopez S, Molina D, Benjamins R, Chatila R, Herrera F (2020) Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf Fusion 58:82–115
Batson CD (1990) How social an animal? The human capacity for caring. Am Psychol 45:336
Batson CD (2014) The altruism question: toward a social-psychological answer. Psychology Press, New York
Batson CD, Fultz J, Schoenrade PA (1987) Distress and empathy: two qualitatively distinct vicarious emotions with different motivational consequences. J Pers 55:19–39
Bentley PJ, Corne DW (2002) An introduction to creative evolutionary systems. Creative Evo Syst 1–75
Berman JZ, Barasch A, Levine EE, Small DA (2018) Impediments to effective altruism: the role of subjective preferences in charitable giving. Psychol Sci 29:834–844
Bigman YE, Gray K (2018) People are averse to machines making moral decisions. Cognition 181:21–34
Bonaccio S, Dalal RS (2006) Advice taking and decision-making: an integrative literature review, and implications for the organizational sciences. Organ Behav Hum Decis Process 101:127–151
Bradley MM, Miccoli L, Escrig MA, Lang PJ (2008) The pupil as a measure of emotional arousal and autonomic activation. Psychophysiology 45:602–607
Bruton GD, Khavul S, Chavez H (2011) Microlending in emerging economies: building a new line of inquiry from the ground up. J Int Bus Stud 42:718–739
Burton JW, Stein M-K, Jensen TB (2020) A systematic review of algorithm aversion in augmented decision making. J Behav Decis Mak 33:220–239
Cain DM, Dana J, Newman GE (2014) Giving versus giving in. Acad Manag Ann 8:505–533
Calvo RA, Peters D, Vold K, Ryan RM (2020) Supporting human autonomy in AI systems: a framework for ethical enquiry. In: Ethics of digital well-being. Springer, Cham, pp 31–54
Castelo N, Bos MW, Lehmann DR (2019) Task-dependent algorithm aversion. J Mark Res 56:809–825
Caviola L, Schubert S, Nemirow J (2020) The many obstacles to effective giving. Judgm Decis Mak 15:159
Chandler J, Mueller P, Paolacci G (2014) Nonnaïveté among Amazon Mechanical Turk workers: consequences and solutions for behavioral researchers. Behav Res Methods 46:112–130
Christman J (2020) Autonomy in moral and political philosophy. In: The Stanford encyclopedia of philosophy. Metaphysics Research Lab, Stanford University
Colton S, Wiggins GA et al (2012) Computational creativity: the final frontier? In: Proceedings of the 20th European conference on artificial intelligence, Montpellier, pp 21–26
Cuff BM, Brown SJ, Taylor L, Howat DJ (2016) Empathy: a review of the concept. Emot Rev 8:144–153
Dana J, Weber RA, Kuang JX (2007) Exploiting moral wiggle room: experiments demonstrating an illusory preference for fairness. Econ Theory 33:67–80
Davis MH (1980) A multidimensional approach to individual differences in empathy. American Psychological Association, Washington, DC
Davis MH (1983) Measuring individual differences in empathy: evidence for a multidimensional approach. J Pers Soc Psychol 44:113
Davis MH (2015) Empathy and prosocial behavior. In: Schroeder DA (ed) The Oxford handbook of prosocial behavior. Oxford University Press, Oxford
Dawes RM (1979) The robust beauty of improper linear models in decision making. Am Psychol 34:571–582
de Visser EJ, Monfort SS, McKendrick R, Smith MAB, McKnight PE, Krueger F, Parasuraman R (2016) Almost human: anthropomorphism increases trust resilience in cognitive agents. J Exp Psychol 22:331–349
Decety J, Cowell JM (2014) The complex relation between morality and empathy. Trends Cogn Sci 18:337–339
Deci EL, Ryan RM (1985) The general causality orientations scale: self-determination in personality. J Res Pers 19:109–134
Deci EL, Ryan RM (2000) The "what" and "why" of goal pursuits: human needs and the self-determination of behavior. Psychol Inq 11:227–268
Dickert S, Sagara N, Slovic P (2011) Affective motivations to help others: a two-stage model of donation decisions. J Behav Decis Mak 24:361–376
Dietvorst BJ, Simmons JP, Massey C (2015) Algorithm aversion: people erroneously avoid algorithms after seeing them err. J Exp Psychol Gen 144:114–126
Dietvorst BJ, Simmons JP, Massey C (2018) Overcoming algorithm aversion: people will use imperfect algorithms if they can (even slightly) modify them. Manag Sci 64:1155–1170
Dunn EW, Aknin LB, Norton MI (2014) Prosocial spending and happiness. Curr Dir Psychol Sci 23:41–47
Dwivedi YK, Wastell D, Laumer S, Henriksen HZ, Myers MD, Bunker D, Elbanna A, Ravishankar MN, Srivastava SC (2015) Research on information systems failures and successes: status update and future directions. Inf Syst Front 17:143–157
Eisenberg N, Miller PA (1987) The relation of empathy to prosocial and related behaviors. Psychol Bull 101:91–119
Epley N, Waytz A, Cacioppo JT (2007) On seeing human: a three-factor theory of anthropomorphism. Psychol Rev 114:864–886
Fritz MS, MacKinnon DP (2007) Required sample size to detect the mediated effect. Psychol Sci 18:233–239
Gagné M (2003) The role of autonomy support and autonomy orientation in prosocial behavior engagement. Motiv Emot 27:199–223
Galak J, Small D, Stephen AT (2011) Microfinance decision making: a field study of prosocial lending. J Mark Res 48:130–137
Gefen D, Straub D (2000) The relative importance of perceived ease of use in IS adoption: a study of e-commerce adoption. J Assoc Inf Syst 1(1):1–30
Gnewuch U, Morana S, Maedche A (2017) Towards designing cooperative and social conversational agents for customer service. In: Proceedings of the international conference on information systems, Seoul
Gnewuch U, Morana S, Adam M, Maedche A (2018) Faster is not always better: understanding the effect of dynamic response delays in human-chatbot interaction. In: Proceedings of the European conference on information systems, Portsmouth
Goodman JK, Cryder CE, Cheema A (2013) Data collection in a flat world: the strengths and weaknesses of Mechanical Turk samples. J Behav Decis Mak 26:213–224
Gordon C, Leuski A, Benn G, Klassen E, Fast E, Liewer M, Hartholt A, Traum DR (2019) PRIMER: an emotionally aware virtual agent. In: IUI workshops, Los Angeles
Gray K, Schein C, Cameron CD (2017) How to think about emotion and morality: circles, not arrows. Curr Opin Psychol 17:41–46
Grove WM, Zald DH, Lebow BS, Snitz BE, Nelson C (2000) Clinical versus mechanical prediction: a meta-analysis. Psychol Assess 12:19–30
Haas P, Blohm I, Leimeister JM (2014) An empirical taxonomy of crowdfunding intermediaries. In: Proceedings of the international conference on information systems – building a better world through information systems. AIS Electronic Library (AISeL)
Hafenbrädl S, Waeger D, Marewski JN, Gigerenzer G (2016) Applied decision making with fast-and-frugal heuristics. J Appl Res Mem Cogn 5:215–231
Haidt J (2001) The emotional dog and its rational tail: a social intuitionist approach to moral judgment. Psychol Rev 108:814–834
Halpern J, Weinstein HM (2004) Rehumanizing the other: empathy and reconciliation. Hum Rights Q 26:561–583
Hamilton DL, Sherman SJ (1996) Perceiving persons and groups. Psychol Rev 103:336–355
Haslam N (2006) Dehumanization: an integrative review. Pers Soc Psychol Rev 10:252–264
Haslam N, Rothschild L, Ernst D (2000) Essentialist beliefs about social categories. Br J Soc Psychol 39(Pt 1):113–127
Haslam N, Bastian B, Bissett M (2004) Essentialist beliefs about personality and their implications. Pers Soc Psychol Bull 30:1661–1673
Haslam N, Bain P, Douge L, Lee M, Bastian B (2005) More human than you: attributing humanness to self and others. J Pers Soc Psychol 89:937–950
Haslam N, Kashima Y, Loughnan S, Shi J, Suitner C (2008) Subhuman, inhuman, and superhuman: contrasting humans with nonhumans in three cultures. Soc Cogn 26:248–258
Häubl G, Trifts V (2000) Consumer decision making in online shopping environments: the effects of interactive decision aids. Mark Sci 19:4–21
Henseler J, Ringle CM, Sarstedt M (2015) A new criterion for assessing discriminant validity in variance-based structural equation modeling. J Acad Mark Sci 43:115–135
Herzenstein M, Sonenshein S, Dholakia UM (2011) Tell me a good story and I may lend you money: the role of narratives in peer-to-peer lending decisions. J Mark Res 48:138–149
Hoffrage U, Hafenbrädl S, Marewski JN (2018) The fast-and-frugal heuristics program. In: The Routledge international handbook of thinking and reasoning. Routledge, New York, pp 325–345
Huang J-W, Lin C-P (2011) To stick or not to stick: the social response theory in the development of continuance intention from organizational cross-level perspective. Comput Hum Behav 27:1963–1973
Inbar Y, Cone J, Gilovich T (2010) People's intuitions about intuitive insight and intuitive choice. J Pers Soc Psychol 99:232–247
Jago AS (2019) Algorithms and authenticity. Acad Manag Discov 5:38–56
Janiesch C, Fischer M, Winkelmann A, Nentwich V (2019) Specifying autonomy in the internet of things: the autonomy model and notation. Inf Syst E Bus Manag 17:159–194
Joinson AN (2001) Self-disclosure in computer-mediated communication: the role of self-awareness and visual anonymity. Eur J Soc Psychol 31:177–192
Jung D, Dorner V, Glaser F, Morana S (2018) Robo-advisory. Bus Inf Syst Eng 60:81–86
Kahn PH, Ishiguro H, Friedman B, Kanda T (2006) What is a human? Toward psychological benchmarks in the field of human-robot interaction. In: 15th IEEE international symposium on robot and human interactive communication. IEEE, pp 364–371
Knijnenburg BP, Willemsen MC (2016) Inferring capabilities of intelligent agents from their external traits. ACM Trans Interact Intell Syst 6:1–25
Komiak B (2006) The effects of personalization and familiarity on trust and adoption of recommendation agents. MIS Q 30:941
Lankton N, McKnight DH, Tripp J (2015) Technology, humanness, and trust: rethinking trust in technology. J Assoc Inf Syst 16:880–918
Loewenstein G, Small DA (2007) The scarecrow and the tin man: the vicissitudes of human sympathy and caring. Rev Gen Psychol 11:112–126
Logg JM, Minson JA, Moore DA (2019) Algorithm appreciation: people prefer algorithmic to human judgment. Organ Behav Hum Decis Process 151:90–103
Longoni C, Bonezzi A, Morewedge CK (2019) Resistance to medical artificial intelligence. J Consum Res 46:629–650
Maedche A, Legner C, Benlian A, Berger B, Gimpel H, Hess T, Hinz O, Morana S, Söllner M (2019) AI-based digital assistants. Bus Inf Syst Eng 61:535–544
Martinez LMF, Zeelenberg M, Rijsman JB (2011) Behavioural consequences of regret and disappointment in social bargaining games. Cogn Emot 25:351–359
Meehl PE (1954) Clinical versus statistical prediction: a theoretical analysis and a review of the evidence. University of Minnesota Press, Minneapolis
Monroe AE, Brady GL, Malle BF (2017) This isn't the free will worth looking for. Soc Psychol Pers Sci 8:191–199
Moon Y (2000) Intimate exchanges: using computers to elicit self-disclosure from consumers. J Consum Res 26:323–339
Moore P, Piwek L (2017) Regulating wellbeing in the brave new quantified workplace. Empl Relat 39:308–316
Moss TW, Neubaum DO, Meyskens M (2015) The effect of virtuous and entrepreneurial orientations on microfinance lending and repayment: a signaling theory perspective. Entrep Theory Pract 39:27–52
Munir S, Stankovic JA, Liang C-JM, Lin S (2013) Cyber physical system challenges for human-in-the-loop control. In: 8th international workshop on feedback computing
Nahmias E, Shepard J, Reuter S (2014) It's OK if 'my brain made me do it': people's intuitions about free will and neuroscientific prediction. Cognition 133:502–516
Nass C, Steuer J, Tauber ER (1994) Computers are social actors. In: Adelson B (ed) Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York
Nass C, Moon Y (2000) Machines and mindlessness: social responses to computers. J Soc Issues 56:81–103
Norton MI, Mochon D, Ariely D (2012) The IKEA effect: when labor leads to love. J Consum Psychol 22:453–460
Önkal D, Goodwin P, Thomson M, Gönül S, Pollock A (2009) The relative influence of advice from human experts and statistical methods on forecast adjustments. J Behav Decis Mak 22:390–409
Palmeira M, Spassova G (2015) Consumer reactions to professionals who use decision aids. Eur J Mark 49:302–326
Parasuraman R, Sheridan TB, Wickens CD (2000) A model for types and levels of human interaction with automation. IEEE Trans Syst Man Cybern A 30:286–297
Pavlou PA (2003) Consumer acceptance of electronic commerce: integrating trust and risk with the technology acceptance model. Int J Electron Commer 7:101–134
Pavey L, Greitemeyer T, Sparks P (2012) "I help because I want to, not because you tell me to": empathy increases autonomously motivated helping. Pers Soc Psychol Bull 38:681–689
Penner LA (2002) Dispositional and organizational influences on sustained volunteerism: an interactionist perspective. J Soc Issues 58:447–467
Pfeiffer J, Pfeiffer T, Meißner M, Weiß E (2020) Eye-tracking-based classification of information search behavior using machine learning: evidence from experiments in physical shops and virtual reality shopping environments. Inf Syst Res 31:675–691
Pfeiffer J, Benbasat I, Rothlauf F (2014) Minimally restrictive decision support systems. In: Proceedings of the international conference on information systems, Auckland
Picard RW (2003) Affective computing: challenges. Int J Hum Comput Stud 59:55–64
Prahl A, van Swol L (2017) Understanding algorithm aversion: when is advice from automation discounted? J Forecast 36:691–702
Qiu L, Benbasat I (2009) Evaluating anthropomorphic product recommendation agents: a social relationship perspective to designing information systems. J Manag Inf Syst 25:145–182
Rader E, Cotter K, Cho J (2018) Explanations as mechanisms for supporting algorithmic transparency. In: Mandryk R, Hancock M (eds) Engage with CHI: proceedings of the 2018 CHI conference on human factors in computing systems, Montréal. ACM, New York
Ruttan RL, Lucas BJ (2018) Cogs in the machine: the prioritization of money and self-dehumanization. Organ Behav Hum Decis Process 149:47–58
Ryan RM, Connell JP (1989) Perceived locus of causality and internalization: examining reasons for acting in two domains. J Pers Soc Psychol 57:749–761
Sargeant A, Ford JB, West DC (2006) Perceptual determinants of nonprofit giving behavior. J Bus Res 59:155–165
Sarker S, Chatterjee S, Xiao X, Elbanna A (2019) The sociotechnical axis of cohesion for the IS discipline: its historical legacy and its continued relevance. MIS Q 43:695–719
Schuetzler RM, Grimes M, Giboney JS, Buckman J (2014) Facilitating natural conversational agent interactions: lessons from a deception experiment. In: Proceedings of the international conference on information systems, Auckland
Schwartz SH (1992) Universals in the content and structure of values: theoretical advances and empirical tests in 20 countries. In: Advances in experimental social psychology, vol 25. Elsevier
Schwartz S (2013) Value priorities and behavior: applying. In: The psychology of values: the Ontario symposium, vol 8
Shaw LL, Batson CD, Todd RM (1994) Empathy avoidance: forestalling feeling for another in order to escape the motivational consequences. J Pers Soc Psychol 67:879–887
Shin D, Park YJ (2019) Role of fairness, accountability, and transparency in algorithmic affordance. Comput Hum Behav 98:277–284
Sinha R, Swearingen K (2001) Comparing recommendations made by online systems and friends. In: Smeaton AF et al (eds) Proceedings of the 2nd DELOS network of excellence workshop on personalisation and recommender systems in digital libraries, Dublin
Slovic P, Finucane ML, Peters E, MacGregor DG (2006) The affect heuristic. In: Slovic P, Lichtenstein S (eds) The construction of preference. Cambridge University Press, Cambridge, pp 434–453
Small DA, Cryder C (2016) Prosocial consumer behavior. Curr Opin Psychol 10:107–111
Small DA, Loewenstein G, Slovic P (2007) Sympathy and callousness: the impact of deliberative thought on donations to identifiable and statistical victims. Organ Behav Hum Decis Process 102:143–153
Song T, Zheng W, Song P, Cui Z (2020) EEG emotion recognition using dynamical graph convolutional neural networks. IEEE Trans Affect Comput 11:532–541
Swangnetr M, Kaber DB (2013) Emotional state classification in patient–robot interaction using wavelet analysis and statistics-based feature selection. IEEE Trans Hum Mach Syst 43:63–75
Validi S, Bhattacharya A, Byrne PJ (2015) A solution method for a two-layer sustainable supply chain distribution model. Comput Oper Res 54:204–217
Weinstein N, Ryan RM (2010) When helping helps: autonomous motivation for prosocial behavior and its influence on well-being for the helper and recipient. J Pers Soc Psychol 98:222–244
Yadollahi A, Shahraki AG, Zaiane OR (2017) Current state of text sentiment analysis from opinion to emotion mining. ACM Comput Surv 50:1–33
Yeomans M, Shah A, Mullainathan S, Kleinberg J (2019) Making sense of recommendations. J Behav Decis Mak 32:403–414
Zhao X, Lynch JG, Chen Q (2010) Reconsidering Baron and Kenny: myths and truths about mediation analysis. J Consum Res 37:197–206
Zheng W-L, Lu B-L (2015) Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks. IEEE Trans Auton Ment Dev 7:162–175
