
Machines in the Middle: Using Artificial Intelligence (AI) While Offering Help Affects Warmth, Felt Obligations, and Reciprocity

  • Open Access
  • 31.10.2025
  • Original Paper


Abstract

This study examines the effects of using AI tools such as ChatGPT on workplace behaviors, focusing on how such use influences perceptions of warmth, felt obligations, and reciprocity. Through a series of experiments, the research shows that explicitly acknowledging the use of AI when providing help can lead recipients to view the helper as less warm and less competent. This reduction in perceived warmth in turn lowers felt obligations to reciprocate the help, affecting the broader dynamics of coworker interactions. The study also examines the mediating role of warmth in the relationship between AI-assisted help and felt obligations, underscoring the importance of interpersonal warmth in the workplace dynamics of the digital age. Furthermore, the study considers the broader implications of AI integration in the workplace, discussing both the efficiency gains and the potential relational costs. The findings suggest that while AI tools can increase individual efficiency, they may also erode the social glue that is critical for effective citizenship behavior in organizations. The study concludes by calling on managers and organizations to consider the social consequences of integrating AI into the workplace, emphasizing the need for a thoughtful approach that balances efficiency with the preservation of interpersonal relationships.

Extensive research over several decades has established that workplace helping contributes to both individual and collective outcomes (Bergeron, 2007; Bergeron et al., 2013; Detert et al., 2013; Farh et al., 1997; Podsakoff et al., 2000, 2009). Help involves giving assistance to another person on an organizationally relevant problem at work (e.g., orienting new employees, supporting a colleague with an assignment; Dalal & Sheng, 2019; Smith et al., 1983). Although most research illustrates benefits for both helpers and recipients, recently, researchers have examined the conditions under which help recipients react negatively to those who assisted them (Bolino et al., 2018; Bottom et al., 2006; Goldstein et al., 2011; Hughes et al., 2022; MacKenzie et al., 2018; Thompson & Bolino, 2018). For instance, help recipients may view helping behaviors as intrusive (Goldsmith & Fitch, 1997; Landis et al., 2022) or feel threatened by a resulting sense of inadequacy (Ashford et al., 2003; Chernyak-Hai et al., 2023; Harari et al., 2022; Lee et al., 2023; Tai et al., 2023). Still, the dominant narrative in the literature views workplace helping as positive, often driven by a strong sense of reciprocity as described in social exchange theories that characterize most human relationships.
The social exchange–based explanations for responses to workplace helping are based primarily on research involving human-to-human interactions. However, the evolving nature of contemporary work environments, particularly with the advent of artificial intelligence (AI), may be reshaping critical aspects of cooperation, including the nature of helping. Digital technology powered by artificial intelligence is increasingly available and capable of assisting individuals with their interpersonal behaviors and workplace tasks (Choi & Schwarcz, 2023; Choudhury et al., 2020; Tang et al., 2023). The capabilities of AI to produce human-like work have improved rapidly (Dell’Acqua et al., 2023), especially since the release of OpenAI’s ChatGPT, one of several large language models (LLMs) that are widely available for public use. As AI capabilities overlap more with those of humans, the integration of human work with AI poses new fundamental challenges and opportunities. These tools can generate solutions that help users achieve their personal and relational goals, such as helping, through computational processes (Hancock et al., 2020; Liu et al., 2022). However, delegating one’s actions to AI may run the risk of diluting the social fabric that theory articulates as fundamental to smooth organizational functioning (Organ, 1988). This prompts a simple question motivating our research: How do recipients respond to coworkers who use an AI tool when providing help?
We address this question by examining the effects of using an AI tool (specifically, an LLM) on recipient attributions and the reciprocal dynamics of helping. Specifically, we study whether someone reciprocates help to a coworker who, in the past, used an AI tool to provide initial assistance. Much as an e-gift card can seem colder than a handwritten birthday note, we predict that people will view such coworkers as cold (and in turn not reciprocate). We integrate insights from social exchange theory, social perception, and emerging perspectives on technology-mediated interactions to construct our framework and develop hypotheses. Across several studies—a behavioral experiment and a vignette study (as well as a replication)—we test and find patterns consistent with our predictions: using an AI tool during a helping event leads recipients to view the helper as less warm, and warmth mediates the relationship between AI tool use and felt obligations, or the internalized sense of duty or responsibility an individual feels to reciprocate or respond positively to the actions of another party. Individuals with greater perceptions of warmth feel more obligations to reciprocate help, and as a result, they are more likely to help when asked. While our main predictions focus on warmth, we also assess competence (as a research question), in alignment with the dual-dimension framework of person perception theory.
Our research makes several theoretical and practical contributions. First, we expand the current research on helping to consider how AI-augmented actions influence recipient perceptions and behaviors. We accomplish this by drawing from and extending theory on technology-mediated relational maintenance (Duck, 1985; Taylor et al., 2022; Tong & Walther, 2011). Distinguishing between help offered with or without the use of an AI tool is critical because it may tilt recipient perceptions of warmth, as suggested by our findings. This point connects to a larger conversation on the execution gains versus relational costs of integrating AI into the workplace (Tang et al., 2023), where downstream effects on coworker cooperation may be more costly than recognized. Second, we integrate social exchange theory (Blau, 1964; Emerson, 1976) with theories of technology-mediated collaboration to suggest that the medium through which help is provided (Connelly et al., 2011; Daft & Lengel, 1986) can influence the social obligation to reciprocate. This adaptation of social exchange theory to include technology-mediated interactions reflects the evolving nature of workplace relationships in the digital era. It underscores the importance of understanding how technological tools, despite their efficiency, can impact the fundamental social mechanisms that drive cooperative and reciprocal behaviors in organizations. Our research enriches social exchange theory, traditionally centered on human-to-human interactions, by introducing technology, specifically AI, as a pivotal element in workplace social exchanges. By identifying warmth as a mediator in the relationship between received help and the obligation to reciprocate, the study builds on social perception theory (Fiske et al., 2002), emphasizing the role of interpersonal warmth in the digital age and its effect on workplace dynamics. Finally, we contribute to the literature on human–computer interaction by looking beyond how people respond to AI itself. While past work has focused on algorithm aversion, trust in AI, and technology acceptance (e.g., Dietvorst & Bharti, 2020; Logg et al., 2019), our study explores a unique context in which AI does not replace a person, but instead assists one individual in helping another. By doing so, we extend social psychological theories of reciprocity and warmth into emerging domains where AI serves as a behind-the-scenes partner in workplace relationships, revealing how even indirect uses of AI may have relational consequences.
To help frame what follows, in this work, we concentrate on interpersonal helping behaviors, widely regarded as the prototypical form of organizational citizenship behavior (OCB; Organ et al., 2006). Our focus is on these interpersonal OCBs (helping behaviors directed toward individuals: OCB-I) rather than the organization itself, such as attending optional events that enhance the organization’s image (Lee & Allen, 2002; McNeely & Meglino, 1994), because the context of our model is coworker–coworker exchanges. Helping is recognized as a fundamental form of OCB, representing affiliative OCBs that encompass both task and person-oriented help (e.g., courtesy toward newcomers by welcoming them into the group, taking on additional responsibilities to assist a colleague; Settoon & Mossholder, 2002). Moreover, our studies focus on explicit AI use, in which the helper clearly acknowledges using an AI tool (e.g., ChatGPT). Although generative AI tools can sometimes be difficult to detect without disclosure, our goal was to isolate the interpersonal consequences of known AI use—where recipients are explicitly informed that AI was involved. Thus, our findings are most relevant to workplace contexts in which AI use is transparent and disclosed, rather than hidden or passively inferred.

Theory and Hypothesis Development

Workplace Helping and Felt Obligations in Social Exchange

A central proposition of social exchange theory is that interdependent and contingent interpersonal interactions generate obligations toward others (Emerson, 1976). In work settings, employees establish unique exchange relationships with coworkers (such as subordinates, colleagues, and superiors), and individuals feel obligated to return the benefits they receive from others (Cropanzano & Mitchell, 2005). This compulsion is created by the norm of reciprocity, which encompasses the expected or “rule-based” behavior of reciprocating toward other social participants (Blau, 1964; Gouldner, 1960). The compulsion to reciprocate (i.e., “repay”) emerges when individuals, having internalized the norm of reciprocity as a moral standard, respond to the obligation created by previously received benefits (Korsgaard et al., 2010; Perugini et al., 2003). The concept of felt obligations has been frequently cited as a primary factor connecting received help to enacted or reciprocated help (Ehrhart, 2018; Lester et al., 2008; Organ, 1990). Eisenberger et al. (2001) applied social exchange theory in explaining employee–employer dynamics, suggesting that an employee’s felt obligation to their organization mediates the positive link between perceived organizational support and employee outcomes (like affective commitment and in-role performance). Following Eisenberger’s research, numerous studies employing social exchange theory concentrated on felt obligation as a key variable in the social exchange process between leaders and followers (Duan et al., 2022; Garba et al., 2018; Pan et al., 2012). More recently, studies have examined felt obligations between employees and their coworkers (Dierdorff & Rubin, 2022). As stated in Dierdorff and Rubin (2022), “the theoretical supposition that the norm of reciprocity is a causal mechanism underlying OCB is nearly axiomatic in extant scholarship” (p. 263).

Reciprocity in AI Contexts

How might using AI affect the reciprocity that typically develops between humans? To reason about its likely impact on recipients’ reactions, we integrate social exchange with theories of technology-mediated interactions in relational building (Duck, 1985; Taylor et al., 2022; Tong & Walther, 2011). These theories indicate that people monitor and evaluate discretionary investments from others in technology-mediated interpersonal contexts to discern whether someone’s behavior is directed at providing a fair and equitable relationship, even if the individuals have little experience with one another (Sigman, 1991). Moreover, investments in technology-mediated interaction contexts include one’s choice of medium as well as communication amount and frequency (Sosik & Bazarova, 2014). A common finding is that individuals using “lightweight” tools that minimize the resource costs of engaging with others are perceived less positively compared to those using more intensive tools and investing more effort (Walther, 2011). This perception stems partly from the fact that technology-mediated interaction is not merely conveyed by the technology itself but can also be modified, augmented, or even created by computational agents to achieve one’s personal or relational goals (Hancock et al., 2020). For example, it takes less time to click the “like” button than to compose and post a comment on a friend’s photo. As such, “likes” are more lightweight as they require less effort to express a sentiment compared to messages and photo comments (Mansson & Myers, 2011). In this way, selection of one type of medium over another can be a “signal of the resources one is willing to commit” to the relationship or interaction (Donath, 2007, p. 238), which can impact perceptions of equity within the interpersonal setting (Tong & Walther, 2011). As another example, the act of writing a letter, which takes time and dedication, is seen as more meaningful than routine text messages (King & Forlizzi, 2007) and considered more personal and valuable than emails (Riche et al., 2010). The explanation of these and similar findings is that the effort and perceived sincerity of the message and selected medium shapes perceptions of “the engagement of people in the conversation and hence in the relationship” (Riche et al., 2010, p. 2709). Consequently, technologies that lower one’s cost in engaging in interpersonal activities may also dilute relational assessments important for generating felt obligations toward others (Liu et al., 2023). Although much of the extant research has focused on friendships or romantic partners, similar findings in workplace-relevant interactions indicate they likely apply to workplace relationships as well. For instance, Brodsky (2021) finds that negotiators’ choice of medium impacts their counterparts’ perceptions of their emotional authenticity, where choosing more lightweight communication channels led them to seem less authentic, demonstrating that coworkers may use others’ choice of medium to gauge investment and perceptions of social exchange.
We build on and extend this work on technology-mediated relational building and the distinction between lightweight and heavyweight technology in interpersonal interactions to more deeply explore the resulting interpersonal perceptions and implications for social exchange. With the growth in AI capabilities, such as the rapid growth of large language models like ChatGPT, there are increasing opportunities for individuals to delegate tasks to AI-based tools that were, until recently, solely within the province of human ability (Vanneste & Puranam, 2024). Now, employees can delegate many tasks to AI-based tools, requiring less time and cognitive load from the human user (Hancock et al., 2020). AI capabilities are rapidly expanding in many domains: Physicians can delegate some of their clinical decision making to AI systems (Sun et al., 2023); software developers can delegate their coding tasks to ChatGPT (Ahmad et al., 2023); employees can use AI to modify or even generate messages for them (Hancock et al., 2020); and executives can use LLMs to generate and edit apology letters to stakeholders and fans when products underperform (Bevan, 2023). In the context of help giving and receiving, a coworker offering help may off-load some of his/her contribution to AI tools, such as ChatGPT (Liu et al., 2023; Wei et al., 2023). As a result, help recipients may value such help differently, in part due to the altered perception of social exchange toward the helper.
Although extant work on technology-mediated relational building offers some explanation as to why some media choices are perceived differently than others, we know little about the specific perceptions that recipients develop and their implications for the felt obligations that are normally experienced in response to helping behavior. To further develop theory in this area, we draw from Fiske’s theory of social perception, or the stereotype content model, to theorize about the responses that AI-enabled helping behavior may elicit in recipients. Specifically, we propose that whether a coworker uses an AI tool when providing help may affect the recipient’s perceptions of his/her warmth.
Warmth perceptions reflect evaluations of an actor as nice, friendly, and pleasant, and they are a foundational component of person perception across cultures and situations (Eisenbruch & Krasnow, 2022; Fiske et al., 2002; Inagaki & Eisenberger, 2013; Wojciszke et al., 1998). Theories of person perception suggest that the level of warmth observers attribute to a particular actor is based on inferences about motive: someone thought to be benevolent is likely viewed as warm, particularly compared to someone perceived as hostile or competitive (Cuddy et al., 2008). Cuddy et al. (2011) posit that warmth is especially relevant in organizations because of the common forms of interdependence connecting coworkers. They argue that employees witness the actions of their coworkers and subsequently make judgments about whether the actor wishes them well or harm (e.g., Huang et al., 2017; Wellman et al., 2016).
Building on this theoretical foundation and integrating it with existing theories of technology-mediated relationships, we suggest that applying an AI tool in the context of a helping event is likely to undermine the foundations of social exchange. Specifically, we predict that individuals who delegate their actions to an AI tool when providing help should be viewed as less warm by recipients, because they will be seen as deploying a lightweight technology signaling lower investment and effort relative to individuals who do not use an AI tool. As previously discussed, costs are important in fair and equitable social exchanges. Because the AI-aided help sender incurs few costs when acting, the recipient may become less inclined to view the sender as “going out of their way” when providing help. For example, in a study by Liu et al. (2023), participants were asked to imagine texting with another individual about a variety of interpersonal contexts (e.g., experiencing burnout and needing support, having a conflict with a colleague and needing advice). When participants learned that their partner used AI to augment his/her text responses back to participants, they reported significantly lower relationship satisfaction. We predict that, in a helping context, using an AI tool transparently will affect recipient perceptions of warmth toward the helper.
Hypothesis 1
Individuals will perceive coworkers who offer help using an AI tool as significantly less warm relative to coworkers who offer help without using an AI tool.
We further expect warmth to predict felt obligations. Research on Fiske’s (2018) person perception model demonstrates a causal connection between warmth perceptions and subsequent emotions and intentions. Accumulated evidence supports the theory that in social situations, individuals assess whether an actor is high or low in warmth and then experience emotions, intentions, plans, and cognitive evaluations in light of that information (Cuddy et al., 2007; Wojciszke, 1994). Moreover, when people experience warmth from others, it often engenders positive emotions like gratitude and empathy (Lee et al., 2019; Sawyer et al., 2022). These emotions reinforce felt obligations as individuals are naturally inclined to act in a way that preserves and amplifies the positive emotional experiences.
We connect those findings to research demonstrating that felt obligations arise from a complex set of emotional and cognitive responses that individuals experience in response to perceived warmth and friendliness in social interactions, leading to an internalized sense of duty or responsibility to reciprocate positive behavior and maintain a positive social contract with others (Eisenberger et al., 2001). In connecting these two bodies of work, we theorize that help recipients perceiving high warmth in a helper are likely to experience greater felt obligations stemming from their natural inclination to uphold the positive social contract. Empirical evidence lends support to this prediction. For instance, warmth often leads to helping intentions (Simon et al., 2020), liking (Huang et al., 2017), desires to assist others (Levine & Schweitzer, 2015), obligations to purchase from sales targets (Wang et al., 2017), and when the target is a company, brand loyalty (Kervyn et al., 2022). We therefore hypothesize that perceptions of warmth (a) positively relate to felt obligations and (b) mediate the relationship between AI-assisted helping behavior and felt obligations.
Hypothesis 2
Warmth is positively related to felt obligations.
Hypothesis 3
Warmth mediates the relationship between AI tool use during received help and felt obligations, such that AI tool use will have a negative indirect effect on felt obligations through warmth.
Felt obligations (within the recipient), in turn, are expected to predict enacted help. Understanding why individuals reciprocate help is tied to the concept of reciprocal obligation. Interpersonal OCBs are fundamental in contributing to the workplace’s social environment and in fulfilling the social duties arising from interpersonal exchanges (Cropanzano & Mitchell, 2005; Organ, 1988). For example, research generally supports the notion that individuals offer more helping behaviors when they work in contexts of high-quality social exchanges—social exchanges either with their leaders (Eisenberger et al., 2001), the organization in general (Masterson et al., 2000), or after receiving assistance from coworkers (Deckop et al., 2003). Helping behaviors are an important way that individuals can fulfill obligations to reciprocate benefits received from others, especially if those benefits are in the form of assistance or support (Bowling et al., 2005). The felt pressure to reciprocate reflects a norm of reciprocity and stems from the dissonance that is created by the inequity of the social exchange (Adams, 1965). Individuals typically strive to resolve this dissonance by allocating resources to the relationship (Perugini et al., 2003) so as to avoid debt in social exchanges (Blau, 1964). Along these lines, research has found that when individuals feel that they are receiving more than reciprocating, they tend to increase cooperation (Clark & Sefton, 2001) and enact more helping behaviors (Halbesleben & Wheeler, 2011). The above theory and related research suggest that felt obligations should prompt individuals to enact help.
Hypothesis 4
Felt obligations are positively related to enacted help.
We based our main predictions on warmth, following Eisenbruch and Krasnow’s (2022) arguments that people react more strongly to warmth than competence in social and interpersonal contexts. However, we also consider competence as an exploratory part of the study. Competence perceptions reflect evaluations of an actor as capable, intelligent, and effective, and they represent the second foundational dimension of person perception across cultures and domains (Fiske et al., 2002; Wojciszke et al., 1998). Theories of person perception suggest that competence judgments are rooted in inferences about ability: observers assess whether the actor is skilled enough to achieve their goals or perform effectively in a given context (Cuddy et al., 2008). While warmth tends to dominate early impressions and emotional reactions, competence plays a critical role in shaping perceptions of status, influence, and task performance (Fiske et al., 2007; Judd et al., 2005). As employees observe the outcomes of others’ actions—such as whether a task is completed successfully and with sufficient quality—they infer levels of competence that shape how they evaluate coworkers and decide who deserves credit, deference, or future opportunities (Anderson & Kilduff, 2009). Relying on an AI tool to provide help may influence perceptions of competence. For example, some recipients may interpret AI use as a sign of efficiency and technological savvy—an indicator that the helper is skilled at using modern tools to solve problems. At other times, however, AI use may be a sign of limited ability, raising doubts about the helper’s underlying competence. Therefore, we also examine the following research question:
  • Research question: How does offering help using an AI tool affect recipient perceptions of competence?

Overview of Studies

We conducted three studies to investigate our predictions. Figure 1 displays our conceptual model. We designed Study 1 to assess our full theoretical model in a behavioral setting. We designed Study 3 to replicate Study 1 while controlling for alternative mediators (Study 3 is reported in Appendix 2). In Study 2, we used a vignette experiment to conceptually replicate Hypotheses 1, 2, and 4 while controlling for a host of relevant, technology-related predictors (Parasuraman & Colby, 2015), which were included due to their implications for attitudes toward technology (Cai et al., 2017). All studies received ethics approval under a University Institutional Review Board (#STUDY2022_00000336). Moreover, we designed these studies to ensure robustness across two well-documented forms of OCB-I: task versus person-oriented help (Settoon & Mossholder, 2002). Person-oriented help involves supportive but not required courtesy toward others, and it often takes the form of going out of one’s way to make another feel welcomed in the work group, listening and being accessible, or orienting new coworkers. In contrast, task-focused help involves the resolution of work-related problems, such as taking on additional responsibilities in order to help someone who is overburdened. A coworker could enact person OCB-I with or without using an AI tool, such as by offering a polite, encouraging, welcoming note to an incoming teammate (with or without ChatGPT). A coworker could also enact task OCB-I with or without using an AI tool, such as by taking care of extra work to relieve workload on the focal individual. Because this task/person distinction is well documented in the literature, we incorporate it into our investigation for robustness. Specifically, we explore whether help type (task versus person) moderates the effect of AI use on perceived warmth (Fig. 1).
Fig. 1
Theoretical model

Study 1

Sample

Participants were 371 individuals from the online research platform Prolific. They were required to work full or part time, have an approval rating of at least 98%, speak English as their primary language, and be citizens of the United States. The average age of participants was 37.7 years (SD = 11.9), and 48% were female. Most participants were White (71%) and held a college degree or higher (66%). We recruited 400 total participants, but 7 failed attention checks and 22 did not complete the study beyond the training sequence, yielding 371 as our final sample.

Experimental Setting, Procedure, and Conditions

We conducted a preregistered, online behavioral experiment to examine whether providing assistance using an AI tool changes perceptions, felt obligations, and subsequent enacted help. Participants were randomly assigned to one of four conditions that resulted in a 2 × 2 between-subjects design, with AI tool use (coworker did or did not use ChatGPT when offering help) and help type (person versus task) as factors. Our preregistration can be found at https://osf.io/ayudn?view_only=f00a76754d3c495b9f9d359e279fa76b.
Participants were introduced to an experimental task involving a scenario in which they were asked to work as a member of a caregiving team, offering assistance, support, and problem solving when necessary to a care recipient. When designing this task, we adhered to principles described in Colquitt (2008). A thorough description of the task, which is a work simulation in which participants offer care support to an online recipient, is provided in Appendix 1.
Participants were informed that they would be working as a member of a caregiving team watching over a care recipient named Turto. Members of the team worked in shifts so that one individual was with the care recipient 24 hours a day. Participants were told that they would be picking up where the last caregiver finished. After being introduced to their role and team context, they completed a training sequence to learn how to support the care recipient online. Specifically, they needed to manage Turto’s symptoms by feeding and washing him and providing medicine at appropriate intervals. They were provided an opportunity to practice all necessary actions.
After completing the training, participants were told that one of their experienced teammates, Caregiver J, may send them messages during their shift. They were instructed that they could choose to respond or not, with no implications for their compensation for participation. Participants viewed the first message by Caregiver J before beginning their shift, which included the manipulation of AI tool use and help type. Person-oriented help took the form of an encouraging welcome message, trying to make the participant, as a new member of the care team, feel comfortable, whereas task-oriented help involved Caregiver J reporting on a task they took care of—writing a detailed activity report—on the participants’ behalf. To manipulate whether Caregiver J used an AI tool during the help event, participants saw a message that included a side comment such as “I used ChatGPT!” in describing their help or else made no mention of technological tool use. Specifically, participants received one of the following messages from Caregiver J prior to starting their shift:
No AI Person Help: “Hello – just wanted to say welcome to the team! It’s fantastic to have you with us, and we’re excited to get to work with you. We’re here to support you as we’ve all been in your shoes. Welcome aboard and stoked to have you with us!”
AI-Assisted Person Help: “Hello – just wanted to say welcome to the team (I’m using ChatGPT to write this message)! It’s fantastic to have you with us, and we’re excited to get to work with you. We’re here to support you as we’ve all been in your shoes. Welcome aboard and stoked to have you with us! (p.s., ChatGPT is an artificial intelligence tool on the internet that helps write and edit things).”
AI-Assisted Task Help: “Turto’s family wanted an update/report, including details about his health and recent activity. Normally this report is something you would write during your shift, but I went ahead and took care of it by using ChatGPT. All set – you don’t need to worry about this task. (p.s., ChatGPT is an artificial intelligence tool on the internet that helps write and edit things).”
No AI Task Help: “Turto’s family wanted an update/report, including details about his health and recent activity. Normally this report is something you would write during your shift, but I went ahead and took care of it. All set – you don’t need to worry about this task.”
Then, participants were asked to rate their perceptions of Caregiver J’s warmth (Fiske et al., 2002) as well as their felt obligations toward their caregiving team (Eisenberger et al., 2001; Lee et al., 2023; Lorinkova & Perry, 2019). Subsequently, participants started their caregiving shift. Fifteen seconds into the shift, all participants received a message from Caregiver J asking for help—their response to this message was our behavioral measure of enacted help (see Christian et al., 2012 for a similar behavioral measure). In total, the shift was 7 min long, and participants were paid $12.00 per hour.

Measures

We used validated surveys from established literature to measure key constructs. Unless otherwise noted, response scales ranged from 1 = strongly disagree to 5 = strongly agree.

Warmth

Following the stem, “Caregiver J seems…,” participants responded to items from Fiske et al. (2002) to assess warmth (warm, kind, good-natured; \(\alpha\) = 0.95).

Competence

Following the stem, “Caregiver J seems…,” participants responded to items from Fiske et al. (2002) to assess competence (competent, capable, efficient; \(\alpha\) = 0.92).

Felt Obligations

Following Lorinkova and Perry (2019) and Lee et al. (2023), we measured felt obligations with four items from Eisenberger et al. (2001) by modifying the stems to refer to the caregiving context. Example items include “I feel a sense of obligation to participate in all aspects of what my caregiving team needs” and “I feel an obligation to do whatever I can to help my caregiving team to achieve its goals”; \(\alpha\) = 0.92.
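As a note on reproducibility, scale reliabilities like those reported here can be computed with the psych package in R; the following is a minimal sketch, assuming a data frame study1 with hypothetical item columns oblig1–oblig4 (these names are illustrative, not the study’s actual variables).

```r
library(psych)

# Hypothetical item columns for the felt-obligations scale; 'study1' and the
# oblig1-oblig4 names are illustrative placeholders
oblig_items <- study1[, c("oblig1", "oblig2", "oblig3", "oblig4")]
psych::alpha(oblig_items)  # Cronbach's alpha (0.92 reported above)
```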

Enacted Help

During their shift, participants received the following message from Caregiver J, “Hey! Could you help me out? I also work with the elderly (in addition to Turto) and am updating a cognition test, which you can view in [this questionnaire PDF]. Could you double check that it contains exactly 20 questions, each with one unique letter label (e.g., ‘Question A,’ ‘Question B,’ etc.)? You can use the send messages tab to let me know. Each question should also have exactly 8 response options.”
In other words, participants were asked to confirm whether the PDF contained 20 questions following the labeling scheme “Question A,” “Question B,” “Question C,” etc., each with 8 response options. As noted, participants were told during their training that they were not required to respond to messages during their shift (they would still receive full payment without doing so). The PDF contained several issues: it had 20 questions, but 2 consecutive items were both mistakenly labeled “Question I.” Moreover, Question Q only had seven (rather than eight) response options. The extent to which participants responded to this request via their send messages tab represented our behavioral measure of enacted help. Similar overt behavior measures have been used in the past to successfully measure helping (e.g., Christian et al., 2012). We created a 0–4 grading scheme/rubric for enacted help.1 Participants received a score of 0 if they did not respond or refused to help; 1 if they responded only with a broad, general confirmation such as “it’s good”; 2 if they caught 1 of the issues (2 Question I’s or only 7 response options for Question Q); 3 if they caught both issues; and 4 if they identified all issues and confirmed that the test contained 20 questions. Prototype examples from actual participants include (0) “Sorry, no,” (1) “Looks good,” (2) “Question Q is missing a response option,” (3) “There are two question I’s, question Q has 7 answers,” and (4) “There are twenty questions, but two are labeled as question I. Also question Q only has seven choices.” Using this rubric, three authors scored responses independently and then compared results. Scoring was reliable, as raters agreed on 99% of responses. For the few disagreements that did occur, the authors discussed their views before deciding on a final score.
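To make the agreement check concrete, here is a minimal R sketch; the data frame and rater columns are hypothetical stand-ins for the authors’ actual coding sheets.

```r
# Hypothetical sketch of the inter-rater agreement check on the 0-4 rubric.
# 'scores' stands in for the real coding sheet: one row per participant,
# one column of rubric scores per rater (names are illustrative).
scores <- data.frame(
  rater1 = c(0, 1, 2, 3, 4, 2),
  rater2 = c(0, 1, 2, 3, 4, 2),
  rater3 = c(0, 1, 2, 2, 4, 2)
)

# A response counts as full agreement only if all three raters match
agree <- with(scores, rater1 == rater2 & rater2 == rater3)
mean(agree)    # proportion of responses with full agreement (99% reported)
which(!agree)  # disagreements flagged for discussion before final scoring
```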

Controls

We controlled for age and gender (0 = female, 1 = male) due to possible differences in attitudes toward technology (Cai et al., 2017). We also controlled for task performance, or participants’ efficacy in carrying out their caregiving tasks. This score was operationalized as the total time between problematic symptoms arising in Turto and the moment ailments were cured via actions such as feeding or providing medicine. High scores indicated low performance—when participants failed to address Turto’s symptoms or were delayed in responding appropriately, scores on this variable increased. For robustness, the results reported below include controls. The results do not change significantly when controls are removed.
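As a sketch of how such a latency-based score could be computed, assuming a simple event log (all object and column names below are illustrative, not from the study materials):

```r
# Hypothetical sketch of the task-performance control: total time between a
# symptom arising and the curing action. 'events' is an assumed log with one
# row per symptom episode.
events <- data.frame(
  participant = c(1, 1, 2, 2),
  onset_s     = c(30, 120, 45, 200),  # symptom onset (seconds into shift)
  cured_s     = c(55, 190, 70, 420)   # symptom resolved (feeding, medicine, etc.)
)
events$latency_s <- events$cured_s - events$onset_s

# Higher totals indicate slower symptom management, i.e., lower performance
performance <- aggregate(latency_s ~ participant, data = events, FUN = sum)
performance
```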

Attention and Manipulation Checks

As an attention check, participants were provided with a color at the end of their shift (e.g., “blue”) and then asked to insert this color into a subsequent question on their post survey. Seven participants failed this attention check. As a manipulation check, participants were asked whether Caregiver J used ChatGPT when first messaging them (Yes/No). One hundred percent of the retained 371 participants answered this question correctly.

Results

Confirmatory Factor Analyses

To assess the factor structures of warmth, competence, and felt obligations, we conducted a series of confirmatory factor analyses. A 3-factor solution with warmth, competence, and felt obligations as 3 distinct factors demonstrated good fit (χ2 = 128.05, df = 32, comparative fit index (CFI) = 0.97, Tucker–Lewis index (TLI) = 0.96, root mean square error of approximation (RMSEA) = 0.09, standardized root mean square residual (SRMR) = 0.04), which was significantly better than 2-factor structures combining warmth and competence (χ2 = 516.11, df = 34, CFI = 0.87, TLI = 0.83, RMSEA = 0.20, SRMR = 0.07; Δχ2 = 388.06, p = 0.00) or warmth and felt obligations (χ2 = 1245.54, df = 34, CFI = 0.67, TLI = 0.56, RMSEA = 0.31, SRMR = 0.19; Δχ2 = 1117.50, p = 0.00) into a single dimension.
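For concreteness, a minimal lavaan sketch of this kind of model comparison is shown below; the item names (warm1–warm3, comp1–comp3, oblig1–oblig4) and the data frame study1 are assumed placeholders rather than the study’s actual variable names.

```r
library(lavaan)

# Three-factor CFA: warmth, competence, and felt obligations as distinct factors
three_factor <- '
  warmth     =~ warm1 + warm2 + warm3
  competence =~ comp1 + comp2 + comp3
  obligation =~ oblig1 + oblig2 + oblig3 + oblig4
'
# Two-factor alternative collapsing warmth and competence into one factor
two_factor <- '
  warm_comp  =~ warm1 + warm2 + warm3 + comp1 + comp2 + comp3
  obligation =~ oblig1 + oblig2 + oblig3 + oblig4
'

fit3 <- cfa(three_factor, data = study1)
fit2 <- cfa(two_factor, data = study1)

fitMeasures(fit3, c("chisq", "df", "cfi", "tli", "rmsea", "srmr"))
anova(fit2, fit3)  # chi-square difference test for the nested models
```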

Descriptive Statistics and Conditional Means

We present means and standard deviations of study variables in Table 1. Consistent with our theorizing, warmth was significantly associated with felt obligation (r = 0.42, p = 0.00) and felt obligation was significantly associated with enacted help (r = 0.12, p = 0.02). In addition, performance was unrelated to felt obligation (r = 0.02, p = 0.78) or enacted help (r =  − 0.06, p = 0.22). There were no significant differences in performance across the study conditions (F(3, 367) = 0.94, p = 0.42). We present conditional means and analysis of variance (ANOVA) contrasts in Table 2. Consistent with our theorizing, the effect of AI use on warmth was significant (F(1, 367) = 53.37, p = 0.00, \(\eta^2\) = 0.13), such that participants who received help from a coworker using ChatGPT saw their coworker as significantly less warm (M = 3.75, SD = 0.78) than did those receiving help from a coworker who did not use ChatGPT (M = 4.26, SD = 0.52; t(312.68) =  − 7.25, p < 0.01). In addition, there was no moderating effect of help type on the relationship between AI tool use and warmth. Regarding competence, there was a significant effect of AI use on competence (F(1, 367) = 21.66, p = 0.00, \(\eta^2\) = 0.06), such that participants who received help from a coworker using ChatGPT saw their coworker as significantly less competent (M = 4.00, SD = 0.70) than did those receiving help from a coworker who did not use ChatGPT (M = 4.31, SD = 0.57; t(347.46) =  − 4.54, p < 0.01). There was also a significant effect of help type on competence (F(1, 367) = 15.93, p = 0.00, \(\eta^2\) = 0.04), such that participants who received person help saw their coworker as significantly less competent (M = 4.03, SD = 0.61) than did those receiving task help (M = 4.29, SD = 0.55; t(369.00) =  − 3.85, p < 0.01).
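The condition contrasts above can be reproduced with standard R tools; a sketch under the same assumed variable names (study1, ai_use, help_type are placeholders):

```r
# Sketch of the condition contrasts, assuming 'study1' contains factor-coded
# ai_use and help_type plus the warmth composite
fit <- aov(warmth ~ ai_use * help_type, data = study1)
summary(fit)  # main effects and the AI use x help type interaction

# Welch two-sample t-test comparing warmth across AI-use conditions
# (Welch's correction explains the fractional df reported above)
t.test(warmth ~ ai_use, data = study1)
```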
Table 1
Means, standard deviations, and correlations among study 1 variables

| Variable | M (SD) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|---|
| 1. Age | 37.69 (11.95) |  |  |  |  |  |  |  |  |
| 2. Sex | 0.52 (0.50) | − 0.18* |  |  |  |  |  |  |  |
| 3. AI use |  | 0.01 | − 0.02 |  |  |  |  |  |  |
| 4. Help type |  | − 0.08 | 0.00 | − 0.01 |  |  |  |  |  |
| 5. Performance | 206.07 (113.47) | 0.15* | 0.00 | − 0.07 | 0.02 |  |  |  |  |
| 6. Warmth | 4.02 (0.71) | 0.10 | − 0.11* | − 0.36** | 0.02 | 0.06 |  |  |  |
| 7. Competence | 4.16 (0.66) | 0.10 | − 0.09 | − 0.23** | − 0.20** | 0.03 | 0.71** |  |  |
| 8. Felt obligation | 4.11 (0.72) | 0.08 | − 0.13* | − 0.08 | − 0.02 | 0.00 | 0.42** | 0.46** |  |
| 9. Enacted help | 0.95 (1.13) | − 0.18* | 0.08 | − 0.06 | 0.03 | − 0.04 | 0.09 | 0.12* | 0.12* |

Sex is coded as 0 = female and 1 = male. AI use is coded as 0 = did not use AI and 1 = used AI. Help type is coded as 0 = task and 1 = person
*p < 0.05; **p < 0.01
Table 2
Condition means and ANOVA contrasts on warmth and competence for study 1
https://static-content.springer.com/image/art%3A10.1007%2Fs10869-025-10068-x/MediaObjects/10869_2025_10068_Tab2_HTML.png

Hypothesis Testing: Main and Indirect Effects via Structural Equation Modeling

To test the full, hypothesized model including main effects, indirect effects, and controls, we specified a simultaneous path analysis using lavaan in R. Table 3 presents the unstandardized coefficient estimates for the proposed model (path model fit: χ2 = 155.57, df = 18, CFI = 1.00, TLI = 1.00, RMSEA = 0.01, SRMR = 0.01). Snijders and Bosker’s (1994) formulas were used to calculate pseudo-R2 for the effect sizes in predicting outcomes. Hypothesis 1 predicted that coworker AI tool use during a helping event would be negatively related to perceptions of coworker warmth. As Table 3 shows, coworker AI tool use was negatively related to coworker warmth (estimate =  − 0.51, p = 0.00), supporting Hypothesis 1. Hypothesis 2 predicted that warmth would be positively related to felt obligations. As Table 3 shows, warmth was positively related to felt obligations (estimate = 0.21, p = 0.00), supporting Hypothesis 2. Hypothesis 3 predicted that coworker AI-assisted help would be indirectly related to felt obligations via perceptions of coworker warmth. The indirect effect was tested via a Monte Carlo simulation procedure using R. This procedure was used to accurately reflect the asymmetric nature of the sampling distribution of an indirect effect (MacKinnon et al., 2004; Preacher et al., 2010). As shown in Table 3, with 10,000 Monte Carlo replications, we found that the indirect effect of coworker AI tool use on felt obligations via warmth was significant (indirect effect =  − 0.23; 95% confidence interval (CI) of [− 0.32, − 0.15]). This finding indicates that coworker AI use was negatively related to felt obligations via warmth perceptions, providing support for Hypothesis 3. Hypothesis 4 predicted that felt obligation would be positively related to enacted help. As Table 3 shows, felt obligation was positively related to enacted help2  (estimate = 0.23, p = 0.01), supporting Hypothesis 4. As shown in Table 3, we also found that the indirect effect of perceived coworker warmth on enacted help via felt obligation was significant (indirect effect = 0.10; 95% CI of [0.03, 0.18]). Finally, the serial mediation (with 10,000 Monte Carlo replications) effect of AI tool use on enacted help via our serial mediators was significant (− 0.023; 95% CI of [− 0.055, − 0.002]). We replicated these results in a sample of undergraduate students while controlling for alternative mediators, reported in Appendix 2 (study 3). We also replicated these results using different approaches to interpreting our dependent variable, described in detail in footnote 1 (Tables 4 and 5).3 Regarding our research question and competence, coworker AI tool use was negatively related to coworker competence (estimate =  − 0.31, p = 0.00), the indirect effect of coworker AI tool use on felt obligations via competence was significant (indirect effect =  − 0.16, 95% CI of [− 0.23, − 0.09]), and the indirect effect of perceived coworker competence on enacted help via felt obligation was significant (indirect effect = 0.11; 95% CI of [0.03, 0.20]).
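To make the estimation strategy concrete, below is a simplified lavaan sketch of the path model and a Monte Carlo confidence interval for one indirect effect. Variable names are assumed placeholders, and for brevity the draws treat the two path estimates as independent rather than sampling from their joint distribution.

```r
library(lavaan)

# Simplified path model with labeled paths for the warmth-based indirect effect
model <- '
  warmth     ~ a1*ai_use + help_type + age + sex + performance
  competence ~ a2*ai_use + help_type + age + sex + performance
  obligation ~ b1*warmth + b2*competence + ai_use + help_type + age + sex + performance
  help       ~ c1*obligation + age + sex + performance
  ind_warmth := a1*b1     # AI use -> warmth -> felt obligation
  serial     := a1*b1*c1  # serial mediation through to enacted help
'
fit <- sem(model, data = study1)
summary(fit)

# Monte Carlo CI for a1*b1 (MacKinnon et al., 2004): simulate the paths from
# their sampling distributions and take quantiles of the product
est <- coef(fit)[c("a1", "b1")]
se  <- sqrt(diag(vcov(fit)))[c("a1", "b1")]
draws <- rnorm(10000, est["a1"], se["a1"]) * rnorm(10000, est["b1"], se["b1"])
quantile(draws, c(0.025, 0.975))  # 95% Monte Carlo confidence interval
```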
Table 3
Simultaneous path-analysis results (study 1)

| | Warmth | Competence | Felt obligation | Enacted help |
|---|---|---|---|---|
| Age | 0.01 (0.01) [0.13] | 0.01 (0.01) [0.19] | 0.01 (0.01) [0.71] | − 0.02** (0.01) [0.00] |
| Sex | − 0.15* (0.07) [0.04] | − 0.11 (0.07) [0.10] | − 0.09 (0.07) [0.18] | 0.14 (0.12) [0.23] |
| Performance | 0.00 (0.00) [0.37] | 0.00 (0.00) [0.67] | 0.00 (0.00) [0.68] | 0.00 (0.00) [0.57] |
| AI use | − 0.51** (0.07) [0.00] | − 0.31** (0.07) [0.00] | 0.09 (0.07) [0.20] |  |
| Help type | 0.02 (0.07) [0.78] | − 0.26** (0.07) [0.00] | 0.06 (0.07) [0.42] |  |
| Warmth |  |  | 0.21** (0.05) [0.00] |  |
| Competence |  |  | 0.36** (0.05) [0.00] |  |
| Felt obligation |  |  |  | 0.23* (0.08) [0.01] |
| Pseudo-\(R^2\) | 0.18 | 0.13 | 0.23 | 0.05 |
| Indirect effects (coefficient, 95% CI) |  |  |  |  |
| AI use via warmth |  |  | − 0.23 [− 0.32, − 0.15] |  |
| AI use via competence |  |  | − 0.16 [− 0.23, − 0.09] |  |
| Warmth via felt obligation |  |  |  | 0.10 [0.03, 0.18] |
| Competence via felt obligation |  |  |  | 0.11 [0.03, 0.20] |
| AI use serial mediation |  |  |  | − 0.02 [− 0.06, − 0.00] |

Values in parentheses are standard errors; values in brackets are p values; entries are unstandardized coefficients. AI use is coded as 0 = did not use AI and 1 = used AI. Help type is coded as 0 = task and 1 = person
*p < 0.05; **p < 0.01
Table 4
Replacing enacted help with enacted help attempt in path analysis (study 1)

| | Warmth | Competence | Felt obligation | Enacted help (attempt) |
|---|---|---|---|---|
| Age | 0.01 (0.01) [0.08] | 0.01 (0.01) [0.15] | 0.01 (0.01) [0.71] | − 0.02** (0.01) [0.00] |
| Sex | − 0.15* (0.07) [0.04] | − 0.11 (0.07) [0.11] | − 0.09 (0.07) [0.18] | 0.24 (0.14) [0.07] |
| Performance | 0.00 (0.00) [0.37] | 0.00 (0.00) [0.67] | 0.00 (0.00) [0.67] | 0.00 (0.00) [0.74] |
| AI use | − 0.51** (0.07) [0.00] | − 0.31** (0.07) [0.00] | 0.09 (0.07) [0.20] |  |
| Help type | 0.02 (0.07) [0.78] | − 0.26** (0.07) [0.00] | 0.06 (0.07) [0.42] |  |
| Warmth |  |  | 0.21** (0.05) [0.00] |  |
| Competence |  |  | 0.36** (0.05) [0.00] |  |
| Felt obligation |  |  |  | 0.38** (0.10) [0.00] |
| Indirect effects (coefficient, 95% CI) |  |  |  |  |
| AI use via warmth |  |  | − 0.23 [− 0.32, − 0.15] |  |
| AI use via competence |  |  | − 0.16 [− 0.23, − 0.09] |  |
| Warmth via felt obligation |  |  |  | 0.05 [0.02, 0.09] |
| Competence via felt obligation |  |  |  | 0.06 [0.02, 0.10] |

Values in parentheses are standard errors; values in brackets are p values; entries are unstandardized coefficients. AI use is coded as 0 = did not use AI and 1 = used AI. Help type is coded as 0 = task and 1 = person
*p < 0.05; **p < 0.01
Table 5
Replacing enacted help with OCB intentions survey items in path analysis (study 1)

| | Warmth | Competence | Felt obligation | Help intentions |
|---|---|---|---|---|
| Age | 0.01 (0.01) [0.13] | 0.01 (0.01) [0.15] | 0.01 (0.01) [0.71] | 0.01 (0.01) [0.76] |
| Sex | − 0.15* (0.07) [0.04] | − 0.11 (0.07) [0.11] | − 0.09 (0.07) [0.18] | − 0.09 (0.06) [0.17] |
| Performance | 0.00 (0.00) [0.37] | 0.00 (0.00) [0.67] | 0.00 (0.00) [0.67] | 0.00 (0.00) [0.47] |
| AI use | − 0.51** (0.07) [0.00] | − 0.31** (0.07) [0.00] | 0.09 (0.07) [0.20] |  |
| Help type | 0.02 (0.07) [0.78] | − 0.26** (0.07) [0.00] | 0.06 (0.07) [0.42] |  |
| Warmth |  |  | 0.21** (0.05) [0.00] |  |
| Competence |  |  | 0.36** (0.05) [0.00] |  |
| Felt obligation |  |  |  | 0.36** (0.05) [0.00] |
| Indirect effects (coefficient, 95% CI) |  |  |  |  |
| AI use via warmth |  |  | − 0.23 [− 0.32, − 0.15] |  |
| AI use via competence |  |  | − 0.16 [− 0.23, − 0.09] |  |
| Warmth via felt obligation |  |  |  | 0.16 [0.09, 0.24] |
| Competence via felt obligation |  |  |  | 0.18 [0.11, 0.26] |

Values in parentheses are standard errors; values in brackets are p values; entries are unstandardized coefficients. AI use is coded as 0 = did not use AI and 1 = used AI. Help type is coded as 0 = task and 1 = person
*p < 0.05; **p < 0.01

Discussion

Study 1 provides initial evidence that using ChatGPT when offering help changes how helpers are perceived and how recipients respond. Specifically, helpers who explicitly acknowledged using ChatGPT to offer help were seen as less warm and less competent, which in turn reduced recipients’ felt obligations and their willingness to reciprocate. These findings support the idea that there may be relational costs to using ChatGPT during helping exchanges. However, prior research has demonstrated large differences in attitudes toward technology (Cai et al., 2017), which can meaningfully shape how people interpret and respond to AI use. To address this, Study 2 uses a vignette-based design that allows us to conceptually replicate key effects while controlling for individual differences in technology-related attitudes.

Study 2

Sample

We used Prolific to recruit 241 participants in the United States working full or part time, with an approval rating of at least 98%, whose primary language was English, and who had not participated in Study 1. Across participants, 46% were female, 81% White, 6% Asian, 5% Black, 4% Mixed, and 4% Other. The average age of participants was 41.47 years (SD = 12.40), and they were employed in a mixture of jobs including healthcare (14%), information technology (15%), legal services (15%), transportation (11%), education (9%), finance (8%) as well as media, government, security, and cosmetics. Finally, 57% held a bachelor’s degree or higher. We recruited 241 total participants, but 3 failed an attention check, yielding 238 as our final sample.

Procedure

Study 2 was a preregistered, 2 × 2 between-subjects vignette experiment, with AI tool use as the first factor (coworker did or did not use ChatGPT when providing help) and help type as the second factor (coworker provided task versus person-oriented citizenship; preregistration: https://osf.io/x9j6c/?view_only=28314603be3c465a976b7771a9ab8472). Participants were asked to imagine themselves as a member of a caregiving team providing 24-h care to a care recipient named Turto, with caregiving teammates working in shifts throughout the day. All participants were told, “You are arriving at your first shift and taking over for a teammate who just finished their shift – we will refer to this person as ‘your teammate’ when describing more details below.” After reading this background information, participants were randomly assigned a scenario in which we manipulated AI-assisted help and help type. All of the vignettes used language directly from scale items used in prior research (Dierdorff & Rubin, 2022; Settoon & Mossholder, 2002). Participants read one of the following vignettes.

No AI Person Help

Your teammate, the prior caregiver, went the extra mile and helped you, even though you just arrived and have not had the opportunity to offer prior favors or mutual support.
Your teammate, the prior caregiver…
  • Showed courtesy and concern toward you
  • Welcomed you to ensure you feel embraced in the work group
Specifically, the prior caregiver wrote you a message expressing polite encouragement and a welcoming invitation to the team. After composing and editing the message, he/she sent it to you.

AI-Assisted Person Help

Your teammate, the prior caregiver, went the extra mile and helped you, even though you just arrived and have not had the opportunity to offer prior favors or mutual support.
Your teammate, the prior caregiver…
  • Showed courtesy and concern toward you
  • Welcomed you to ensure you feel embraced in the work group
Specifically, the prior caregiver, by using an AI tool, wrote you a message expressing polite encouragement and a welcoming invitation to the team. After using an AI tool to generate and edit the message, he/she sent it to you. An AI tool in this context means a product called ChatGPT: an artificial intelligence tool that helps write and edit things.

AI-Assisted Task Help

Your teammate, the prior caregiver, went the extra mile and helped you, even though you just arrived and have not had the opportunity to offer prior favors or mutual support.
Your teammate, the prior caregiver…
  • Took on extra responsibility to help you
  • Assisted you with a task so you no longer have to do it
Specifically, the prior caregiver, by using an AI tool, wrote a message to Turto’s family members/owners, relaying information about Turto’s health and recent activity. After using an AI tool to generate and edit the message, he/she sent it to them. You no longer need to contact Turto’s family. An AI tool in this context means a product called ChatGPT: an artificial intelligence tool that helps write and edit things.

No AI Task Help

Your teammate, the prior caregiver, went the extra mile and helped you, even though you just arrived and have not had the opportunity to offer prior favors or mutual support.
Your teammate, the prior caregiver…
  • Took on extra responsibility to help you
  • Assisted you with a task so you no longer have to do it
Specifically, the prior caregiver wrote a message to Turto’s family members/owners, relaying information about Turto’s health and recent activity. After composing and editing the message, he/she sent it to them. You no longer need to contact Turto’s family.
After reading their vignette, participants responded to a variety of questions capturing key constructs. Unless otherwise noted, response scales ranged from 1 = strongly disagree to 5 = strongly agree.

Measures

Warmth and Competence

We used items from Fiske et al. (2002) to assess warmth (e.g., warm, kind, good-natured; \(\alpha\) = 0.93) and competence (e.g., competent, capable, efficient; \(\alpha\) = 0.90) following the stem, “My teammate, the prior caregiver, seems….”

Felt Obligations

Following Lorinkova and Perry (2019) and Lee et al. (2023), we measured felt obligations with four items from Eisenberger et al. (2001) by modifying the stems to refer to a hypothetical caregiving context, e.g., “Regarding my caregiving team in the scenario just described…,” “I would feel a sense of obligation to participate in all aspects of what my caregiving team needs,” and “I would feel an obligation to do whatever I can to help my caregiving team to achieve its goals”; \(\alpha\) = 0.87.

Technological Controls

Technological Self-Efficacy

Technology self-efficacy was assessed with four items from Saville and Foster (2021), e.g., “I feel confident in my ability to use social media to have meaningful interactions,” and “I feel confident in my ability to use technology for entertainment.” The reliability was \(\alpha\) = 0.74.

Technology Readiness Index

We used items from the updated technology readiness index (TRI 2.0), which measures “people’s propensity to embrace and use new technologies for accomplishing goals in home life and at work” (Parasuraman & Colby, 2015, p. 4), to assess four dimensions, each with four items:
  • Optimism—a positive view of technology and a belief that it offers people increased control, flexibility, and efficiency in their lives (e.g., “New technologies contribute to a better quality of life”); \(\alpha\) = 0.84
  • Innovativeness—a tendency to be a technology pioneer (e.g., “Other people come to me for advice on new technologies”); \(\alpha\) = 0.87
  • Discomfort—a perceived lack of control over technology and a feeling of being overwhelmed by it (e.g., “Sometimes, I think that technology systems are not designed for use by ordinary people”); \(\alpha\) = 0.81
  • Insecurity—distrust of technology, stemming from skepticism about its ability to work properly (e.g., “Technology lowers the quality of relationships by reducing personal interaction”); \(\alpha\) = 0.80

Perceived Usefulness of ChatGPT

We measured participants’ perceived usefulness of ChatGPT with Lah et al.’s (2020) perceived usability scale (e.g., “Using ChatGPT in my job would enable me to accomplish more tasks quickly,” “Using ChatGPT would improve my job performance”; \(\alpha\) = 0.98).

Additional Controls

Similar to study 1, we controlled for age and gender (0 = female, 1 = male) due to possible differences in attitudes toward technology (Cai et al., 2017). For robustness, the results reported below include these additional controls.

Attention and Manipulation Check

As an attention check, participants were asked to select “Always true” as a response option to a single item within the survey. Three participants answered this item incorrectly. Participants responded to the single item, “Did your teammate, the prior caregiver, use ChatGPT?” (Yes, No/I could not tell) as a manipulation check. One hundred percent of the retained participants answered this item correctly.

Confirmatory Factor Analyses

To assess the factor structures of warmth, competence, felt obligations, technological self-efficacy, technology readiness, and perceived usefulness of ChatGPT, we conducted a series of confirmatory factor analyses. A 9-factor model demonstrated better fit (χ2 = 883.72, df = 524, CFI = 0.94, TLI = 0.93, RMSEA = 0.05, SRMR = 0.06) than a 7-factor model which combined warmth and competence into a single factor (χ2 = 1110.49, df = 532, CFI = 0.90, TLI = 0.89, RMSEA = 0.07, SRMR = 0.07; Δχ2 = 226.77, p = 0.00).

Descriptive Statistics and Conditional Means

We present means and standard deviations of study variables in Table 6. Consistent with our theorizing, warmth was significantly associated with felt obligation (r = 0.50, p = 0.00). In addition, warmth was associated with a number of technology controls, including technological self-efficacy (r = 0.31, p = 0.00), tech readiness optimism (r = 0.29, p = 0.00), and tech readiness innovativeness (r = 0.21, p = 0.00). Competence was also positively related to felt obligation (r = 0.44, p = 0.00). We present conditional means and ANOVA contrasts in Table 7. Consistent with our theorizing, the effect of AI tool use on warmth was significant (F(1, 234) = 17.84, p = 0.00, \(\eta^2\) = 0.07), such that participants who read about receiving help from a coworker using ChatGPT saw their coworker as significantly less warm (M = 4.20, SD = 0.61) than did those receiving help from a coworker who did not use ChatGPT (M = 4.52, SD = 0.60; t(235.99) = −4.13, p < 0.01). Regarding competence, there was a significant effect of AI use on competence (F(1, 234) = 4.11, p = 0.04, \(\eta^2\) = 0.02), such that participants who read about receiving help from a coworker using ChatGPT saw their coworker as significantly less competent (M = 4.31, SD = 0.57) than did those receiving help from a coworker who did not use ChatGPT (M = 4.46, SD = 0.62; t(233.37) = −1.98, p = 0.049). There was also a significant effect of help type on competence (F(1, 234) = 12.00, p = 0.00, \(\eta^2\) = 0.05), such that participants who read about receiving person help saw their coworker as significantly less competent (M = 4.24, SD = 0.61) than did those receiving task help (M = 4.51, SD = 0.55; t(230.8) = −3.44, p < 0.01).
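For illustration, these contrasts can be computed in R along the following lines; study2 and the columns warmth, ai_use, and help_type are hypothetical placeholders, with conditions coded 0/1 as in Table 6. The fractional degrees of freedom reported above are characteristic of Welch's t-test, which is the default for t.test() in R.

```r
# Two-way ANOVA on warmth across the 2 x 2 experimental conditions
warmth_aov <- aov(warmth ~ ai_use * help_type, data = study2)
summary(warmth_aov)

# Eta-squared effect sizes for the ANOVA terms (effectsize package)
library(effectsize)
eta_squared(warmth_aov, partial = FALSE)

# Welch two-sample t-test contrasting AI use vs. no AI use on warmth
t.test(warmth ~ ai_use, data = study2)
```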
Table 6
Means, standard deviations, and correlations among study 2 variables

| Variable | M (SD) | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Age | 41.47 (12.40) | | | | | | | | | | | | | |
| 2. Sex | 0.54 (0.50) | −0.12 | | | | | | | | | | | | |
| 3. AI use | | −0.06 | 0.08 | | | | | | | | | | | |
| 4. Help type | | −0.07 | 0.02 | −0.01 | | | | | | | | | | |
| 5. Tech self-efficacy | 4.16 (0.58) | −0.08 | −0.11 | 0.17* | 0.05 | (0.74) | | | | | | | | |
| 6. Perceived usefulness of ChatGPT | 3.12 (1.14) | −0.12 | 0.07 | 0.17* | 0.06 | 0.16* | (0.98) | | | | | | | |
| 7. Tech ready (optimism) | 3.92 (0.64) | −0.04 | 0.03 | 0.21* | 0.11 | 0.46** | 0.44** | (0.84) | | | | | | |
| 8. Tech ready (innovative) | 3.48 (0.90) | −0.06 | 0.25* | 0.10 | −0.02 | 0.39* | 0.33* | 0.46** | (0.87) | | | | | |
| 9. Tech ready (discomfort) | 2.27 (0.82) | −0.01 | 0.09 | 0.00 | −0.03 | −0.32* | −0.05 | −0.31* | −0.16* | (0.81) | | | | |
| 10. Tech ready (insecurity) | 3.00 (0.92) | −0.01 | 0.01 | 0.02 | −0.15* | −0.20* | −0.20* | −0.46** | −0.21* | 0.33* | (0.80) | | | |
| 11. Warmth | 4.36 (0.62) | 0.04 | −0.07 | −0.26* | 0.06 | 0.31* | 0.04 | 0.29* | 0.21* | −0.18* | −0.09 | (0.93) | | |
| 12. Competence | 4.38 (0.60) | 0.08 | −0.03 | −0.13* | −0.22** | 0.28* | −0.00 | 0.29* | 0.18* | −0.12 | −0.13 | 0.64** | (0.90) | |
| 13. Felt obligation | 4.08 (0.66) | 0.20* | −0.04 | −0.16* | −0.02 | 0.30* | 0.05 | 0.30* | 0.24* | −0.15* | −0.11 | 0.50** | 0.44* | (0.87) |

Sex is coded 0 = female and 1 = male. AI use is coded 0 = did not use AI and 1 = used AI. Help type is coded 0 = task and 1 = person
*p < 0.05; **p < 0.01
Values in parentheses on the diagonal are scale reliabilities
Table 7
Condition means and ANOVA contrasts on warmth and competence for study 2
(Table available as an image: https://static-content.springer.com/image/art%3A10.1007%2Fs10869-025-10068-x/MediaObjects/10869_2025_10068_Tab7_HTML.png)
*p < 0.05; **p < 0.01

Hypothesis Testing: Main and Indirect Effects via Structural Equation Modeling

To test the full, hypothesized model including main effects, indirect effects, and controls, we specified a simultaneous path analysis using lavaan in R. Table 8 presents the unstandardized coefficient estimates for the proposed model (path model fit: χ2 = 159.25, df = 21, CFI = 0.92, TLI = 0.71, RMSEA = 0.09, SRMR = 0.03). Snijders and Bosker’s (1994) formulas were used to calculate pseudo-R2 effect sizes for the predicted outcomes. Hypothesis 1 predicted that coworker AI tool use during a helping event would be negatively related to perceptions of coworker warmth. As Table 8 shows, coworker AI-assisted help was negatively related to coworker warmth (estimate = −0.39, p = 0.00), supporting Hypothesis 1. Hypothesis 2 predicted that warmth would be positively related to felt obligations. As Table 8 shows, warmth was positively related to felt obligations (estimate = 0.38, p = 0.00), supporting Hypothesis 2. Hypothesis 3 predicted that coworker AI tool use would be indirectly related to felt obligations via perceptions of coworker warmth. The indirect effect was tested via a Monte Carlo simulation procedure in R, which accurately reflects the asymmetric nature of the sampling distribution of an indirect effect (MacKinnon et al., 2004; Preacher et al., 2010). As shown in Table 8, with 10,000 Monte Carlo replications, we found that the indirect effect of coworker AI tool use on felt obligations via warmth was significant (indirect effect = −0.20; 95% CI [−0.30, −0.11]). Regarding our research question and competence, coworker AI tool use was negatively related to coworker competence (estimate = −0.20, p = 0.01), competence was positively related to felt obligations (estimate = 0.21, p = 0.00), and the indirect effect of coworker AI use on felt obligations via competence was significant (indirect effect = −0.10; 95% CI [−0.18, −0.03]).
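A minimal sketch of how such a model can be specified in lavaan appears below, assuming hypothetical column names; the labels (a1, a2, b1, b2) define the indirect effects as products of path coefficients. One convenient way to obtain Monte Carlo confidence intervals for those defined parameters is semTools::monteCarloCI().

```r
library(lavaan)
library(semTools)

model <- '
  # Mediator equations (controls as listed in Table 8)
  warmth     ~ a1*ai_use + help_type + age + sex + tech_se + usefulness +
               optimism + innovative + discomfort + insecurity
  competence ~ a2*ai_use + help_type + age + sex + tech_se + usefulness +
               optimism + innovative + discomfort + insecurity

  # Outcome equation
  obligation ~ b1*warmth + b2*competence + ai_use + help_type + age + sex

  # Labeled indirect effects of AI use on felt obligations
  ind_warmth     := a1*b1
  ind_competence := a2*b2
'

fit <- sem(model, data = study2)
summary(fit, fit.measures = TRUE)

# Monte Carlo confidence intervals (10,000 replications) for the
# defined indirect effects
monteCarloCI(fit, nRep = 10000)
```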
Table 8
Simultaneous path-analysis results (study 2)

| Predictor | Warmth | Competence | Felt obligation |
|---|---|---|---|
| Age | 0.00 (0.00) [0.57] | 0.01 (0.01) [0.23] | 0.01** (0.00) [0.00] |
| Sex | −0.04 (0.07) [0.59] | 0.00 (0.07) [0.89] | 0.01 (0.07) [0.87] |
| Technological self-efficacy | 0.15* (0.07) [0.05] | 0.16* (0.07) [0.02] | |
| Perceived usefulness of ChatGPT | −0.04 (0.03) [0.21] | −0.07* (0.03) [0.04] | |
| Tech ready (optimism) | 0.32** (0.08) [0.00] | 0.32** (0.07) [0.00] | |
| Tech ready (innovative) | 0.06 (0.05) [0.21] | 0.02 (0.05) [0.73] | |
| Tech ready (discomfort) | −0.05 (0.05) [0.34] | 0.03 (0.04) [0.56] | |
| Tech ready (insecurity) | 0.08 (0.05) [0.09] | −0.03 (0.04) [0.94] | |
| AI use | −0.39** (0.07) [0.00] | −0.20* (0.07) [0.01] | −0.05 (0.08) [0.52] |
| Help type | 0.05 (0.07) [0.49] | −0.29** (0.07) [0.00] | 0.01 (0.08) [0.76] |
| Warmth | | | 0.38** (0.06) [0.00] |
| Competence | | | 0.21** (0.06) [0.00] |
| Pseudo-\({R}^{2}\) | 0.35 | 0.36 | 0.29 |
| Indirect effects | | | |
| AI use via warmth (coefficient, 95% CI) | | | −0.20 [−0.30, −0.11] |
| AI use via competence (coefficient, 95% CI) | | | −0.10 [−0.18, −0.03] |

Values in parentheses are standard errors; values in brackets are p values; entries are unstandardized coefficients. AI use is coded as 0 = did not use AI and 1 = used AI. Help type is coded as 0 = person and 1 = task
*p < 0.05; **p < 0.01

Discussion

Study 2 conceptually replicated the main effects from Study 1 using a vignette design and extended them by accounting for individual differences in technology-related attitudes. Even after controlling for technological self-efficacy, technology readiness, and perceived usefulness of AI, AI-assisted help was still perceived as less warm and competent, leading to reduced felt obligations. These results strengthen the conclusion that AI use in helping contexts can dampen the relational signals that typically drive reciprocity, even among those generally favorable toward technology.

General Discussion

Helping directed at coworkers—a type of organizational citizenship behavior—has important consequences for individuals, teams, and collectives. Although employees who receive help from coworkers sometimes reciprocate because they experience a sense of felt obligation, recent research suggests that this phenomenon is not fully consistent with the traditional assumption that received help leads to enacted help. Helping is now viewed as a more nuanced process, with studies showing when and why receiving help backfires, or when recipients avoid helping those who helped them (Thompson & Bolino, 2018). For example, due to a variety of factors, including the type of received help (Harari et al., 2022; Lee et al., 2023) or the status of the helper (Lennard & Van Dyne, 2018; Nadler & Chernyak-Hai, 2014; Tai et al., 2023), recipients may avoid paying back received help because they feel inadequate or threatened. Although illuminating, help in these studies is assumed to function through a coworker’s efforts alone. With the growing popularity of AI tools such as ChatGPT, it is important for research to address cases where employees augment their work with AI, not only because these tools are becoming increasingly sophisticated but also because delegating one’s actions to an artificial system can dilute the relational perceptions necessary for well-functioning interpersonal interactions. Here, we proposed that the use of AI tools in providing help, specifically LLMs like ChatGPT, might lead to reduced perceptions of warmth and a subsequent decrease in the likelihood of help being reciprocated. Across several studies, we found support for our hypotheses. These results have both theoretical and practical implications.

Theoretical Implications

Our results contribute to the current research in several ways. First, our findings extend theories of technology-mediated relationship building and maintenance (Rusbult et al., 2012; Walther, 2011). In the context of technology-mediated relationships, digital tools can be used to enhance interpersonal communication efficiency, enabling individuals to participate in innovative and lightweight relationship activities (Maruping & Agarwal, 2004). In their review, Tong and Walther (2011) acknowledged the positive impact of technology but also raised concerns about potential drawbacks. Citing equity theory (Hatfield et al., 1978), they suggested that recipients might perceive low-cost actions as less genuine in a relational context and called for future research to examine the effects of technology “from a relational transaction costs perspective” (Tong & Walther, 2011, p. 112). Our study explored the consequences of receiving help from an AI-supported coworker. In our experiment, we varied whether coworkers used ChatGPT to complete their helping behaviors while keeping the message properties (near) constant. Despite recipients observing the same actions from Coworker J, the variation in the processes behind those actions—whether involving ChatGPT or not—influenced how recipients responded. This resulted in participants perceiving less warmth when coworkers used ChatGPT during their actions. Furthermore, diminished warmth perceptions prompted participants to feel a reduced sense of obligation and made them less likely to reciprocate help. Therefore, while AI tools may enhance individual efficiency, they can also incur relational costs and undermine the social glue that managers and practitioners desire from citizenship behaviors.
Second, we extend citizenship theory, much of which assumes that when employees go out of their way to help, they are making a discretionary investment of time, energy, or expertise—and that this investment triggers social and psychological mechanisms like gratitude, obligation, or reciprocity. However, as new technologies enable people to offload parts of their work, this assumption no longer holds. Our findings suggest that the relational effects of helping depend not just on what someone does, but on how they do it. A helping behavior perceived as supported by ChatGPT is viewed as less warm, which reduces felt obligations to reciprocate. We therefore expand citizenship theory to account for the growing role of automation and AI in how helping behaviors are interpreted and responded to.
Of course, other workplace tools distinct from AI are available too. What we view as particularly unique about AI or ChatGPT is the degree to which people perceive it to take over the task entirely. Research shows that when people are told a product was created “with AI assistance,” they often assume the AI generated it almost entirely, with little human involvement (Altay & Gilardi, 2024). This perception of full delegation makes AI qualitatively different from tools like templates, calculators, spell-checks, or search engines, which are seen as supporting human investment/effort rather than replacing it. In helping contexts, this matters: recipients may infer that the helper exerted minimal investment, leading to reduced feelings of warmth, obligation, and willingness to reciprocate. Using ChatGPT and related generative AIs may reshape how help is socially evaluated.
Third, we extend social perception theory by considering warmth as one mediator between received help and felt obligations. This discovery illuminates one possible pathway explaining why AI-assisted help may lead to diminished enacted help and felt obligations in the realm of coworker interactions. Although prior research has acknowledged the potential impact of AI on social dynamics, our theory describes one specific mechanism by which this can occur. By pinpointing warmth as a mediator between AI-assisted help and subsequent behavioral outcomes, we provide an understanding of one psychological process at play. Although we focused on warmth, there may be other mechanisms that explain why received help, when augmented with AI, backfires. Additional constructs may include effort (Wei et al., 2023), time spent (Rusbult, 1980), or authenticity (Brodsky, 2021). For example, Liu et al. (2023) asked participants to imagine exchanging text messages with a hypothetical friend, Taylor, regarding recent interactions (e.g., speaking about an upcoming birthday, needing advice after having a conflict with a colleague). Compared to those who believed Taylor sent messages individually, those who learned that the messages were aided by an AI system perceived significantly lower effort and relationship satisfaction in the hypothetical interaction. Thus, mediators in addition to warmth may operate as psychological perceptions relating to future felt obligations. Or, it may be that these constructs are upstream mediators from received help to warmth. In other words, perhaps individuals form perceptions of presumed time and effort taken, and then these perceptions fuel an overall assessment of warmth, leading to additional downstream consequences. These ideas may be interesting domains for future studies.

Practical Implications

There is an important and ongoing societal conversation about the fourth industrial revolution and the rapid incorporation of AI into the twenty-first century workplace. The conversation often centers on how this technology can enhance employee productivity by helping them to understand trends, improve communication, and improve data analytics. However, it is vital to acknowledge that the implementation of new technology can bring about real costs that are sometimes overshadowed by these promises. Our study introduces a significant cautionary note, highlighting that while technology and AI systems may reduce execution costs, they could potentially elevate relational costs. In other words, although these systems streamline tasks, they might impact interpersonal relationships, potentially influencing perceptions of warmth and, consequently, feelings of obligation and enacted help. Practically, organizations should be mindful of the social implications when incorporating AI into the workplace. The findings suggest that coworkers using AI for assistance may be perceived as less warm, which, in turn, could affect feelings of obligation and subsequent collaborative behaviors. This insight encourages a thoughtful approach to the integration of AI, considering not only efficiency gains but also the potential impact on workplace relationships and collaborative efforts.
Given the speed at which AI tools, LLMs, and algorithms are being integrated into organizations across the globe (Bamberger, 2018; Makridakis, 2017; Von Krogh, 2018), we urge managers to be aware of the potential downside of over-relying on AI tools when employees enact helping behaviors. This insight may be particularly useful for organizational design efforts, whereby managers and organizations may find it necessary to think deeply about the extent to which LLMs creep into typically human-centered social interactions. Just as new technologies bring with them digital undertows at the industry level (Scott & Orlikowski, 2022), this same phenomenon may operate at the individual level as well. Citizenship behaviors are important in applied contexts because, in the aggregate over time and people, they raise the overall effectiveness of a collective. However, if one individual realizes efficiency gains by using an AI tool but then later incurs costs from failed reciprocity, the collective net outcome is potentially zero. In addition to the growing literature on when received help backfires, this study urges leaders to consider helping processes in their entirety across individuals.
An important caveat to our findings is that the broader relational costs and benefits of AI-assisted help at the collective level remain an open question for future research. One could argue that the sender may in fact gain from using AI because it frees up time to assist with other tasks or support additional coworkers. However, our studies point to the fact that, although AI may increase helping availability initially, it can disrupt the give-and-take of interpersonal exchange relationships. For instance, in real organizational settings, imagine a team member who uses AI to summarize client feedback, saving a colleague from doing the task manually. The colleague may appreciate the help in the moment but—per our findings—may be less likely to reciprocate later (e.g., by helping with a different task under deadline). If that second task then goes unfinished, the team’s collective performance could suffer, even though the initial AI-assisted help was well intended and efficient. Moreover, the recipient may now see the sender as less warm (relative to someone who provided help without AI). So, while the sender may “gain” from using AI (by freeing up time for other tasks), the recipient perceives the sender as less warm and becomes less likely to reciprocate, and the subsequent task that required support may go unaddressed. From a dyadic perspective, this creates a potential relational cost: the efficiency gained in the initial helping act is offset by a breakdown in reciprocity. On the other hand, if we zoom out even further and consider the total number of people the sender might be able to assist due to AI-enabled efficiency, the broader organizational impact could look quite different. This tension between efficiency and reciprocity is an important area for continued investigation.

Limitations

Our findings should be considered in light of several limitations. The distinction between task-focused and person-oriented helping behaviors was based on Settoon and Mossholder’s (2002) traditional classification, which has been a fundamental distinction applied to helping research since its inception (Ehrhart, 2018). However, in the last several years, a number of additional dimensions of help type have emerged in the literature, including anticipatory versus reactive helping (Harari et al., 2022), autonomy versus dependency helping (Chernyak-Hai et al., 2024), and empowering versus non-empowering helping (Lee et al., 2023). Anticipatory help involves providing assistance in advance of being asked. Autonomy helping occurs when the helper provides the recipient with information and resources that will allow him/her to resolve the issue in the future without help. Empowering help occurs when the recipient is an interactive participant in the resolution of an issue and the event is viewed as a mastery experience. It may be beneficial for future research to extend our work by considering the extent to which AI-assisted help across these various help types influences recipient perceptions and attributions.
We also held the qualities of Caregiver J constant across conditions. This individual, in our experimental scenario, not only provided help but also asked for assistance from participants. Prior research has shown that several individual differences, including status (Burke et al., 1976; Tai et al., 2023; Toegel et al., 2013), gender (Akinola et al., 2018; Heilman, 2012), personality (Hughes et al., 2022), and minority identity (Nguyen & Ryan, 2008) have important implications for help receiving and giving dynamics. For example, Tai et al. (2023) found that recipients felt more threatened when receiving help from individuals perceived to be of greater status. We recommend exploring in future research how these characteristics of the help giver (and asker) may influence the recipient’s cognitive, emotional, and behavioral reactions.
Moreover, we assessed warmth, competence, and felt obligations concurrently throughout our studies, which raises potential concerns for common method variance (CMV) bias (Podsakoff et al., 2012). This simultaneous measurement makes it difficult to discern the true separation between warmth and obligations to reciprocate. Similarly, our research provides a snapshot of the immediate reactions to AI-assisted help. However, it is crucial to understand how these perceptions and behaviors evolve over time. For instance, repeated exposure to AI-assisted help may lead to changes in perceptions of warmth and felt obligations as individuals become more accustomed to this type of assistance. Longitudinal studies would provide valuable insights into how AI-assisted help influences social dynamics in organizations over the long term. Last, in our studies, AI-assisted help was explicitly disclosed to the recipients. However, the dynamics of revealing or concealing AI involvement in helping behaviors is a complex issue that warrants further investigation. The identity management literature suggests that how individuals manage information about their identity and actions—for example, choosing to reveal, conceal, or signal characteristics about oneself (Clair et al., 2005)—influences interpersonal cognitions and behaviors (Button, 2001; Jones et al., 2016). Future research could explore the strategic choices behind these disclosures and their implications. Questions, such as how third parties discover AI delegation, whether individuals are transparent about their reliance on AI, and the extent to which organizational culture supports open use of AI tools, will be critical to understand moving forward.
Our studies focused on situations in which AI use is explicitly acknowledged by the helper—where the recipient clearly knows that a tool like ChatGPT was used to generate the assistance. We modeled this after instances demonstrated in professional and public communication, where individuals voluntarily disclose AI involvement. For example, emails may include statements such as “Drafted with ChatGPT,” academic presenters sometimes begin talks with “Slides generated using AI,” and social media platforms like TikTok may automatically display “AI-generated” tags on videos. In workplace settings, employees might preface a message with “I used ChatGPT to help write this,” or an executive’s letter to stakeholders may be revealed as AI generated. These kinds of explicit disclosures create a social context in which recipients are aware that AI was involved in the process. While our findings apply directly to such clear and transparent use cases, we recognize that many real-world situations involve more implicit or ambiguous AI use. Future research should explore how undisclosed or passively signaled AI assistance affects social perceptions differently, particularly regarding issues of intentionality and transparency.
In our studies, participants did not directly use the output that Caregiver J generated. As such, our design held the perceived utility of the help relatively constant across conditions, focusing instead on the source of the help (AI-assisted vs. unassisted) and the type of help (task vs. person-oriented). We attempted to capture utility indirectly in the task condition, where Caregiver J removed a responsibility from the participant’s agenda. However, this differs meaningfully from situations in which recipients evaluate or depend on the actual quality of an AI-generated product. It is possible that when help includes a deliverable—like an initial financial model or source code—the effectiveness of that product could influence perceptions of obligation, potentially increasing felt obligation if the AI-generated result is especially useful. Future research should explore this alternative pathway by examining how actual use and perceived utility of AI-assisted outputs interact with perceptions of effort and warmth in shaping social exchange outcomes.

Conclusion

In conclusion, our research makes it apparent that AI tools like ChatGPT can inadvertently cool the warmth that fuels the reciprocal exchanges foundational to workplace citizenship behaviors. The deployment of AI in acts of workplace assistance, while it may streamline tasks, also runs the risk of diminishing the personal touch that fosters a sense of obligation and the desire to help in return. This interplay between AI’s practicality and the subtle psychological undercurrents it may disrupt calls for a thoughtful calibration of how we integrate these tools into collaborative spaces. Moreover, the introduction of warmth as a pivotal mediator emphasizes the need to consider the psychological effects of AI in collaborative environments. As technological integration becomes more prevalent, our work suggests the importance of a strategic approach to implementing AI—one that preserves the fundamental human elements of warmth and mutual support that are vital to the thriving of organizations.

Declarations

Competing interests

The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://​creativecommons.​org/​licenses/​by/​4.​0/​.

Title
Machines in the Middle: Using Artificial Intelligence (AI) While Offering Help Affects Warmth, Felt Obligations, and Reciprocity
Authors
Christopher R. Dishop
Allen S. Brown
Ping-Ya Chao
Andrew Kuznetsov
Anita Williams Woolley
Publication date
31.10.2025
Publisher
Springer US
Published in
Journal of Business and Psychology
Print ISSN: 0889-3268
Electronic ISSN: 1573-353X
DOI
https://doi.org/10.1007/s10869-025-10068-x

Appendix 1.

Description of experimental task for studies 1 and 3.
Participants were asked to work as informal caregivers, taking part in an online shift as a member of a caregiving team. Informal caregivers provide ongoing care for another individual—e.g., pets, children, family members, friends—sometimes with a chronic condition or disability. Their responsibilities can vary from checking in on care recipients to managing engagements with the healthcare system to assisting with activities of daily living such as bathing, eating, and medication management. Informal caregivers serve an important role in the larger care team, directly performing or influencing work that shapes the recipients’ day-to-day life. Because of the growing importance of this work, several calls for more in-depth research into this domain have emerged in the literature (Gabriel et al., 2022).
We drew from a substantial body of qualitative research on informal caregivers to inspire the design of our experiment (Bainbridge & Broady, 2017; Mueller et al., 2022; Nikkah, 2021; Nikkah et al., 2022; Schurgin et al., 2021; Tang et al., 2018). The online platform used in the experiment incorporated various features that reflected real-life caregiving applications. The first element included the actions participants were asked to perform, which were modeled after activities of daily living that caregivers are frequently involved in carrying out (Chang et al., 2023). In the scenario the experiment was designed around, participants were tasked with overseeing the needs of a care recipient, Turto (displayed on their screens). Throughout the shift, Turto required food and medication at specified times or under certain conditions, which participants had to correctly identify, select from an array of options, and administer. The medications varied in color and type, each designed to address different health issues Turto might encounter, adding complexity to the task. Additionally, Turto was programmed to exhibit dynamic behaviors and physical responses that mimicked real-life scenarios: for example, if the wrong medication was administered or if it was not given in a timely manner, Turto would experience symptoms marked by visible changes such as Turto’s eyes bulging and turning a concerning color. This feedback loop continued until the participant identified the problem and addressed it such as by administering the correct medication or feeding Turto a snack. This simulated the immediate and direct consequences of caregiving decisions in a controlled experimental environment.
The second element was the experimental platform interface, which was deliberately created to emulate systems commonly used by caregivers. This interface included several features intended to simulate the often fragmented and minimalistic information systems available to caregivers. It facilitated online communication, allowing participants to interact with other virtual caregivers, mimicking the collaborative environment in which real caregivers often operate. The platform also provided a repository of information, albeit limited, where participants could search for and retrieve details on how to care for Turto, such as identifying the correct medications or food at critical times.
Finally, the scenario, as operationalized on the platform, included features to capture the social-cognitive environments caregivers often encounter. Participants dealt with the dual nature of caregiving that alternates between routine, mundane tasks, and urgent, complex situations requiring immediate attention and rapid decision-making. The simulation also created opportunities for participants to demonstrate initiative and problem-solving skills, such as when unexpected complications arose with the care recipient’s health, requiring them to identify and administer the correct food or medication. It was also deliberately designed with limited instructions and features, simulating the often minimal and unclear information that caregivers have to work with in practical settings. This aspect of the experiment aimed to highlight the necessity for caregivers to operate effectively under conditions of uncertainty and with incomplete knowledge of the care recipient’s overall health history or the efforts of other team members. Participants were briefed minimally, simulating the real-world scenarios where caregivers often start without comprehensive training or orientation to the specific care environment. They also entered a shift from where the previous caregiver left off, enhancing the continuity and realism of the caregiving experience.
As stated in Colquitt (2008), laboratory studies should aim to capture the psychological essence of the constructs they are examining. In our experiment, this was achieved by designing a platform that simulates online caregiving activities, such as medication management and monitoring, which are central to the caregiving role. The dynamic interactions with the care recipient, Turto, and the immediate feedback through observable changes in his condition based on caregiver actions, are examples of how the experiment mirrors complex real-life caregiving scenarios, thereby enhancing the psychological realism. Moreover, the platform’s design for communication and information retrieval mimics real-life digital tools used by caregivers, providing participants with an authentic environment to navigate (Fig. 2).
Fig. 2
Basic interface for the caregiving task

Simulating Realistic Helping Constraints: Evidence of Fidelity

It is important to establish that the experimental task created a psychologically valid context for our outcome, the decision to reciprocate help. One key goal of our experimental task was to simulate the real-world tension between completing one’s own responsibilities and choosing to help others—an essential feature of many workplace helping decisions. Organizational behavior scholars have emphasized that helping often requires a trade-off, as individuals must allocate limited time and cognitive resources between their core duties and discretionary acts of citizenship (Bergeron et al., 2013; Rapp et al., 2013; Watkins et al., 2022). We tried to embed this tension directly into our task by requiring participants to actively monitor and respond to Turto’s evolving needs, while also presenting them with a voluntary helping opportunity from another caregiver. To ensure that our task successfully elicited this tension, which characterizes helping decisions in organizations, we examined an open-ended question that participants—specifically the undergraduate sample—responded to as the last question before completing the study. For this sample, the last screen that participants saw before ending the study was a single open-ended text box asking, “While you were caring for Turto, Caregiver J sent you a message (i.e., ‘Hey! Could you help me out?’). Did you want to help Caregiver J with his/her request? Why or why not?” We used this in pilot sessions to gauge participant thinking and/or confusion post hoc, but we also left it in as the last question in our full collection with undergraduate students. Responses to the question provide evidence that participants experienced our intended trade-off. For instance, some sample quotes include (original errors in spelling retained throughout):
  • “No, because I didn’t want to miss the signs from Turto to give him a pill, drop a snack, etc.”
  • “I did, but I didn’t really want to at that particular moment while I was caring for Turto, because Turto’s needs seem to be more important.”
  • “I didn’t mind helping them with their request because I wanted to help them and they had slightly helped me before, but was annoyed that it interfered with my caretaking.”
  • “Yes. At first i was a bit overwhelmed with all the things Turto needed. I felt all my attention would have to be on Turto and i couldnt help J, but i calmed down and figured out what needed to be done. As soon as i had a chance i checked his PDF and answered him!”
  • “Yes, but taking care of Turto feels more important and i did not have time to help them.”
  • “No, I did not want to help him as I had an ongoing responsability.”
  • “not really because i had to be watchful of Turto and i doubted my multitasking abilities.”
In this sample, 40% of participants explicitly mentioned the challenge of competing demands in their responses, reinforcing the fidelity of our design. These qualitative responses provide at least partial support that our experimental task effectively captured the real-world challenge we sought to simulate: individuals must make an effortful decision to go out of their way to help because they are often occupied with their own duties. Many participants explicitly acknowledged this tension in their responses.

Appendix 2.

Study 3

Sample

Participants were 246 undergraduate students from a Mid-Atlantic University. The average age of participants was 21.1 years (SD = 5.9), and 60% were female. All participants passed attention checks. We recruited 248 students, but 2 had software/internet compatibility issues, rendering our final sample 246. Students received course credit through an online SONA system for their participation.

Experimental Setting, Procedure, and Conditions

The experiment was identical to study 1, though we were also able to collect data allowing us to assess alternative mediators, reported below. All other procedures and conditions were the same as study 1. Undergraduates participated in the experiment using their own personal computers. Unfortunately, due to faulty logic within the randomization of participants to condition cells, we had unbalanced data upon completing our collection: a greater number of participants were assigned to the task help condition relative to the person help condition. Specifically, sample sizes across study conditions were as follows: No AI Person Help = 54, AI-Assisted Person Help = 32, No AI Task Help = 86, AI-Assisted Task Help = 74. To deal with this issue, we used maximum likelihood estimation with robust standard errors during model estimation.

Measures

Warmth, Competence, Felt Obligations, and Enacted Help

We used the same measures as reported in study 1 to assess warmth, competence, felt obligations, and enacted help.

Trust

Because trust has been posited as a key antecedent of citizenship behaviors (Colquitt et al., 2012; Konovsky & Pugh, 1994; Korsgaard et al., 2015; Podsakoff et al., 1990), we used items capturing the degree to which participants were willing to be vulnerable and put themselves at risk (Gillespie, 2011). Following the stem, “I would,” participants responded to three items, including “let Caregiver J have influence over issues that are important to me,” “let Caregiver J have control over critical problems relevant to me,” and “rely on Caregiver J to represent my work accurately to others” (Mayer & Davis, 1999; \(\alpha\) = 0.89).

Perceived Inauthenticity

Perceived inauthenticity is also an important construct during online interpersonal exchanges (Brodsky, 2021), especially when individuals perceive others as faking their emotions (Grandey et al., 2005). Consistent with prior research (Frank et al., 1993; Grandey et al., 2005), participants read the stem, “Caregiver J…” and then responded to the items, “faked how he/she felt,” “was insincere,” and “expressed inauthentic emotions” (\(\alpha\) = 0.93).

Integrity

Finally, integrity, or the extent to which one party believes that another adheres to sound principles, is yet another component that prior research has identified as important in helping exchanges (Dineen et al., 2006). Items from Mayer and Davis (1999) were used to assess integrity. Participants first read the stem, “Caregiver J…” and then responded to “seems to be guided by sound principles,” “has a strong sense of justice,” “tries to be fair when dealing with others/me” (\(\alpha\) = 0.94).

Results

Descriptive Statistics and Main and Indirect Effects via Structural Equation Modeling

Appendix Tables 9, 10, and 11 report zero-order correlations and the structural path coefficients from the hypothesized model (path model with all simultaneous mediators included: χ2 = 112.27, df = 17, CFI = 0.93, TLI = 0.62, RMSEA = 0.09, SRMR = 0.03). Consistent with study 1, both warmth and competence positively correlated with felt obligations, and felt obligations correlated positively with enacted help. Among the additional mediators (trust, inauthenticity, integrity), only inauthenticity correlated (negatively) with felt obligations. Regarding the structural equation model, we controlled for trust, inauthenticity, and integrity by regressing felt obligations on these variables in addition to the focal predictors as well as demographic controls. This approach allowed us to test the effects of warmth and competence while accounting for these three alternatives. As shown, coworker AI use during the helping event was negatively related to coworker warmth (estimate = −0.17, p = 0.02), supporting Hypothesis 1. In support of Hypothesis 2, warmth was positively related to felt obligations (estimate = 0.46, p = 0.00). Hypothesis 3 predicted that coworker AI use would be indirectly related to felt obligations via perceptions of coworker warmth. This indirect effect was significant (indirect effect = −0.10; 95% CI [−0.20, −0.01]). Moreover, felt obligation was positively related to enacted help (estimate = 0.21, p = 0.02), supporting Hypothesis 4. As shown in Table 11, we also found that the indirect effect of perceived coworker warmth on enacted help via felt obligation was significant (indirect effect = 0.14; 95% CI [0.02, 0.26]). Finally, the serial mediation effect of AI tool use on enacted help via our serial mediators was significant (−0.024; 95% CI [−0.064, −0.002]). Among the alternative mediators, only perceived inauthenticity significantly related to felt obligations (estimate = −0.12, p = 0.02).
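A minimal sketch of the study 3 specification, under the same hypothetical naming assumptions as the earlier sketches, shows how the serial indirect effect (AI use → warmth → felt obligations → enacted help) can be defined as a product of labeled paths; estimator = "MLR" requests the robust standard errors noted above.

```r
library(lavaan)

model3 <- '
  # Mediator equations
  warmth     ~ a1*ai_use + help_type + age + sex + performance
  competence ~ a2*ai_use + help_type + age + sex + performance

  # Felt obligations, controlling for the three alternative mediators
  obligation ~ b1*warmth + b2*competence + integrity + inauthenticity +
               trust + ai_use + help_type + age + sex + performance

  # Behavioral outcome
  help ~ c1*obligation + age + sex + performance

  # Serial indirect effect: AI use -> warmth -> obligation -> enacted help
  serial_warmth := a1*b1*c1
'

fit3 <- sem(model3, data = study3, estimator = "MLR")
summary(fit3, fit.measures = TRUE)
```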
Table 9
Correlations among study 3 variables

| Variable | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Age | | | | | | | | | | | |
| 2. Sex | 0.08 | | | | | | | | | | |
| 3. AI use | −0.02 | 0.03 | | | | | | | | | |
| 4. Help type | 0.00 | 0.13* | −0.09 | | | | | | | | |
| 5. Performance | 0.10 | −0.15* | 0.00 | −0.04 | | | | | | | |
| 6. Warmth | −0.02 | −0.02 | −0.16* | 0.17* | −0.04 | | | | | | |
| 7. Competence | −0.01 | −0.04 | −0.15* | −0.15* | −0.01 | 0.51* | | | | | |
| 8. Integrity | 0.09 | 0.06 | −0.08 | −0.03 | −0.06 | 0.15* | 0.07 | | | | |
| 9. Inauthenticity | 0.06 | −0.07 | 0.12 | −0.06 | 0.02 | −0.15* | −0.16* | −0.04 | | | |
| 10. Trust | 0.05 | 0.09 | 0.04 | 0.04 | 0.15* | 0.05 | 0.10 | 0.10 | −0.12 | | |
| 11. Felt obligation | 0.16* | 0.12 | −0.01 | −0.07 | −0.00 | 0.46* | 0.41* | 0.09 | −0.19* | 0.08 | |
| 12. Enacted help | −0.05 | −0.02 | −0.44** | 0.03 | −0.12 | 0.25** | 0.18* | −0.07 | −0.10 | −0.02 | 0.13* |

Sex is coded as 0 = female and 1 = male. AI use is coded as 0 = did not use AI and 1 = used AI. Help type is coded as 0 = task and 1 = person
*p < 0.05; **p < 0.01
Table 10
Condition means and ANOVA contrasts on warmth for study 3
(Table available as an image: https://static-content.springer.com/image/art%3A10.1007%2Fs10869-025-10068-x/MediaObjects/10869_2025_10068_Tab10_HTML.png)
*p < 0.05; **p < 0.01
Table 11
Path-analysis results (study 3)

| Predictor | Warmth | Competence | Felt obligation | Enacted help |
|---|---|---|---|---|
| Age | −0.02 (0.01) [0.73] | −0.01 (0.01) [0.91] | 0.02* (0.01) [0.01] | −0.02** (0.01) [0.00] |
| Sex | −0.05 (0.08) [0.52] | −0.02 (0.08) [0.85] | 0.18* (0.08) [0.02] | 0.14 (0.12) [0.23] |
| Performance | 0.00 (0.00) [0.56] | 0.00 (0.00) [0.77] | 0.00 (0.00) [0.80] | 0.00 (0.00) [0.57] |
| AI use | −0.17* (0.08) [0.02] | −0.22* (0.09) [0.01] | 0.13 (0.08) [0.11] | |
| Help type | 0.20* (0.08) [0.01] | −0.23* (0.09) [0.01] | −0.20* (0.08) [0.02] | |
| Warmth | | | 0.46** (0.06) [0.00] | |
| Competence | | | 0.21** (0.06) [0.00] | |
| Integrity | | | −0.02 (0.07) [0.83] | |
| Inauthenticity | | | −0.12* (0.05) [0.02] | |
| Trust | | | 0.01 (0.04) [0.91] | |
| Felt obligation | | | | 0.21* (0.11) [0.02] |
| Indirect effects | | | | |
| AI use via warmth (coefficient, 95% CI) | | | −0.10 [−0.20, −0.01] | |
| AI use via competence (coefficient, 95% CI) | | | −0.09 [−0.17, −0.02] | |
| Warmth via felt obligation (coefficient, 95% CI) | | | | 0.14 [0.02, 0.26] |
| Competence via felt obligation (coefficient, 95% CI) | | | | 0.10 [0.02, 0.18] |
| AI use via serial mediators (coefficient, 95% CI) | | | | −0.024 [−0.064, −0.002] |

Values in parentheses are standard errors; values in brackets are p values; entries are unstandardized coefficients
*p < 0.05; **p < 0.01
Regarding our research question and competence, coworker AI use during the helping event was negatively related to coworker competence (estimate = −0.22, p = 0.02). Competence was positively related to felt obligations (estimate = 0.21, p = 0.00), and the indirect effect of coworker AI-assisted help on felt obligations via competence was significant (indirect effect = −0.09; 95% CI [−0.17, −0.02]).
1
We also tested models replacing enacted help with (1) enacted help attempt (a more general assessment of help based on participants’ sent messages), (2) enacted help survey items, and (3) a log transformation of enacted help. First, we operationalized enacted help attempt by recoding our behavioral DV (participant sent messages) using a dichotomous scheme: participants were given a score of 0 if they did not respond to Caregiver J’s request for help (e.g., no response or “Sorry, no”) and a score of 1 if they tried to assist in some way. This dichotomous outcome was incorporated into our structural equation model by using the WLSMV estimator in lavaan. Results are presented in Table 4, and, as shown, all effects/coefficients are consistent with our original model. Second, at the end of the study, participants answered five OCB survey items (Dierdorff & Rubin, 2022) (e.g., “Consider your role as a caregiver. How compelled would you feel to engage in the following things? [1] Lend a compassionate ear when someone on your caregiving team has a work problem; [2] Finish something for a teammate who has to leave early; [3] Help a teammate who has too much to do; [4] Help a less capable teammate; [5] Defend a teammate who is being unfairly ‘put down’”) (1 = not at all likely; 5 = extremely likely; α = 0.90). Results are presented in Table 5, and, as shown, the effects are similar to our original model. Finally, because enacted help was right skewed (mean = 0.95 on a 0 to 4 scale), we also assessed a model where enacted help was log transformed to account for its non-normal distribution. After incorporating this log-transformed response variable into our structural equation model, the relationship between felt obligations and (transformed) enacted help was estimate = 0.12, SE = 0.04, p = 0.01. In sum, structural paths are consistent across enacted help whether measured via rater scores, a dichotomous help attempt, OCB survey items assessed after the caregiving shift, or a log transformation of rater scores.
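For illustration, these robustness checks could be implemented along the following lines; help_score (the 0–4 rater score), obligation (the felt obligations composite), and the ocb1–ocb5 item names are hypothetical placeholders.

```r
library(lavaan)

# (1) Dichotomous help attempt (0 = no assistance, 1 = tried to assist),
#     estimated with the WLSMV estimator for categorical outcomes
study3$help_attempt <- ifelse(study3$help_score > 0, 1, 0)
fit_attempt <- sem('help_attempt ~ obligation', data = study3,
                   ordered = "help_attempt", estimator = "WLSMV")

# (2) Five OCB survey items averaged into a scale score that replaces the
#     behavioral measure
study3$ocb <- rowMeans(study3[, paste0("ocb", 1:5)])
fit_ocb <- sem('ocb ~ obligation', data = study3)

# (3) Log transformation to address the right skew of enacted help
study3$help_log <- log1p(study3$help_score)
fit_log <- sem('help_log ~ obligation', data = study3)
```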
 
Zurück zum Zitat Adams, J. S. (1965). Inequity in social exchange. In L. Berkowitz (Ed.), Advances in experimental and social psychology (Vol. 2, pp. 267–299). Academic Press.
Zurück zum Zitat Ahmad, A., Waseem, M., Liang, P., Fahmideh, M., Aktar, M. S., & Mikkonen, T. (2023, June). Towards human-bot collaborative software architecting with ChatGPT. In Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering (pp. 279–285).
Zurück zum Zitat Akinola, M., Martin, A. E., & Phillips, K. W. (2018). To delegate or not to delegate: Gender differences in affective associations and behavioral responses to delegation. Academy of Management Journal, 61(4), 1467–1491. https://​doi.​org/​10.​5465/​amj.​2016.​0662CrossRef
Zurück zum Zitat Altay, S., & Gilardi, F. (2024). People are skeptical of headlines labeled as AI-generated, even if true or human-made, because they assume full AI automation. PNAS nexus, 3(10), pgae403.
Zurück zum Zitat Anderson, C., & Kilduff, G. J. (2009). Why do dominant personalities attain influence in face-to-face groups? The competence-signaling effects of trait dominance. Journal of Personality and Social Psychology, 96(2), 491.CrossRefPubMed
Zurück zum Zitat Ashford, S. J., Blatt, R., & VandeWalle, D. (2003). Reflections on the looking glass: A review of research on feedback-seeking behavior in organizations. Journal of Management, 29(6), 773–799. https://​doi.​org/​10.​1016/​s0149-2063_​03_​00079-5CrossRef
Zurück zum Zitat Bainbridge, H. T., & Broady, T. R. (2017). Caregiving responsibilities for a child, spouse or parent: The impact of care recipient independence on employee well-being. Journal of Vocational Behavior, 101, 57–66. https://​doi.​org/​10.​1016/​j.​jvb.​2017.​04.​006CrossRef
Zurück zum Zitat Bamberger, P. A. (2018). AMD—Clarifying what we are about and where we are going. Academy of Management Discoveries, 4(1), 1–10.CrossRef
Zurück zum Zitat Bergeron, D. M. (2007). The potential paradox of organizational citizenship behavior: Good citizens at what cost? Academy Of Management Review, 32(4), 1078–1095. https://​doi.​org/​10.​5465/​amr.​2007.​26585791CrossRef
Zurück zum Zitat Bergeron, D. M., Shipp, A. J., Rosen, B., & Furst, S. A. (2013). Organizational citizenship behavior and career outcomes: The cost of being a good citizen. Journal of Management, 39(4), 958–984. https://​doi.​org/​10.​1177/​0149206311407508​CrossRef
Zurück zum Zitat Bevan, R. (2023, October 8). Gollum Publisher used ChatGPT to apologize for awful launch, say sources. The Gamer. https://​www.​thegamer.​com/​the-lord-of-the-rings-gollum-game-apology-written-by-chatgpt/​
Zurück zum Zitat Blau, P. (1964). Exchange and power in social life. Wiley.
Zurück zum Zitat Bolino, M. C., Klotz, A. C., & Turnley, W. H. (2018). The unintended consequences of organizational citizenship behaviors. In P. M. Podsakoff, S. B. Mackenzie, & N. P. Podsakoff (Eds.), The Oxford handbook of organizational citizenship behavior (pp. 185–202). Oxford University Press.
Zurück zum Zitat Bottom, W. P., Holloway, J., Miller, G. J., Mislin, A., & Whitford, A. (2006). Building a pathway to cooperation: Negotiation and social exchange between principal and agent. Administrative Science Quarterly, 51(1), 29–58. https://​doi.​org/​10.​2189/​asqu.​51.​1.​29CrossRef
Zurück zum Zitat Bowling, N. A., Beehr, T. A., & Swader, W. M. (2005). Giving and receiving social support at work: The roles of personality and reciprocity. Journal of Vocational Behavior, 67(3), 476–489. https://​doi.​org/​10.​1016/​j.​jvb.​2004.​08.​004CrossRef
Zurück zum Zitat Brodsky, A. (2021). Virtual surface acting in workplace interactions: Choosing the best technology to fit the task. Journal Of Applied Psychology, 106(5), 714. https://​doi.​org/​10.​1037/​apl0000805CrossRefPubMed
Zurück zum Zitat Burke, R. J., Weir, T., & Duncan, G. (1976). Informal helping relationship in work organizations. Academy Of Management Journal, 19(3), 370–377. https://​doi.​org/​10.​5465/​255604CrossRef
Zurück zum Zitat Button, S. B. (2001). Organizational efforts to affirm sexual diversity: A cross-level examination. Journal Of Applied Psychology, 86(1), 17. https://​doi.​org/​10.​1037/​/​0021-9010.​86.​1.​17CrossRefPubMed
Zurück zum Zitat Cai, Z., Fan, X., & Du, J. (2017). Gender and attitudes toward technology use: A meta-analysis. Computers & Education, 105, 1–13.CrossRef
Zurück zum Zitat Chang, S., Luckett, T., Phillips, J., Agar, M., Lam, L., & DiGiacomo, M. (2023). Factors associated with being an older rather than younger unpaid carer of adults with a chronic health condition: Results from a population-based cross-sectional survey in South Australia. Chronic Illness, 19(1), 208–220. https://​doi.​org/​10.​1177/​1742395321105403​3CrossRefPubMed
Zurück zum Zitat Chernyak-Hai, L., Heller, D., SimanTov-Nachlieli, I., & Weiss-Sidi, M. (2023). Give them a fishing rod, if it is not urgent: The impact of help type on support for helpers’ leadership. Journal of Applied Psychology, 109(4), 551. https://​doi.​org/​10.​1037/​apl0001155CrossRefPubMed
Zurück zum Zitat Chernyak-Hai, L., Heller, D., SimanTov-Nachlieli, I., & Weiss-Sidi, M. (2024). Give them a fishing rod, if it is not urgent: The impact of help type on support for helpers’ leadership. Journal of Applied Psychology, 109(4), 551.CrossRefPubMed
Zurück zum Zitat Choi, J. H., & Schwarcz, D. (2023). Ai assistance in legal analysis: An empirical study. Available at SSRN 4539836.
Zurück zum Zitat Choudhury, P., Starr, E., & Agarwal, R. (2020). Machine learning and human capital complementarities: Experimental evidence on bias mitigation. Strategic Management Journal, 41(8), 1381–1411. https://​doi.​org/​10.​1002/​smj.​3152CrossRef
Zurück zum Zitat Christian, J., Christian, M. S., Garza, A. S., & Ellis, A. P. (2012). Examining retaliatory responses to justice violations and recovery attempts in teams. Journal Of Applied Psychology, 97(6), 1218. https://​doi.​org/​10.​1037/​a0029450CrossRefPubMed
Zurück zum Zitat Clair, J. A., Beatty, J. E., & Maclean, T. L. (2005). Out of sight but not out of mind: Managing invisible social identities in the workplace. Academy Of Management Review, 30(1), 78–95. https://​doi.​org/​10.​5465/​amr.​2005.​15281431CrossRef
Zurück zum Zitat Clark, K., & Sefton, M. (2001). The sequential prisoner’s dilemma: Evidence on reciprocation. The Economic Journal, 111(468), 51–68. https://​doi.​org/​10.​1111/​1468-0297.​00588CrossRef
Zurück zum Zitat Colquitt, J. A. (2008). From the editors publishing laboratory research in AMJ: A question of when, not if. Academy Of Management Journal, 51(4), 616–620. https://​doi.​org/​10.​5465/​amr.​2008.​33664717CrossRef
Colquitt, J. A., LePine, J. A., Piccolo, R. F., Zapata, C. P., & Rich, B. L. (2012). Explaining the justice–performance relationship: Trust as exchange deepener or trust as uncertainty reducer? Journal of Applied Psychology, 97(1), 1. https://doi.org/10.1037/a0025208
Connelly, B. L., Certo, S. T., Ireland, R. D., & Reutzel, C. R. (2011). Signaling theory: A review and assessment. Journal of Management, 37(1), 39–67. https://doi.org/10.1177/0149206310388419
Cropanzano, R., & Mitchell, M. S. (2005). Social exchange theory: An interdisciplinary review. Journal of Management, 31(6), 874–900. https://doi.org/10.1177/0149206305279602
Cuddy, A. J., Fiske, S. T., & Glick, P. (2007). The BIAS map: Behaviors from intergroup affect and stereotypes. Journal of Personality and Social Psychology, 92(4), 631. https://doi.org/10.1037/0022-3514.92.4.631
Cuddy, A. J., Fiske, S. T., & Glick, P. (2008). Warmth and competence as universal dimensions of social perception: The stereotype content model and the BIAS map. Advances in Experimental Social Psychology, 40, 61–149. https://doi.org/10.1016/s0065-2601(07)00002-0
Cuddy, A. J., Glick, P., & Beninger, A. (2011). The dynamics of warmth and competence judgments, and their outcomes in organizations. Research in Organizational Behavior, 31, 73–98. https://doi.org/10.1016/j.riob.2011.10.004
Daft, R. L., & Lengel, R. H. (1986). Organizational information requirements, media richness and structural design. Management Science, 32(5), 554–571. https://doi.org/10.1287/mnsc.32.5.554
Dalal, R. S., & Sheng, Z. (2019). When is helping behavior unhelpful? A conceptual analysis and research agenda. Journal of Vocational Behavior, 110, 272–285. https://doi.org/10.1016/j.jvb.2018.11.009
Deckop, J. R., Cirka, C. C., & Andersson, L. M. (2003). Doing unto others: The reciprocity of helping behavior in organizations. Journal of Business Ethics, 47, 101–113.
Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., ... & Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Technology & Operations Mgt. Unit Working Paper (24-013).
Detert, J. R., Burris, E. R., Harrison, D. A., & Martin, S. R. (2013). Voice flows to and around leaders: Understanding when units are helped or hurt by employee voice. Administrative Science Quarterly, 58(4), 624–668. https://doi.org/10.1177/0001839213510151
Dierdorff, E. C., & Rubin, R. S. (2022). Revisiting reciprocity: How accountability, proactivity, and interpersonal skills shape obligations to reciprocate citizenship behavior. Journal of Business and Psychology, 37(2), 263–281. https://doi.org/10.1007/s10869-021-09743-6
Dietvorst, B. J., & Bharti, S. (2020). People reject algorithms in uncertain decision domains because they have diminishing sensitivity to forecasting error. Psychological Science, 31(10), 1302–1314.
Dineen, B. R., Lewicki, R. J., & Tomlinson, E. C. (2006). Supervisory guidance and behavioral integrity: Relationships with employee citizenship and deviant behavior. Journal of Applied Psychology, 91(3), 622. https://doi.org/10.1037/0021-9010.91.3.622
Donath, J. (2007). Signals in social supernets. Journal of Computer-Mediated Communication, 13(1), 231–251. https://doi.org/10.1111/j.1083-6101.2007.00394.x
Duan, J., Wang, X. H., Janssen, O., & Farh, J. L. (2022). Transformational leadership and voice: When does felt obligation to the leader matter? Journal of Business and Psychology, 1–13. https://doi.org/10.1007/s10869-021-09758-z
Duck, S. (1985). Social and personal relationships. In M. L. Knapp & G. R. Miller (Eds.), Handbook of interpersonal communication (pp. 655–786). Sage.
Ehrhart, M. G. (2018). Helping in organizations: A review and directions for future research. In P. M. Podsakoff, S. B. MacKenzie, & N. P. Podsakoff (Eds.), The Oxford handbook of organizational citizenship behavior (pp. 475–505). Oxford University Press.
Eisenberger, R., Armeli, S., Rexwinkel, B., Lynch, P. D., & Rhoades, L. (2001). Reciprocation of perceived organizational support. Journal of Applied Psychology, 86(1), 42. https://doi.org/10.1037/0021-9010.86.1.42
Eisenbruch, A. B., & Krasnow, M. M. (2022). Why warmth matters more than competence: A new evolutionary approach. Perspectives on Psychological Science, 17(6), 1604–1623. https://doi.org/10.1177/17456916211071087
Emerson, R. M. (1976). Social exchange theory. Annual Review of Sociology, 2, 335–362.
Farh, J. L., Earley, P. C., & Lin, S. C. (1997). Impetus for action: A cultural analysis of justice and organizational citizenship behavior in Chinese society. Administrative Science Quarterly. https://doi.org/10.2307/2393733
Fiske, S. T. (2018). Stereotype content: Warmth and competence endure. Current Directions in Psychological Science, 27(2), 67–73. https://doi.org/10.1177/0963721417738825
Fiske, S. T., Cuddy, A. J. C., Glick, P., & Xu, J. (2002). A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 82, 878–902. https://doi.org/10.1037/0022-3514.82.6.878
Fiske, S. T., Cuddy, A. J., & Glick, P. (2007). Universal dimensions of social cognition: Warmth and competence. Trends in Cognitive Sciences, 11(2), 77–83.
Frank, M. G., Ekman, P., & Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of enjoyment. Journal of Personality and Social Psychology, 64(1), 83. https://doi.org/10.1037/0022-3514.64.1.83
Gabriel, A. S., Arena, D. F., Jr., Calderwood, C., Campbell, J. T., Chawla, N., Corwin, E. S., ... & Zipay, K. P. (2022). Building thriving workforces from the top down: A call and research agenda for organizations to proactively support employee well-being. Research in Personnel and Human Resources Management, 205–272. https://doi.org/10.1108/s0742-730120220000040007
Garba, O. A., Babalola, M. T., & Guo, L. (2018). A social exchange perspective on why and when ethical leadership foster customer-oriented citizenship behavior. International Journal of Hospitality Management, 70, 1–8. https://doi.org/10.1016/j.ijhm.2017.10.018
Goldsmith, D. J., & Fitch, K. (1997). The normative context of advice as social support. Human Communication Research, 23(4), 454–476. https://doi.org/10.1111/j.1468-2958.1997.tb00406.x
Goldstein, N. J., Griskevicius, V., & Cialdini, R. B. (2011). Reciprocity by proxy: A novel influence strategy for stimulating cooperation. Administrative Science Quarterly, 56(3), 441–473. https://doi.org/10.1177/0001839211435904
Gouldner, A. W. (1960). The norm of reciprocity: A preliminary statement. American Sociological Review. https://doi.org/10.2307/2092623
Grandey, A. A., Fisk, G. M., Mattila, A. S., Jansen, K. J., & Sideman, L. A. (2005). Is “service with a smile” enough? Authenticity of positive displays during service encounters. Organizational Behavior and Human Decision Processes, 96(1), 38–55. https://doi.org/10.1016/j.obhdp.2004.08.002
Halbesleben, J. R., & Wheeler, A. R. (2011). I owe you one: Coworker reciprocity as a moderator of the day-level exhaustion–performance relationship. Journal of Organizational Behavior, 32(4), 608–626. https://doi.org/10.1002/job.748
Hancock, J. T., Naaman, M., & Levy, K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25(1), 89–100. https://doi.org/10.1093/jcmc/zmz022
Harari, D., Parke, M. R., & Marr, J. C. (2022). When helping hurts helpers: Anticipatory versus reactive helping, helper’s relative status, and recipient self-threat. Academy of Management Journal, 65(6), 1954–1983. https://doi.org/10.5465/amj.2019.0049
Hatfield, E., Walster, G. W., & Berscheid, E. (1978). Equity: Theory and research. Allyn & Bacon.
Heilman, M. E. (2012). Gender stereotypes and workplace bias. Research in Organizational Behavior, 32, 113–135. https://doi.org/10.1016/j.riob.2012.11.003
Huang, J. L., Cropanzano, R., Li, A., Shao, P., Zhang, X. A., & Li, Y. (2017). Employee conscientiousness, agreeableness, and supervisor justice rule compliance: A three-study investigation. Journal of Applied Psychology, 102(11), 1564. https://doi.org/10.1037/apl0000248
Hughes, I. M., Freier, L. M., & Barratt, C. L. (2022). “Your help isn’t helping me!” Unhelpful workplace social support, strain, and the role of individual differences. Occupational Health Science, 6(3), 387–423. https://doi.org/10.1007/s41542-022-00115-x
Inagaki, T. K., & Eisenberger, N. I. (2013). Shared neural mechanisms underlying social warmth and physical warmth. Psychological Science, 24(11), 2272–2280. https://doi.org/10.1177/0956797613492773
Jones, K. P., King, E. B., Gilrane, V. L., McCausland, T. C., Cortina, J. M., & Grimm, K. J. (2016). The baby bump: Managing a dynamic stigma over time. Journal of Management, 42(6), 1530–1556. https://doi.org/10.1177/0149206313503012
Judd, C. M., James-Hawkins, L., Yzerbyt, V., & Kashima, Y. (2005). Fundamental dimensions of social judgment: Understanding the relations between judgments of competence and warmth. Journal of Personality and Social Psychology, 89, 899–913.
Kervyn, N., Fiske, S. T., & Malone, C. (2022). Social perception of brands: Warmth and competence define images of both brands and social groups. Consumer Psychology Review, 5(1), 51–68. https://doi.org/10.1002/arcp.1074
King, S., & Forlizzi, J. (2007, August). Slow messaging: Intimate communication for couples living at a distance. In Proceedings of the 2007 Conference on Designing Pleasurable Products and Interfaces (pp. 451–454).
Konovsky, M. A., & Pugh, S. D. (1994). Citizenship behavior and social exchange. Academy of Management Journal, 37(3), 656–669. https://doi.org/10.5465/256704
Korsgaard, M. A., Brower, H. H., & Lester, S. W. (2015). It isn’t always mutual: A critical review of dyadic trust. Journal of Management, 41(1), 47–70. https://doi.org/10.1177/0149206314547521
Korsgaard, M. A., Meglino, B. M., Lester, S. W., & Jeong, S. S. (2010). Paying you back or paying me forward: Understanding rewarded and unrewarded organizational citizenship behavior. Journal of Applied Psychology, 95(2), 277. https://doi.org/10.1037/a0018137
Lah, U., Lewis, J. R., & Šumak, B. (2020). Perceived usability and the modified technology acceptance model. International Journal of Human-Computer Interaction, 36(13), 1216–1230. https://doi.org/10.1080/10447318.2020.1727262
Landis, B., Fisher, C. M., & Menges, J. I. (2022). How employees react to unsolicited and solicited advice in the workplace: Implications for using advice, learning, and performance. Journal of Applied Psychology, 107(3), 408.
Lee, H. W., Bradburn, J., Johnson, R. E., Lin, S. H. J., & Chang, C. H. D. (2019). The benefits of receiving gratitude for helpers: A daily investigation of proactive and reactive helping at work. Journal of Applied Psychology, 104(2), 197. https://doi.org/10.1037/apl0000346
Lee, K., & Allen, N. J. (2002). Organizational citizenship behavior and workplace deviance: The role of affect and cognitions. Journal of Applied Psychology, 87(1), 131. https://doi.org/10.1037/0021-9010.87.1.131
Lee, Y. E., Simon, L. S., Koopman, J., Rosen, C. C., Gabriel, A. S., & Yoon, S. (2023). When, why, and for whom is receiving help actually helpful? Differential effects of receiving empowering and nonempowering help based on recipient gender. Journal of Applied Psychology, 108(5), 773. https://doi.org/10.1037/apl0001049
Lennard, A. C., & Van Dyne, L. (2018). Helping that hurts intended beneficiaries: A new perspective on the dark side of helping organizational citizenship behavior. In P. M. Podsakoff, S. B. MacKenzie, & N. P. Podsakoff (Eds.), The Oxford handbook of organizational citizenship behavior. Oxford University Press.
Lester, S. W., Meglino, B. M., & Korsgaard, M. A. (2008). The role of other orientation in organizational citizenship behavior. Journal of Organizational Behavior, 29(6), 829–841. https://doi.org/10.1002/job.504
Levine, E. E., & Schweitzer, M. E. (2015). The affective and interpersonal consequences of obesity. Organizational Behavior and Human Decision Processes, 127, 66–84. https://doi.org/10.1016/j.obhdp.2015.01.002
Liu, B., Kang, J., & Wei, L. (2023). Artificial intelligence and perceived effort in relationship maintenance: Effects on relationship satisfaction and uncertainty. Journal of Social and Personal Relationships, 41(5), 1232–1252. https://doi.org/10.1177/02654075231189899
Liu, Y., Mittal, A., Yang, D., & Bruckman, A. (2022, April). Will AI console me when I lose my pet? Understanding perceptions of AI-mediated email writing. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1–13).
Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes, 151, 90–103.
Lorinkova, N. M., & Perry, S. J. (2019). The importance of group-focused transformational leadership and felt obligation for helping and group performance. Journal of Organizational Behavior, 40(3), 231–247. https://doi.org/10.1002/job.2322
MacKenzie, S. B., Podsakoff, N. P., & Podsakoff, P. M. (2018). Individual and organizational level consequences of organizational citizenship behaviors. In P. M. Podsakoff, S. B. MacKenzie, & N. P. Podsakoff (Eds.), The Oxford handbook of organizational citizenship behavior (pp. 105–148). Oxford University Press.
MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39(1), 99–128. https://doi.org/10.1207/s15327906mbr3901_4
Makridakis, S. (2017). The forthcoming artificial intelligence (AI) revolution: Its impact on society and firms. Futures, 90, 46–60. https://doi.org/10.1016/j.futures.2017.03.006
Mansson, D. H., & Myers, S. A. (2011). An initial examination of college students’ expressions of affection through Facebook. Southern Communication Journal, 76(2), 155–168. https://doi.org/10.1080/10417940903317710
Maruping, L. M., & Agarwal, R. (2004). Managing team interpersonal processes through technology: A task-technology fit perspective. Journal of Applied Psychology, 89(6), 975. https://doi.org/10.1037/0021-9010.89.6.975
Masterson, S. S., Lewis, K., Goldman, B. M., & Taylor, M. S. (2000). Integrating justice and social exchange: The differing effects of fair procedures and treatment on work relationships. Academy of Management Journal, 43(4), 738–748. https://doi.org/10.5465/1556364
Mayer, R. C., & Davis, J. H. (1999). The effect of the performance appraisal system on trust for management: A field quasi-experiment. Journal of Applied Psychology, 84(1), 123. https://doi.org/10.1037/0021-9010.84.1.123
McNeely, B. L., & Meglino, B. M. (1994). The role of dispositional and situational antecedents in prosocial organizational behavior: An examination of the intended beneficiaries of prosocial behavior. Journal of Applied Psychology, 79(6), 836. https://doi.org/10.1037/0021-9010.79.6.836
Mueller, E. L., Cochrane, A. R., Campbell, M. E., Nikkhah, S., & Miller, A. D. (2022). An mHealth app to support caregivers in the medical management of their child with cancer: Co-design and user testing study. JMIR Cancer, 8(1), Article e33152. https://doi.org/10.2196/33152
Nadler, A., & Chernyak-Hai, L. (2014). Helping them stay where they are: Status effects on dependency/autonomy-oriented helping. Journal of Personality and Social Psychology, 106(1), 58. https://doi.org/10.1037/a0034152
Nguyen, H. H. D., & Ryan, A. M. (2008). Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence. Journal of Applied Psychology, 93(6), 1314. https://doi.org/10.1037/a0012702
Nikkhah, S. (2021, October). Family resilience technologies: Designing collaborative technologies for caregiving coordination in the children’s hospital. In Companion Publication of the 2021 Conference on Computer Supported Cooperative Work and Social Computing (pp. 279–282).
Nikkhah, S., John, S., Yalamarti, K. S., Mueller, E. L., & Miller, A. D. (2022). Family care coordination in the children's hospital: Phases and cycles in the pediatric cancer caregiving journey. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2), 1–30.
Organ, D. W. (1988). Organizational citizenship behavior: The good soldier syndrome. Lexington Books.
Organ, D. W. (1990). The motivational basis of organizational citizenship behavior. Research in Organizational Behavior, 12(1), 43–72.
Organ, D. W., Podsakoff, P. M., & MacKenzie, S. B. (2006). Organizational citizenship behavior: Its nature, antecedents, and consequences. Sage.
Pan, W., Sun, L. Y., & Chow, I. H. S. (2012). Leader-member exchange and employee creativity: Test of a multilevel moderated mediation model. Human Performance, 25(5), 432–451. https://doi.org/10.1080/08959285.2012.721833
Parasuraman, A., & Colby, C. L. (2015). An updated and streamlined technology readiness index: TRI 2.0. Journal of Service Research, 18(1), 59–74. https://doi.org/10.1177/1094670514539730
Perugini, M., Gallucci, M., Presaghi, F., & Ercolani, A. P. (2003). The personal norm of reciprocity. European Journal of Personality, 17(4), 251–283. https://doi.org/10.1002/per.474
Podsakoff, N. P., Whiting, S. W., Podsakoff, P. M., & Blume, B. D. (2009). Individual- and organizational-level consequences of organizational citizenship behaviors: A meta-analysis. Journal of Applied Psychology, 94(1), 122. https://doi.org/10.1037/a0013079
Podsakoff, P. M., MacKenzie, S. B., Moorman, R. H., & Fetter, R. (1990). Transformational leader behaviors and their effects on followers’ trust in leader, satisfaction, and organizational citizenship behaviors. The Leadership Quarterly, 1(2), 107–142. https://doi.org/10.1016/1048-9843(90)90009-7
Podsakoff, P. M., MacKenzie, S. B., Paine, J. B., & Bachrach, D. G. (2000). Organizational citizenship behaviors: A critical review of the theoretical and empirical literature and suggestions for future research. Journal of Management, 26(3), 513–563. https://doi.org/10.1177/014920630002600307
Podsakoff, P. M., MacKenzie, S. B., & Podsakoff, N. P. (2012). Sources of method bias in social science research and recommendations on how to control it. Annual Review of Psychology, 63, 539–569. https://doi.org/10.1146/annurev-psych-120710-100452
Preacher, K. J., Zyphur, M. J., & Zhang, Z. (2010). A general multilevel SEM framework for assessing multilevel mediation. Psychological Methods, 15(3), 209. https://doi.org/10.1037/a0020141
Riche, Y., Henry Riche, N., Isenberg, P., & Bezerianos, A. (2010). Hard-to-use interfaces considered beneficial (some of the time). In G. Fitzpatrick, S. Hudson, K. Edwards, & T. Rodden (Eds.), CHI ’10 extended abstracts on human factors in computing systems (pp. 2705–2714). Association for Computing Machinery.
Rusbult, C. E. (1980). Commitment and satisfaction in romantic associations: A test of the investment model. Journal of Experimental Social Psychology, 16(2), 172–186. https://doi.org/10.1016/0022-1031(80)90007-4
Rusbult, C. E., Kumashiro, M., Kubacka, K. E., & Finkel, E. J. (2012). The Michelangelo phenomenon. In M. P. Zanna (Ed.), Advances in experimental social psychology (Vol. 44, pp. 293–342). Academic Press.
Saville, J. D., & Foster, L. L. (2021). Does technology self-efficacy influence the effect of training presentation mode on training self-efficacy? Computers in Human Behavior Reports, 4, Article 100124. https://doi.org/10.1016/j.chbr.2021.100124
Sawyer, K. B., Thoroughgood, C. N., Stillwell, E. E., Duffy, M. K., Scott, K. L., & Adair, E. A. (2022). Being present and thankful: A multi-study investigation of mindfulness, gratitude, and employee helping behavior. Journal of Applied Psychology, 107(2), 240. https://doi.org/10.1037/apl0000903
Schurgin, M., Schlager, M., Vardoulakis, L., Pina, L. R., & Wilcox, L. (2021, May). Isolation in coordination: Challenges of caregivers in the USA. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–14).
Scott, S., & Orlikowski, W. (2022). The digital undertow: How the corollary effects of digital transformation affect industry standards. Information Systems Research, 33(1), 311–336. https://doi.org/10.1287/isre.2021.1056
Settoon, R. P., & Mossholder, K. W. (2002). Relationship quality and relationship context as antecedents of person- and task-focused interpersonal citizenship behavior. Journal of Applied Psychology, 87(2), 255. https://doi.org/10.1037/0021-9010.87.2.255
Sigman, S. J. (1991). Handling the discontinuous aspects of continuous social relationships: Toward research on the persistence of social forms. Communication Theory, 1(2), 106–127. https://doi.org/10.1111/j.1468-2885.1991.tb00008.x
Simon, J. C., Styczynski, N., & Gutsell, J. N. (2020). Social perceptions of warmth and competence influence behavioral intentions and neural processing. Cognitive, Affective, & Behavioral Neuroscience, 20, 265–275. https://doi.org/10.3758/s13415-019-00767-3
Smith, C. A., Organ, D. W., & Near, J. P. (1983). Organizational citizenship behavior: Its nature and antecedents. Journal of Applied Psychology, 68(4), 653. https://doi.org/10.1037/0021-9010.68.4.653
Snijders, T. A., & Bosker, R. J. (1994). Modeled variance in two-level models. Sociological Methods & Research, 22(3), 342–363. https://doi.org/10.1177/0049124194022003004
Sosik, V. S., & Bazarova, N. N. (2014). Relational maintenance on social network sites: How Facebook communication predicts relational escalation. Computers in Human Behavior, 35, 124–131. https://doi.org/10.1016/j.chb.2014.02.044
Sun, K., Zheng, X., & Liu, W. (2023). Increasing clinical medical service satisfaction: An investigation into the impacts of physicians’ use of clinical decision-making support AI on patients’ service satisfaction. International Journal of Medical Informatics, 176, Article 105107. https://doi.org/10.1016/j.ijmedinf.2023.105107
Tai, K., Lin, K. J., Lam, C. K., & Liu, W. (2023). Biting the hand that feeds: A status-based model of when and why receiving help motivates social undermining. Journal of Applied Psychology, 108(1), 27. https://doi.org/10.1037/apl0000580
Tang, C., Chen, Y., Cheng, K., Ngo, V., & Mattison, J. E. (2018). Awareness and handoffs in home care: Coordination among informal caregivers. Behaviour & Information Technology, 37(1), 66–86. https://doi.org/10.1080/0144929x.2017.1405073
Tang, P. M., Koopman, J., Mai, K. M., De Cremer, D., Zhang, J. H., Reynders, P., ... & Chen, I. (2023). No person is an island: Unpacking the work and after-work consequences of interacting with artificial intelligence. Journal of Applied Psychology, 108(11), 1766–1789. https://doi.org/10.1037/apl0001103
Taylor, S. H., Zhao, P., & Bazarova, N. N. (2022). Social media and close relationships: A puzzle of connection and disconnection. Current Opinion in Psychology, 45, Article 101292. https://doi.org/10.1016/j.copsyc.2021.12.004
Thompson, P. S., & Bolino, M. C. (2018). Negative beliefs about accepting coworker help: Implications for employee attitudes, job performance, and reputation. Journal of Applied Psychology, 103(8), 842. https://doi.org/10.1037/apl0000300
Toegel, G., Kilduff, M., & Anand, N. (2013). Emotion helping by managers: An emergent understanding of discrepant role expectations and outcomes. Academy of Management Journal, 56(2), 334–357. https://doi.org/10.5465/amj.2010.0512
Tong, S., & Walther, J. B. (2011). Relational maintenance and CMC. In K. B. Wright & L. M. Webb (Eds.), Computer-mediated communication in personal relationships (pp. 98–118). Peter Lang.
Vanneste, B. S., & Puranam, P. (2024). Artificial intelligence, trust, and perceptions of agency. Academy of Management Review. https://doi.org/10.5465/amr.2022.0041
Von Krogh, G. (2018). Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries, 4(4), 404–409. https://doi.org/10.5465/amd.2018.0084
Walther, J. B. (2011). Theories of computer-mediated communication and interpersonal relations. The Handbook of Interpersonal Communication, 4, 443–479.
Wang, Z., Mao, H., Li, Y. J., & Liu, F. (2017). Smile big or not? Effects of smile intensity on perceptions of warmth and competence. Journal of Consumer Research, 43(5), 787–805. https://doi.org/10.1093/jcr/ucw062
Wei, L., Kang, J., & Liu, B. (2023). What if I use help for this? Exploring normative evaluations of relationship maintenance behaviors augmented by external agency. Hawaii International Conference on System Sciences, 2441–2450.
Wellman, N., Mayer, D. M., Ong, M., & DeRue, D. S. (2016). When are do-gooders treated badly? Legitimate power, role expectations, and reactions to moral objection in organizations. Journal of Applied Psychology, 101(6), 793. https://doi.org/10.1037/apl0000094
Wojciszke, B. (1994). Multiple meanings of behavior: Construing actions in terms of competence or morality. Journal of Personality and Social Psychology, 67(2), 222. https://doi.org/10.1037/0022-3514.67.2.222
Wojciszke, B., Bazinska, R., & Jaworski, M. (1998). On the dominance of moral categories in impression formation. Personality and Social Psychology Bulletin, 24(12), 1251–1263. https://doi.org/10.1177/01461672982412001