Abstract
Generative AI (GenAI) applications, such as ChatGPT, are increasingly shaping work practices and employee engagement in organizations. Understanding how employees interact with these tools is critical for designing effective and responsible AI-enabled workplaces. This study analyzes 443,338 user reviews from the Google Play Store to examine how GenAI tools influence user satisfaction, continued use and their behaviors, which in turn impact productivity and well-being. Drawing on Sensemaking and Sensegiving theories, we develop a four-stage framework integrated into a 3E model (Envision-Evolve-Engage) comprising seven propositions. Findings highlight GenAI’s potential to enhance workplace effectiveness, decision-making and employee well-being, and to advance Sustainable Development Goal 8 (SDG 8) by promoting productive, inclusive, and meaningful work. The study also identifies challenges related to trust, privacy, adaptability, and ethical use. These insights offer practical guidance for designing user-centric GenAI systems and provide a theory-driven perspective for supporting responsible adoption and engagement in workplace contexts.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
1 Introduction
"Making humans better at work and work better for humans™"1
The integration of intelligent systems into the workplace is undergoing a profound transformation. The widespread adoption of Generative Artificial Intelligence (GenAI) is no longer limited to automating routine tasks or supporting isolated workflows (Amankwah-Amoah et al., 2024; Benbya et al., 2024; Brynjolfsson et al., 2025). These technologies are now emerging as collaborative agents, or "AI teammates," that complement rather than replace human capabilities (Flathmann et al., 2023). This shift signifies a redefinition of how work is conceptualized, organized, and experienced: as AI teammates, these systems are increasingly embedded in work processes and can interact with humans in real time. Organizations are witnessing a shift toward collaborative intelligence, wherein humans and AI systems co-create value through interaction and mutual learning (Nguyen et al., 2025). The GenAI market is expected to reach USD 66.89 billion by 2025 and grow to USD 442.07 billion by 2031 at a CAGR of 36.99% (Statista Market Insights, 2025). Furthermore, it is projected that by 2028 an estimated 75% of enterprise software engineers will use AI code assistants (Stamford, 2024). A report by Goldman Sachs estimates that GenAI could affect up to 300 million jobs globally, with 7% of roles potentially automated, 63% augmented, and 30% remaining unaffected (Goldman Sachs, 2023). These figures highlight the growing importance of GenAI not only as a technological innovation but also as a catalyst reshaping the nature of work itself.
GenAI tools produce contextually relevant, human-like responses, making them both accessible and intuitive for non-technical users (Banh & Strobel, 2023; Feuerriegel et al., 2024). Their use in content generation, dialogue simulation, and decision support is transforming work by enabling more adaptive and innovative organizations (Banh & Strobel, 2023; Kumar et al., 2023; Talwar et al., 2022). Unlike earlier AI systems that required code-intensive interfaces, contemporary GenAI-based tools such as ChatGPT, Google Duet AI, and Microsoft Copilot democratize access, offering professionals real-time support for tasks like document drafting, coding, and knowledge-intensive decision-making (Bankins et al., 2024; Hönigsberg et al., 2025; Jarrahi, 2018; Langer & Landers, 2021). Their integration has yielded productivity gains, for instance a 15% improvement in customer service performance among less experienced workers (Brynjolfsson et al., 2025). Yet it also raises concerns about job displacement, skill obsolescence, and ethical risks (Amankwah-Amoah et al., 2024; Bankins et al., 2024; Brynjolfsson et al., 2025). Studies highlight mixed outcomes: while GenAI can enhance collaboration, it may also exacerbate alienation under high digital demands (Hai et al., 2025). These contrasting findings suggest that the impact of GenAI depends not only on technical sophistication but also on how these tools are perceived, implemented, and integrated into routine work practices.
Despite the growing academic interest in GenAI adoption, much of the existing literature emphasizes macro-level outcomes such as labor market shifts, productivity gains, and strategic capabilities (Brynjolfsson et al., 2025; Hui et al., 2024; Jia et al., 2024). While valuable, these studies often overlook micro-level user interactions that shape everyday work practices. Earlier research has examined gender disparities in AI adoption (Bankins et al., 2024) and the psychological effects of automation (Benbya et al., 2024), but much of it predates GenAI and thus fails to capture the unique socio-cognitive dynamics introduced by large language models. Experimental studies highlight efficiency gains from tools like GitHub Copilot and ChatGPT (Brynjolfsson et al., 2025), yet such controlled settings overlook the interpretive and relational processes through which employees engage with GenAI in practice. Although work on AI-based teammates has emerged (Maedche et al., 2019; Siemon, 2022), there remains limited empirical research on how users construct and share meanings around GenAI in real-world contexts. This study addresses this gap by examining 443,338 user reviews of ChatGPT through the lenses of sensemaking and sensegiving. By doing so, it highlights not only how individuals interpret GenAI but also how they shape others’ expectations, offering a socio-cognitive perspective on adoption in practice. ChatGPT was chosen as the focus because of its versatility, accessibility, and widespread adoption, with 52 million global downloads in April 2025 (Backlinko, 2024). Adopting a design science approach, the study employs Natural Language Processing (NLP) techniques, including topic modeling, k-means, and hierarchical clustering, to identify common patterns in user experiences. The analysis is guided by the following broad research question:
How do employees’ sensemaking and sensegiving shape their engagement with GenAI at work?
To answer this, the study uses the sensemaking and sensegiving framework as an interpretive lens. Topic modeling and clustering provide an empirical foundation by surfacing latent themes, which are then interpreted through sensemaking and sensegiving to build a socio-cognitive account of GenAI adoption. Compared to dominant models such as Job Characteristics, Conservation of Resources, and Resource Orchestration (Bankins & Formosa, 2023; Dwivedi et al., 2024; Peretz-Andersson et al., 2024), this interpretive-computational integration offers a distinct contribution by centering human meaning-making and influence processes. By proposing the Envision-Evolve-Engage (3E) framework, this research synthesizes cognitive, emotional, and relational factors (such as perceived autonomy, trust in AI, and AI mindfulness) that shape how employees work with AI teammates and adapt to evolving job roles. While earlier studies have addressed some of these elements (Cheng et al., 2022; Kong et al., 2024; Ulfert et al., 2024), few have combined them into a unified, empirically grounded framework that captures how employees engage with GenAI in everyday work settings.
This study offers multiple contributions to both academic literature and practice. First, it advances understanding by applying sensemaking and sensegiving to large-scale user-generated data, demonstrating how individuals construct and communicate meaning in real-world GenAI use through continuous engagement. It enriches the Human-AI interaction literature by revealing how users develop emotional attachment, trust and adaptive routines with GenAI, emphasizing the cognitive and social adaptation required in the future of work. Second, it offers practical implications through the Envision-Evolve-Engage (3E) framework, which provides actionable guidance for designing user-centered, emotionally intelligent GenAI tools that promote sustainable and inclusive work. This directly supports UN Sustainable Development Goal (SDG) 8 (Decent Work and Economic Growth) by showing how GenAI can enable flexible, meaningful work when aligned with human needs. Finally, the study contributes methodologically by demonstrating how computational techniques, when paired with interpretive theory lenses, can support scalable theory-building from large-scale user reviews. By grounding the analysis in user-generated data and theory, this research contributes to both academic understanding and practical guidance for implementing GenAI in ways that support human agency and mitigate associated risks.
2 Background
2.1 Use of GenAI Applications in the Workplace
The increasing integration of GenAI into organizational settings is transforming how work is performed. Unlike earlier forms of automation or general-purpose AI models, GenAI models, particularly large language models (LLMs) like ChatGPT, introduce qualitatively different capabilities. These models are built on deep learning architectures such as the Transformer and trained on vast amounts of textual data using self-supervised learning (Goodfellow et al., 2014; LeCun et al., 2015). This training enables them to learn linguistic patterns, semantics, and contextual relationships. ChatGPT, based on OpenAI's GPT architecture, utilizes billions of parameters to generate coherent and contextually appropriate responses to user prompts. It operates through token-based prediction, where each word or subword token is generated sequentially by estimating the probability distribution over the vocabulary, conditioned on previous tokens (Feuerriegel et al., 2024; Goodfellow et al., 2014). Modern LLMs are fine-tuned using techniques such as reinforcement learning from human feedback to align their outputs with human preferences and ethical guidelines (Ouyang et al., 2022). This general-purpose design enables ChatGPT to perform diverse language tasks such as summarization, translation, question answering, and creative writing, often without task-specific training; their effectiveness comes from the ability to generalize knowledge learned during training. GenAI tools generate human-like content, handle complex queries, and engage in adaptive communication. These features position GenAI not only as a support tool but as a potential collaborator or "AI teammate" in the workplace (Flathmann et al., 2023; Siemon, 2022; Ulfert et al., 2024). Prior research on human-AI interaction highlights foundational factors such as task-technology fit, system quality, and user trust for effective collaboration in the workplace (Goodhue & Thompson, 1995).
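The token-by-token generation described above can be sketched in a few lines: hypothetical logits over a toy three-word vocabulary are normalized with softmax into a probability distribution, and greedy decoding picks the most likely next token. The prompt, vocabulary, and logit values are invented for illustration; a real LLM scores tens of thousands of subword tokens at each step.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    z = sum(exps.values())
    return {tok: e / z for tok, e in exps.items()}

# Hypothetical logits for candidate next tokens after the prompt
# "The meeting is scheduled for" (illustrative values only).
logits = {"Monday": 4.1, "tomorrow": 3.7, "banana": -2.0}
probs = softmax(logits)
next_token = max(probs, key=probs.get)  # greedy decoding picks the argmax
```

In practice, sampling strategies such as temperature or nucleus sampling replace the simple argmax, trading determinism for diversity.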
These studies underscore the importance of clarity of purpose, adequate user skills, and contextual alignment in driving adoption (Figueroa-Armijos et al., 2023; Gkinko & Elbanna, 2023; Hasija & Esper, 2022). Emerging GenAI-specific research emphasizes its potential to enhance innovation, productivity, and job autonomy (Brynjolfsson et al., 2025; Kong et al., 2024), while simultaneously raising concerns about hallucinations, algorithmic bias, accountability, transparency, and intellectual property (Anthony et al., 2023; Ooi et al., 2023). Moreover, GenAI introduces unique socio-cognitive demands by increasing the need for new skills, creating job crafting opportunities, and sometimes contributing to cognitive overload (Hai et al., 2025; Ritz et al., 2025). Psychological barriers such as algorithmic aversion also persist, especially among experts (Wang et al., 2024). From an organizational perspective, GenAI may function either as an “algorithmic cage” that constrains autonomy (Bankins et al., 2024) or as an “algorithmic colleague” that enhances decision-making, depending on the surrounding culture. In team settings, GenAI systems are increasingly framed as potential collaborators, though their acceptance depends on being perceived as competent yet limited in scope, rather than omniscient authorities (Maedche et al., 2019). Prior research in crisis-mapping demonstrates that volunteers construct plausible interpretations from sparse cues (Valecha et al., 2025). Similarly, employees in contemporary workplaces now engage in joint human-AI sensemaking, where LLM outputs meaningfully influence how tasks, problems, and decisions are framed. This interpretive interdependence amplifies both the opportunities and the risks associated with GenAI adoption.
The use of GenAI in knowledge work also raises persistent challenges around data privacy and organizational governance. While these systems can analyze and synthesize structured and unstructured data (e.g., documents, emails, video transcripts), they may expose sensitive knowledge to external parties via feedback loops (Benbya et al., 2024). This has led firms such as Samsung and JP Morgan to restrict GenAI use internally (Mok Aaron, 2023; Ray Siladitya, 2023). At the individual level, adoption patterns vary by gender, expertise, and professional context (Bankins et al., 2024; Kolbjørnsrud, 2024). For example, in healthcare, GenAI is seen as having the potential to support professionals by improving diagnostic accuracy and enabling self-assessment through mobile technologies (Kolbjørnsrud, 2024). However, even when GenAI is perceived to improve performance, its adoption depends on whether users believe the technology is easy to understand and operate (Chatterjee et al., 2021). Academic literature highlights that the successful use of GenAI in the workplace depends on more than its technical capabilities. Human factors, such as trust, cognitive load, organizational culture, and perceived usefulness, play a critical role in shaping outcomes (Cheng et al., 2017). Theoretical perspectives such as hybrid intelligence and human-AI symbiosis suggest that combining human intuition and emotional intelligence with AI’s computational power can produce superior outcomes (Bartelheimer et al., 2025; Siemon, 2022). In conclusion, we present a summary of recent research, along with the methodologies and theoretical frameworks used in academic literature, to offer insights into the evolving future of work enabled by technology (see Table A1 in the Appendix).
2.2 Sensemaking and Sensegiving Framework
The concepts of sensemaking and sensegiving offer a valuable lens for understanding how employees interpret and adapt to the introduction of new technologies and tools like GenAI in the workplace. Sensemaking, as introduced by Weick (1995), refers to the process through which individuals construct meaning when facing novel, uncertain, or complex situations. It involves interpreting cues, forming mental models, and acting based on those interpretations. This process occurs not only at the individual level but also at group and organizational levels (Weick, 2005). In organizations, sensemaking helps employees understand how new technologies, like GenAI, fit into their roles and workflows. Sensegiving, on the other hand, is the intentional effort by individuals, leaders, managers, or system designers to influence how others understand a situation or technology (Gioia & Chittipeddi, 1991). It involves sharing visions, framing new initiatives, and guiding others toward desired interpretations. In the context of GenAI adoption, sensegiving plays a key role in shaping how employees perceive the usefulness, reliability, and purpose of AI teammates. Together, sensemaking and sensegiving form a dynamic cycle through activities such as envisioning, signaling, revisioning, and energizing (Gioia & Chittipeddi, 1991). For example, during technology implementation, users continuously revise their understanding based on new experiences and leadership narratives (Vaara et al., 2016). GenAI systems are inherently equivocal; their outputs can be interpreted in multiple ways, making sensemaking essential (Weick, 1995). Technological design features, such as categorization mechanisms, structure collective sensemaking by improving information quality (Valecha et al., 2025).
Applying this framework allows us to explain how users continuously adjust their expectations of GenAI tools like ChatGPT, while managers and designers shape these interpretations through training, narratives, and system affordances.
This aligns with information systems research showing that technology sensemaking helps users develop beliefs and practices around new tools (Mesgari & Okoli, 2019; Urquhart et al., 2024). Managers support this through sensegiving practices such as training, storytelling and vision-setting (Tong et al., 2015). Applying the sensemaking-sensegiving framework enables a deeper understanding of how individuals and organizations co-construct meaning during GenAI adoption. This perspective highlights that human-AI collaboration is not only technical but also social and interpretive, evolving through communication, experience, and reflection. Building on this, we identified several stages and factors shaping sensemaking and sensegiving in user engagement with GenAI. While sensemaking and sensegiving offer a strong overarching framework, the stages often manifest through specific psychological and technological constructs. To capture these nuances, we draw on established theories such as Task-Technology Fit (TTF), IT Mindfulness, Social Influence Theory (SIT), Adaptive Structuration Theory (AST), and IS Continuance Theory. These theories provide validated constructs that link abstract sensemaking and sensegiving dynamics to measurable factors. For example, Task-Technology Fit and Perceived Usefulness help to operationalize the cognitive evaluation of how well GenAI aligns with individual goals, while constructs such as Trust and AI Mindfulness illuminate the emotional and behavioral adaptations that shape subsequent sensegiving. Table 1 summarizes these constructs and definitions, adapted from the theoretical foundations.
Table 1
Connotations of constructs with their theoretical frameworks, definitions, and sources
(Columns: Factors | Theoretical Lens | Improved Operational Definition | Key Source(s))

Envision (Theoretical lens: Sensemaking Theory): The initial stage of sensemaking, where users evaluate the potential of AI tools to enhance task effectiveness and learning outcomes.
Examines how organizational memory (declarative, procedural, and emotional) facilitates the development of technology sensemaking capability, which in turn impacts innovation. Sensemaking is operationalized through the lens of memory and environmental turbulence.
Investigates how senior managers use sensemaking and sensegiving across multiple acquisition projects to create shared meaning and align program goals. Emphasizes the sequencing and interdependence of sensemaking across milestones.
Identifies 12 sensemaking patterns used by professionals to evaluate virtual world technologies. Highlights rhetorical structures, including confirmation and demographic factors, in constructing organizational value.
Explores strategic and operational sensegiving in telehealth services, showing how structured sensegiving templates guide service providers and consumers to reduce ambiguity and emotional ambivalence in technology use.
Analyzes the micro-processes of sensemaking and sensegiving around implementing Intellectual Capital (IC) measurement systems. Demonstrates how different styles (guided, fragmented) lead to varied organizational outcomes, and emphasizes the role of leadership.
Explores how managers construct and adapt technological frames during foresight workshops. Identifies cycles of perceived sensegiving and sensebreaking, and the reversion to initial frames due to organizational inertia.
Proposes an ecological approach to IT sensemaking that integrates user action, material artefacts, and affordances. Moves beyond socio-cognitive models to emphasize real-actual-empirical stratification.
Proposes the Affordance-Based IT Sensemaking (ABITS) framework, where sensemaking is shaped by users’ perceptions of IT affordances and their actualization. Focuses on the ontological configuration between affordances and user adaptation.
Proposes a process model where decision-makers and data analysts engage in alternating cycles of sensemaking and sensegiving while interacting with business analytics tools. Study highlights episodic interactions and reciprocal shaping of meaning.
Shows collective prospective sensemaking in regulatory construction using abstraction and elaboration to reconcile diverse actor perspectives and build a shared understanding of emerging technologies.
Investigates how organizational actors use metaphors (person, robot, tool) to engage in sensemaking and deliberate sensegiving during RPA implementation. Demonstrates how metaphors shape roles and work practices.
Synthesizes various conceptualizations and uses of sensemaking across information science, education, and organizational studies. Differentiates Dervin’s SMM and Weickian organizational sensemaking.
Demonstrates how individuals use dating spreadsheets to make sense of relational data through processes of problematizing, tailoring, and sensemaking, restoring control over digital intimacy.
Develops a client-vendor technology selection sensemaking model featuring immediate, retrospective, and decision cycles. Introduces sense-breaking as a result of disruptive, conflicting vendor signals.
This study
Utilizes Sensemaking and Sensegiving Theory to develop a holistic understanding of how GenAI-based tools like ChatGPT are perceived, interpreted, and communicated in real-world work contexts. This perspective offers insights into the socio-cognitive processes through which employees make sense of ChatGPT’s usefulness, adapt it into their workflows and engage in communicative acts that influence others’ perceptions. The theoretical foundation highlights how meaning is constructed, revised and propagated across user communities, especially as these technologies evolve.
Integrating these constructs ensures theoretical rigor, enables finer-grained explanations of user engagement, and situates the sensemaking-sensegiving framework within the broader IS literature. In conclusion, we draw on prior studies to demonstrate the diverse applications of the theory in explaining technology use across contexts, and we show how the theory has been applied in this study to generate insights into how GenAI technologies are shaping the future of work (see Table 2).
3 Methodology
This study addresses the stated research objectives by analyzing publicly available user reviews of the ChatGPT mobile application. Adopting an interpretivist epistemological stance, it views meaning as constructed through users' interactions with technology (Becker & Niehaves, 2007). A design science approach is applied to explore how individuals interpret and derive meaning from GenAI in work contexts, and the emergent themes from this analysis are interpreted through the theoretical lenses of sensemaking and sensegiving. We employed a Natural Language Processing (NLP)-based text analytics algorithm to derive insights from these reviews based on a computationally intensive theory-building framework. Through text mining, key factors shaping user experiences were identified and their relationships explained. This abductive, computational-qualitative approach integrates exploratory methods, facilitating the extraction of data-driven insights. Previous studies by Kar (2021) and Kar and Kushwaha (2021) have similarly analyzed user-generated content to examine user experiences. Figure 1 illustrates the consolidated research design employed in this study.
This study utilizes a combination of content analysis and text summarization techniques. The content analysis process begins with data cleaning and preprocessing, followed by the application of clustering methods to group theoretically relevant lexicons, enabling an initial exploration of key constructs. Text summarization is conducted through topic modelling, which helps identify thematic patterns within the textual dataset. The clustering results further reinforce and contextualize the insights generated through topic modeling. These emergent themes are then aligned with relevant constructs drawn from the guiding theoretical framework.
3.1 Data Collection
This study extracted public reviews of a GenAI-based teammate application, ChatGPT, from the Google Play Store. Among the range of GenAI tools increasingly integrated into workplace settings, including Microsoft Copilot, Google Gemini, Grammarly, and GitHub Copilot, this study focuses on ChatGPT due to its general-purpose design, high accessibility, and widespread adoption. ChatGPT was the most downloaded app worldwide in April 2025, with 52 million downloads (Backlinko, 2024). As a large language model, ChatGPT enables users to generate human-like text, code, images, and reasoning-based outputs, and can even suggest follow-up prompts in real time. It operates across multiple platforms (mobile, desktop, and via API) and is being progressively embedded into enterprise software systems. These features make ChatGPT a highly versatile digital assistant, applicable across various roles and industries. Moreover, the mobile version of ChatGPT has been downloaded millions of times globally, providing a rich and diverse dataset for exploring real-world engagement with GenAI in professional contexts. The Google Play Store was selected over the Apple App Store due to the significantly larger user base of Android devices. As of 2024, Android commands a global market share of 72% with over 3 billion active users, while iOS holds a 28% market share with 1.382 billion users (Backlinko, 2024; Webb Maria & Pankratyeva Alexandra, 2024). This selection allows for the incorporation of a more diverse range of perspectives. Reviews are a valuable source of data, capturing public perceptions of the nuances and artifacts of the ecosystem. The google_play_scraper Python package was utilized to extract reviews from the ChatGPT Android application. A total of 887,567 reviews were collected up to May 31, 2025.
Fig. 1
Research methodology flowchart adopted in this study
Following the extraction of user reviews, we implemented NLP-based techniques to refine the reviews and establish the theoretical lexicons (Kar & Dwivedi, 2020). After eliminating duplicate reviews and those lacking textual content, our dataset comprised 443,338 reviews. We conducted comprehensive text pre-processing, which involved removing special characters, numbers, regular expressions, and standard stop words. Generic stop word lists commonly used in studies often fail to account for domain-specific language (Alshanik et al., 2020). Techniques used in prior studies to define corpus-specific stopwords, such as author-based keywords from a database or frequency pruning (Baradad & Mugabushaka, 2015), are less suitable for user-generated reviews: Scopus terms rarely appear in informal user texts, while frequency thresholds risk removing semantically meaningful words. Hence, we created a custom stop word list of corpus-specific terms such as "chatgpt," "chat_gpt," "app," "application," "AI," "OpenAI," "android," "tool," "version," "software," "chatbot," "update," "iOS," "mobile," and similar frequently occurring but uninformative words. Removing these terms improved the quality of the analysis by reducing noise and focusing on contextually meaningful content. Subsequently, the text was tokenized into individual words, and Named Entity Recognition (NER) and part-of-speech filtering were applied using the spaCy library to retain meaningful entities and linguistically significant terms before lemmatization. The tokens were then lemmatized, and bigram and trigram models from Gensim were employed to finalize the lexicons for analysis. Effective text preprocessing is critical in text mining, as it reduces dimensionality and enhances computational efficiency.
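A minimal, standard-library sketch of the cleaning steps described above (the actual pipeline additionally used spaCy for NER and part-of-speech filtering and Gensim for lemmatized bigram/trigram models; the generic stop word list here is abbreviated for illustration):

```python
import re

# Custom stop words mirror the corpus-specific list described in the text;
# the generic list is a tiny illustrative subset of a standard list.
CUSTOM_STOPWORDS = {"chatgpt", "chat_gpt", "app", "application", "ai",
                    "openai", "android", "tool", "version", "software",
                    "chatbot", "update", "ios", "mobile"}
GENERIC_STOPWORDS = {"the", "is", "a", "an", "and", "it", "this", "for", "of"}

def preprocess(reviews):
    """Drop empty/duplicate reviews, strip non-letters, remove stop words."""
    seen, docs = set(), []
    for text in reviews:
        text = (text or "").strip().lower()
        if not text or text in seen:            # empty or duplicate review
            continue
        seen.add(text)
        text = re.sub(r"[^a-z\s]", " ", text)   # remove numbers/special chars
        tokens = [t for t in text.split()
                  if t not in CUSTOM_STOPWORDS and t not in GENERIC_STOPWORDS]
        docs.append(tokens)
    return docs

docs = preprocess([
    "This app is great for drafting emails!",
    "This app is great for drafting emails!",   # duplicate, removed
    "",                                          # empty, removed
    "ChatGPT helps me write code 24/7.",
])
```

In the real pipeline, the resulting token lists would then be lemmatized and passed through Gensim's Phrases model to form bigrams and trigrams.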
4 Exploratory Study
In the previous section, we outlined the procedures used for data collection and preprocessing, including the extraction of theoretical lexicons from the processed textual dataset. This section focuses on analyzing the text data using clustering techniques along with a Latent Dirichlet Allocation (LDA)-based topic modeling approach to interpret the extracted lexicons and align them with relevant theoretical constructs. For example, one of the user reviews states:
“Whether I need help drafting emails, brainstorming ideas, understanding complex topics, or even writing code, this app delivers consistently accurate and helpful responses.”
After preprocessing (tokenization, stopword removal, and lemmatization), key tokens such as “drafting emails,” “brainstorming ideas,” “writing code,” “understanding complex topics,” and “consistently accurate responses” were extracted. Topic modeling and clustering grouped these into the broader theme of productivity and usefulness, which was then aligned with constructs such as Task-Technology Fit (support for diverse work-related tasks) and Perceived Usefulness (reliable and helpful outputs). These constructs were subsequently mapped to the processes of sensemaking and sensegiving, illustrating how users interpret ChatGPT as a dependable tool while simultaneously shaping others’ expectations through their reviews.
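The mapping step can be illustrated with a toy lookup. The construct names below come from the paper, but the keyword sets and the simple set-overlap rule are hypothetical simplifications of the semantic-correspondence assessment, not the study's actual coding scheme.

```python
# Hypothetical keyword sets per construct (illustrative only).
CONSTRUCT_KEYWORDS = {
    "Task-Technology Fit": {"drafting_emails", "writing_code",
                            "brainstorming_ideas"},
    "Perceived Usefulness": {"accurate_responses", "helpful_responses"},
}

def map_tokens_to_constructs(tokens):
    """Return constructs whose keyword sets overlap the extracted tokens."""
    hits = {}
    for construct, keywords in CONSTRUCT_KEYWORDS.items():
        matched = keywords & set(tokens)
        if matched:
            hits[construct] = sorted(matched)
    return hits

# Tokens extracted from the sample review quoted above.
tokens = ["drafting_emails", "brainstorming_ideas", "writing_code",
          "accurate_responses"]
mapped = map_tokens_to_constructs(tokens)
```

The study performed this alignment interpretively against established construct items rather than with a fixed lookup, but the sketch shows the direction of the mapping: from review tokens to theory-level constructs.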
4.1 Content Analysis
Textual data can be summarized using a variety of analytical methods. Prior studies have utilized principal component analysis for such summarization (Yadav et al., 2023). Topic modeling is widely recognized as a suitable method (Kar & Dwivedi, 2020; Miranda et al., 2022). Additionally, clustering techniques (see Fig. 3) can be employed to complement topic modeling by verifying and enriching the interpretation of the derived topics (Eryilmaz et al., 2022).
Following tokenization and lemmatization, we applied a clustering approach that organizes lexicons based on semantic similarity. Specifically, we used a combination of K-means and hierarchical clustering. K-means helped determine the optimal number of clusters (refer to Fig. 2), which was then used to guide hierarchical clustering (Fig. 3). The lexicons were progressively grouped into broader clusters, supporting the development of preliminary theoretical insights based on lexical similarity.
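The elbow-style selection of the number of K-means clusters can be sketched as follows. This is a toy illustration under stated assumptions: the 2-D points stand in for lexicon similarity vectors, and the deterministic hand-rolled K-means (first-k initialization) replaces a library implementation; the chosen k would then seed the hierarchical clustering step.

```python
# Toy points forming two well-separated groups (stand-ins for lexicon vectors).
points = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]

def kmeans_wcss(pts, k, iters=20):
    """Run K-means with deterministic init; return within-cluster sum of squares."""
    centroids = pts[:k]  # deterministic: first k points as initial centroids
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pts:  # assign each point to its nearest centroid
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        # Recompute centroids; keep the old one if a cluster is empty.
        centroids = [(sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
                     if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return sum((p[0] - centroids[j][0]) ** 2 + (p[1] - centroids[j][1]) ** 2
               for j, cl in enumerate(clusters) for p in cl)

wcss = {k: kmeans_wcss(points, k) for k in (1, 2, 3)}
# The "elbow": WCSS drops sharply from k=1 to k=2, then flattens,
# so k=2 is selected and passed on to hierarchical clustering.
```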
We employed Latent Dirichlet Allocation (LDA) to extract theoretical lexicons by identifying patterns in word co-occurrence and probabilistic distributions across the corpus (Blei et al., 2003). In recent years, advanced approaches such as Non-negative Matrix Factorization (NMF) (Vangara et al., 2021), BERTopic (Xu et al., 2022), and embedding-based models using Doc2Vec or transformer architectures have gained popularity (Budiarto et al., 2021). While these methods produce semantically rich outputs, they often lack transparency, are difficult to replicate, and pose challenges for theory-driven interpretation, which is an essential focus in Information Systems research.
In contrast, LDA is one of the most widely used probabilistic topic models, offering interpretable topic-word distributions that align closely with theory-driven inquiry (Hannigan et al., 2019). This approach enabled us to identify underlying latent themes within the textual data. To determine the optimal number of topics, we tested values ranging from 1 to 40 and selected four topics based on the highest observed coherence score of 0.5397. To enhance rigor, we complemented LDA with k-means and hierarchical clustering, ensuring a balance between robustness, transparency and interpretability. Finally, we conducted a detailed analysis of each topic to identify the primary themes and derive relevant subtopics.
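The coherence-based selection of the topic count can be sketched as follows. This is a self-contained illustration, not the study's actual pipeline: the study used Gensim LDA with c_v coherence, whereas this sketch uses scikit-learn's `LatentDirichletAllocation` together with a hand-rolled UMass-style coherence as a lightweight stand-in, and the corpus and sweep range (2-4 instead of 1-40) are toy.

```python
from math import log

import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = ["drafting emails saves time", "brainstorming ideas saves time",
        "writing code debugging code", "debugging code writing code",
        "voice interface easy use", "interface easy use voice"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)
dtm_bin = X.toarray() > 0  # binary document-term matrix for co-occurrence counts

def umass_coherence(top_ids, dtm):
    """UMass-style coherence: sum of log((D(wi, wj) + 1) / D(wj)) over top-word pairs."""
    score = 0.0
    for i in range(1, len(top_ids)):
        for j in range(i):
            co_docs = int(np.sum(dtm[:, top_ids[i]] & dtm[:, top_ids[j]]))
            base_docs = int(np.sum(dtm[:, top_ids[j]]))
            score += log((co_docs + 1) / base_docs)
    return score

coherence = {}
for k in range(2, 5):  # toy sweep; the study tested 1 to 40 topics
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    topic_scores = [umass_coherence(comp.argsort()[::-1][:4], dtm_bin)
                    for comp in lda.components_]
    coherence[k] = sum(topic_scores) / k  # mean coherence across topics

best_k = max(coherence, key=coherence.get)  # number of topics with highest coherence
```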
Integrating clustering with topic modeling offers a cohesive analytical strategy to summarize large volumes of text and identify theoretically relevant lexicons. The preliminary groupings generated through similarity-based clustering support the interpretation of latent themes extracted via topic modeling. However, the thematic outputs require systematic alignment with the conceptual constructs outlined in Table 2. To ensure theoretical rigor, we assessed the semantic correspondence between topic keywords and the items within established constructs (Basu et al., 2024). Prominent keywords associated with each factor are illustrated in Fig. 4 and Figure A4 in the appendix through word clouds, while the complete mapping of topics to theoretical constructs is detailed in Appendix Tables A2 and A3. The wordcloud library in Python was used to visualize textual data by generating frequency-based graphical representations, where word size reflects its relative importance in the corpus.
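At its core, the word-cloud step reduces to a term-frequency computation over the tokens mapped to each factor. A minimal sketch is shown below; the tokens are illustrative, and the commented line indicates how the frequencies would be handed to the wordcloud library for rendering.

```python
from collections import Counter

# Illustrative lemmatized tokens mapped to one factor (perceived usefulness).
tokens = ["useful", "helpful", "useful", "great", "amazing", "useful", "helpful"]

freqs = dict(Counter(tokens))  # word -> frequency; word size in the cloud tracks this value
# WordCloud().generate_from_frequencies(freqs)  # rendering step via the wordcloud library
```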
The hierarchical clustering output is visualized in Fig. 3, where lexicons are grouped based on semantic proximity. For instance, terms such as “highly_versatile,” “timely_response,” and “save_time_efforts” appear within the same cluster. However, clustering alone cannot identify hidden thematic structures or estimate the topic prevalence across individual reviews. To address this, we utilized an LDA-based topic modeling technique. Figure 4 presents key themes and associated keywords as identified by this model. For example, a cluster featuring terms like “accurate,” “outstanding,” and “exceptional” reflects users’ perceptions of GenAI’s reliability and trustworthiness. This process was repeated to identify all relevant factors.

Drawing from the topic modeling outcomes and guided by Sensemaking and Sensegiving theories, the study introduces the Envision-Evolve-Engage (3E) framework as shown in Fig. 5. This model conceptualizes user engagement with GenAI applications (e.g., ChatGPT) as a dynamic process structured around two mechanisms:
Sensemaking: Internal cognitive framing by the user based on their experience and expectations.
Sensegiving: External communication that frames AI use for others or reinforces certain perspectives.
User Engagement is an outcome measure that reflects the success of AI adoption and usability, captured through users’ continued usage, satisfaction and recommendations to others. It can be measured using the rating score of a review.
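One way to operationalize this outcome measure can be sketched as follows. The per-review topic distributions and star ratings below are hypothetical: each review is assigned its dominant LDA topic, and the mean rating per dominant theme serves as a simple engagement proxy.

```python
# Hypothetical per-review topic distributions (from LDA) and star ratings (1-5).
reviews = [
    {"topics": [0.7, 0.2, 0.1], "rating": 5},  # dominated by topic 0
    {"topics": [0.6, 0.3, 0.1], "rating": 4},
    {"topics": [0.1, 0.8, 0.1], "rating": 2},  # dominated by topic 1
    {"topics": [0.2, 0.1, 0.7], "rating": 5},  # dominated by topic 2
]

def dominant_topic(dist):
    """Index of the topic with the highest probability for a review."""
    return max(range(len(dist)), key=lambda i: dist[i])

# Average rating per dominant theme as an engagement proxy.
by_topic = {}
for r in reviews:
    by_topic.setdefault(dominant_topic(r["topics"]), []).append(r["rating"])
engagement = {t: sum(rs) / len(rs) for t, rs in by_topic.items()}
# engagement maps each theme to the mean star rating of its reviews.
```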
Fig. 4
Word cloud Visualization of relevant keywords from topic modelling mapped to theoretical factors
Based on the study’s findings, we map the four activities of sensemaking and sensegiving (envisioning, signaling, revisioning, and energizing) onto the Envision-Evolve-Engage (3E) framework. This integration connects micro-level user actions with the broader process of technology adoption. Envisioning corresponds to the Envision stage, where users articulate potential applications of ChatGPT (e.g., drafting emails, brainstorming ideas). Within the Evolve stage, signaling occurs when users share experiences that shape others’ expectations, while revisioning emerges in reviews where users iteratively reinterpret the tool’s capabilities in light of successes or limitations. Finally, energizing aligns with the Engage stage, where users express enthusiasm, satisfaction, and motivation for continued use, often encouraging others to adopt the tool. A detailed explanation of each construct is provided below.
Fig. 5
Envision-Evolve-Engage (3E) conceptual framework for user engagement
4.2.1 Envisioning
In the envisioning phase of sensemaking, users begin to form expectations about how GenAI tools such as ChatGPT can support their work-related activities. These expectations are shaped by three interrelated components: perceived usefulness, cognitive framing and task characteristics. Collectively, these elements guide how users interpret the relevance of GenAI to their roles and signal these interpretations to others, thereby fostering shared understanding and trust (Gioia & Chittipeddi, 1991).
Lexical patterns observed in user reviews, such as learn, study, easy, and love, suggest that users perceive ChatGPT as a cognitive aid that facilitates learning and self-development. These expressions indicate cognitive framing, wherein individuals mentally project how AI can enhance their knowledge acquisition and future capabilities (Weick, 1995). Additional terms such as helpful, understand, recommend and answer_question further reflect this framing, portraying ChatGPT not merely as a tool but as an assistive knowledge partner. This perspective explains GenAI’s role in reducing cognitive load by offering explanations and enabling complex problem-solving. Moreover, expressions like useful, good, amazing and great align with the construct of perceived usefulness (Davis, 1989), highlighting users’ positive assessments of ChatGPT’s ability to enhance learning outcomes, task performance and overall efficiency. For instance, users frequently noted that the tool “saves hours of work” or “improves the quality of reports.” These observations are consistent with the technology acceptance literature, which emphasizes perceived usefulness as a key driver of early adoption. The frequent appearance of task-related terms such as help, work, ask, study and problem also reflects task characteristics, indicating that users evaluate ChatGPT instrumentally, based on its ability to support task completion and workplace productivity (Goodhue & Thompson, 1995). Some reviews emphasized this task focus explicitly, with comments like “great for drafting emails” or “very effective at solving coding problems.”
Together, these lexical patterns suggest empirical themes such as: (i) users view GenAI as a cognitive aid for learning and self-development, (ii) they perceive it as useful for enhancing efficiency and performance, and (iii) they evaluate it in terms of its ability to support diverse work tasks. It highlights that envisioning is not abstract but grounded in observable user experiences and evaluations, which provide a foundation for theorizing about early adoption mechanisms. Envisioning involves forming mental models of GenAI’s utility, alignment with task requirements and contribution to personal and professional growth. Design features embedded in digital systems can structure interpretive processes by shaping how information is organized and evaluated, thereby supporting collective sensemaking (Valecha et al., 2025). These early frames lay the foundation for ongoing sensegiving and sustained engagement. As users communicate their evaluations through signaling behaviors, they contribute to the development of shared meaning, credibility and eventual integration of GenAI into work routines. Based on this, we propose:
P1: In the envisioning phase, users’ initial expectations of GenAI tools in the workplace are shaped by perceived usefulness, cognitive framing and task characteristics. These expectations encourage users to share their understanding with other users through signaling mechanisms that build trust and credibility.
4.2.2 Signaling
During the signaling phase of sensegiving, users actively shape others’ perceptions of GenAI tools like ChatGPT by sharing evaluative judgments and usage experiences. This process extends beyond individual cognition, engaging social and technological mechanisms that foster shared understanding. Our analysis reveals four interrelated components central to this signaling dynamic: trust and credibility, peer signaling, technological affordances and value-based motivation. Together, these elements influence how users assess the relevance of GenAI, prompting them to revisit and refine prior sensemaking efforts (Gioia & Chittipeddi, 1991; Weick, 1995) and deepening their engagement with the technology.
Trust and Credibility emerged as key outcomes of user evaluations. Words such as accurate, outstanding, and exceptional reflect more than satisfaction. They signal the tool’s reliability and the user’s endorsement of its performance. These expressions communicate trustworthiness and function as persuasive cues for others. This aligns with IS literature suggesting that trust is foundational for technology acceptance and its continued use, particularly in digitally mediated, complex work environments (Gkinko & Elbanna, 2023; Hoehle et al., 2015; Lukyanenko et al., 2022). Peer Signaling was evident in terms like save, team, error and question, which suggest socially anchored reflections. Rather than isolated experiences, users’ reviews demonstrate an orientation toward collective utility and validation. Such peer signaling conveys social proof and encourages behavioral modeling within organizational settings. These cues guide others in interpreting GenAI’s value through the lens of team relevance and shared application.
Technological Affordances refer to the system capabilities that users perceive as enabling action. Words such as plugin, editing and provide_information demonstrate how users frame GenAI as a functional tool rather than an abstract innovation. These references help others envision practical applications and potential integration into work routines. Affordance-based signaling thus supports knowledge transfer and aligns with prior work emphasizing the role of perceived usefulness in shaping technology adoption (Strong et al., 2025). Value-Based Motivation was reflected in emotionally salient terms like hope, amazing, worth, thank and future. These expressions signal affective commitment, aspirational alignment and intrinsic value. Beyond utility, users emphasize the emotional resonance and future-oriented benefits of GenAI. This supports research showing that emotional and value-based drivers, such as enjoyment and meaningfulness, can enhance sustained technology use.
Together, user reviews reveal four recurring signaling patterns: (i) trust and credibility reflected in endorsements of accuracy and reliability, (ii) peer signaling that emphasizes team relevance and shared utility, (iii) references to technological affordances that highlight functional features, and (iv) value-based motivations expressed through affective and aspirational language. These dimensions reveal how users construct and share evolving understandings of GenAI in the workplace. First, they prompt sensemaking revisions through signals drawn from trust, peer validation, affordances, and value alignment. Second, they reinforce engagement by embedding the tool into daily workflows, shaping sustained use. Through signaling, users do not merely narrate their own experiences but actively shape the interpretive context for others, fostering broader learning and adoption across the organization. Based on this, we propose:
P2: During the signaling phase, trust and credibility, peer signaling, value-based motivation and technological affordances influence users’ evaluation of GenAI’s relevance,
a)
prompting them to revisit and refine their understanding, and
b)
driving them to engage with GenAI tools, integrating them into their work routines and reinforcing the perceived value of continued use.
4.2.3 Revisioning
Revisioning represents a critical phase in the sensemaking process, wherein users iteratively reinterpret the value and role of GenAI tools in workplace contexts. Unlike the earlier stages of envisioning or signaling, which are marked by anticipation and advocacy, revisioning is characterized by adaptive engagement informed by lived experience. In our analysis of user reviews, terms such as “response,” “update,” “try,” “review,” and “option” highlight ongoing evaluation and the recalibration of understanding. These expressions suggest that users are not merely responding to system outputs; rather, they are actively reconstructing their interpretive frames based on new insights and outcomes. Sub-topic modeling of the data identified four core factors contributing to the revisioning process: adaptive structuration, AI mindfulness, task-technology fit and problem-solution reframing.
Drawing on Adaptive Structuration Theory (AST), we find that users dynamically shape their interactions with GenAI systems such as ChatGPT. Terms like “try,” “ask,” and “response” reflect patterns of iterative experimentation, where users refine their usage practices based on system feedback. This suggests that GenAI is not adopted in a rigid or static way; instead, users integrate it into workflows through trial, improvisation and contextual alignment. For instance, users report that the tool “finally” helped resolve a long-standing issue, indicating a shift in their framing of GenAI’s utility. Over time, users revise norms, expectations and interaction routines as they become more familiar with the system.
Building on IT mindfulness as defined by Bennett Thatcher et al. (2018), we extend the concept to employees’ use of GenAI applications and define AI mindfulness as users’ deliberate, ethical and context-sensitive engagement with AI systems. While IT mindfulness emphasizes attentive and flexible interaction with information technologies in general, AI mindfulness adds further dimensions of evaluating generated content, considering ethical implications, and accounting for contextual appropriateness. This distinction is critical because GenAI systems produce probabilistic, non-deterministic outputs rather than fixed functionalities, requiring users to engage in more active judgment (Brynjolfsson et al., 2025; Feuerriegel et al., 2024). In our analysis, AI mindfulness emerged as a salient factor shaping revisioning. Lexical cues such as “review,” “conversation,” “recommend,” “helpful,” and “experience” reflect a thoughtful and evaluative stance. Rather than accepting automation outputs uncritically, mindful users assess GenAI responses for quality, contextual relevance and appropriateness before acting on them. This behavior reflects heightened cognitive engagement and awareness of both the system’s affordances and limitations. AI mindfulness mediates between emotional regulation and trust formation by enabling users to manage affective responses to GenAI outputs, engage in reflective evaluation, and gradually develop calibrated trust grounded in experiential evidence and ethical awareness. Such interactions also enhance epistemic alignment between human and AI collaborators. In this way, AI mindfulness promotes a more ethically grounded and resilient form of sensemaking.
Revisioning is also influenced by users’ assessments of task-technology fit. Terms such as “fast,” “human,” “smart,” and “easy_use” suggest positive perceptions of GenAI’s alignment with task demands and cognitive workflows. These evaluations are especially salient in organizational settings where efficiency, responsiveness, and problem-solving are prioritized. When users describe the tool as “super helpful” or “amazingly fast at providing answers,” they are not only expressing satisfaction but also reinforcing a rationale for continued integration. In this sense, task-technology fit functions both as a motivator and as a validation mechanism in users’ evolving sensemaking. A final dimension of revisioning involves problem-solution reframing, in which users redefine workplace challenges as solvable through AI-enabled support. Terms such as “fix,” “issue,” “help_lot,” “problem,” and “update” indicate that GenAI tools are not merely viewed as assistive utilities but as collaborative problem-solvers. This reframing reflects user resilience and optimism, transforming experiences of inefficiency or ambiguity into opportunities for action. Over time, GenAI systems shift from being seen as external tools to embedded partners in problem-solving. This transformation encourages both emotional and instrumental ownership of technological outcomes (Bansal et al., 2019).
During this stage, users do more than revise earlier assumptions. They actively construct new interpretive frames shaped by lived experience, peer feedback, system responsiveness and emotional outcomes. These empirical findings illustrate that revisioning is anchored in four user-driven patterns: (i) continuous experimentation, where users engage in iterative cycles of trying, asking, and refining responses; (ii) reflective evaluation, as seen in mindful review and recommendation of outputs to ensure contextual appropriateness; (iii) functional alignment, reflected in positive assessments of speed, helpfulness, and ease of use that signal fit with task demands; and (iv) problem reframing, where users reinterpret challenges such as issues or inefficiencies as solvable through GenAI-enabled support. This highlights that GenAI use in the workplace is not static but evolves through iterative meaning-making, adaptive learning and reflective engagement. It also underscores the recursive nature of technological sensemaking, where user cognition and system affordances co-evolve within situated contexts. Based on these insights, we propose the following:
P3: In the revisioning phase, users adapt their mental models through AI mindfulness, task-technology fit, adaptive structuration, and problem-solution reframing. This cognitive shift motivates energizing responses such as highlighting social influence, valuing innovative design, and recognizing high information quality.
4.2.4 Energizing
The energizing dimension of sensegiving captures how users construct and transmit emotionally resonant, socially persuasive and experientially grounded narratives that elevate the perceived value of GenAI tools in the workplace. This sensegiving process is shaped by four interrelated elements: affective commitment, social influence, innovative design and information quality. They collectively influence users’ motivational framing and promote organizational adoption.
Affective commitment refers to an individual’s emotional attachment to and identification with a tool or technology, which often fosters sustained use and advocacy (Solinger et al., 2008). In our dataset, users frequently employed positively valenced and emotionally expressive language such as “love,” “amazing,” “fantastic,” “thank_team,” and “life-changing.” For example, one user shared, “Damn, this app is powerful. I didn’t really know that much about AI software but I had it rewrite and reformat my resume and it was awesome… I also had it write an email… and it was impressive.” Another user remarked, “It is a powerful weapon for knowledge gaining. So easy, understandable.” These affective expressions extend beyond satisfaction; they reflect a deeper sense of enthusiasm and gratitude that energizes continued use and influences others. Such emotional framing aligns with Information Systems (IS) research emphasizing the role of affective cues in shaping influence within organizational technology diffusion.
The energizing role of social influence surfaced through collective language and peer encouragement in user reviews, including statements like “It’s very helpful for everyone… please guys use it, I hope you may love it,” and “Whether it’s managing projects, setting goals, or collaborating with team members, ChatGPT has become my go-to solution.” Such expressions illustrate how users function as informal sensegivers, offering social validation and reinforcing trust in GenAI tools. Energizing sensegiving, in this sense, is enacted through peer modeling, institutional trust and shared professional practices, thereby cultivating a community of use. These social cues not only validate the relevance of the tool but also reduce uncertainty for new or hesitant users, thus enhancing adoption at scale.
Users also highlighted specific design features that enhanced their experience, frequently referencing terms such as “voice,” “speak,” “write,” and “interface.” One review noted, “I love the voice function,” while another praised, “Good UI, good performance, easy to review conversations started on desktop (and vice versa) … a well-designed app.” These comments highlight how human-centered and innovative design fosters energizing sensegiving by shaping a positive experiential narrative. Design appreciation contributes not only to instrumental value but also to symbolic and emotional engagement. Moreover, when users propose improvements, e.g., “…but features like images and automatic image designer must be initiated”, it demonstrates user engagement in co-shaping the design narrative, which further deepens psychological ownership (Tsai & Ho, 2013).
Information quality emerged as a core energizing factor, as users frequently praised GenAI outputs using descriptors like “correct,” “clear,” “helpful,” “easy to understand,” and “excellent answers.” Given the challenge of hallucinations in GenAI systems (Mikalef et al., 2022), perceived information quality significantly influences continued engagement. As one user noted, “I use it more than Google search these days,” while another claimed, “ChatGPT has entirely supplanted Google in my daily searches.” These evaluations demonstrate users’ trust in GenAI for reliable content generation and decision support. IS research has consistently shown that perceived information quality affects trust, satisfaction and system use (DeLone & McLean, 1992). Energizing sensegiving is thus accomplished when users amplify the perceived accuracy and responsiveness of GenAI, strengthening its perceived value and legitimacy in workplace settings.
Together, these four factors illustrate how energizing sensegiving operates as both a motivational and social mechanism. Users emotionally frame GenAI as a transformative and engaging tool (affective commitment), legitimize its value through peer modeling and shared use (social influence), highlight experiential value through human-centered design (innovative design), and reinforce utility through high-quality outputs (information quality). These sensegiving practices foster shared interpretations and help solidify GenAI’s role in organizational routines. Based on this analysis, we propose:
P4: Energizing factors such as affective commitment, social influence, innovative design and quality of information drive users to engage more deeply with GenAI tools, integrating them into work routines and reinforcing perceived value.
4.2.5 Feedback Loop from User Engagement
Our analysis reveals that user engagement with GenAI applications is not a linear, sequential process of discrete stages. Instead, it reflects a recursive and overlapping cycle of sensemaking and sensegiving, where user interactions with the technology actively reshape earlier cognitive and behavioral patterns. These interactions create reflexive feedback loops that iteratively modify users’ goals, expectations and interpretations of system utility.
User engagement not only reinforces but also transforms future-oriented cognitive framing, specifically, the envisioning of task possibilities and technological capabilities. While traditional information systems (IS) adoption models typically frame envisioning as a pre-engagement activity (e.g., initial goal setting or expectation formation), our findings suggest that envisioning is dynamically updated post-engagement through ongoing interaction and reinterpretation. Drawing on the sensemaking framework, envisioning is not merely a front-loaded or managerial act but a continuous process of vision refinement grounded in interpretive user experience. For example, consider a content manager in a marketing department who initially adopts ChatGPT to assist with routine email drafting. As they experiment with the tool, they discover its utility for brainstorming campaign ideas, generating SEO-optimized copy and crafting content in different emotional tones. This expanded understanding of GenAI’s affordances leads them to revise their mental model of what the tool can support. In line with Weick’s (1995) enactment perspective, the user does not simply “adopt” a technology but enacts its value through exploratory action. This process also aligns with TTF, as the user continuously evaluates and adapts the technology’s use to align with evolving and emergent task requirements. Thus, envisioning shifts from a static, initial intention to a dynamic reassessment of desirable, feasible and meaningful tasks enabled by GenAI.
A second reflexive loop was also identified, wherein user engagement catalyzes revisioning, that is, the reinterpretation and adaptive realignment of previously framed tasks and strategies. This loop reflects Weick’s (1995) argument that sensemaking is driven by plausibility rather than accuracy, allowing users to reinterpret prior choices and adapt their actions based on feedback from GenAI tools. In this loop, users do not merely update their future goals; they actively revisit and reconfigure their past and ongoing approaches to task execution. For instance, a data analyst may initially rely on GenAI for basic code generation or documentation summarization. However, through deeper engagement, they discover ways to leverage the tool for exploring data visualizations, debugging scripts, or communicating insights to non-technical stakeholders. This leads the analyst to reassess not just the tool’s capabilities, but also their own role and strategy within the human-AI collaboration. Such reinterpretation embodies “adaptive alignment”, a term that captures how users reframe their understanding of both tasks and self as they learn with the system (Liang et al., 2017). Accordingly, we propose:
P5(a): As users repeatedly engage with GenAI tools, their evolving expectations and accumulated insights inform iterative envisioning, enabling them to redefine goals, anticipate affordances and reframe GenAI’s role in supporting complex tasks.
P5(b): As users interact with GenAI tools and accumulate experiential knowledge, their feedback informs future signaling and revisioning by improving trust mechanisms and refining technological affordances, sustaining a dynamic co-evolution between users and GenAI systems.
Together, these feedback loops reinforce the idea that GenAI use is an evolving and reflexive process. Engagement is not merely an endpoint of adoption but a recursive driver of cognitive, emotional and strategic transformation. These findings extend traditional IS models by placing iterative sensemaking and sensegiving at the core of user-GenAI interaction, offering important implications for the design of adaptive interfaces, trust calibration mechanisms and personalization strategies that evolve alongside user engagement.
5 Discussion
GenAI tools such as ChatGPT represent a paradigm shift in how work is conceptualized, enacted and evaluated in the digital era. Our findings suggest that when users actively envision the potential of GenAI tools, evolve through iterative adaptation and engage meaningfully with them, they co-create novel digital work practices and meaning-making processes. In this sense, AI is positioned not merely as a technical artifact but as a collaborative teammate that reshapes task execution, fosters affective commitment, and enables emergent pathways for innovation. The Envision-Evolve-Engage (3E) framework proposed in this study captures the recursive sensemaking and sensegiving processes through which users and GenAI tools mutually shape adoption, trust formation and sustained integration within organizational routines. User interaction emerges as a cyclical process rather than a linear adoption sequence. Envisioning reflects cognitive assessments of usefulness and expectations for work transformation. Signaling initiates sensegiving, as users share cues such as peer recommendations, system prompts, and organizational messages that shape collective beliefs. Revisioning represents recursive reinterpretation, guided by AI mindfulness, adaptive structuration, and evaluations of task-technology fit, allowing users to refine both task understanding and perceptions of the technology. Finally, Energizing reflects motivational alignment, where affective commitment, peer influence, and high-quality outputs sustain long-term engagement and integration. By mapping these four core activities to the 3E framework, the study explains how micro-level user actions aggregate into broader adoption patterns, highlighting the causal relationships between stages and the distinctive contribution of each construct.
A summary of these insights is presented in Appendix Table A5, which provides a structured representation of how individuals co-construct meaning, trust, and value in workplace GenAI adoption. Through this framework, the study offers multiple contributions to theory and practice, as outlined in the following sections.
5.1 Theoretical Implications
This study offers several theoretical and methodological contributions to the academic literature. First, the study abductively derives the 3E framework (Envision-Evolve-Engage) from large-scale user data. It shows how sensemaking and sensegiving processes unfold in the context of GenAI as an ongoing cycle of meaning construction. Users not only interpret the capabilities and limitations of GenAI for personal cognitive framing but also participate in sensegiving activities such as sharing usage experiences, recommending strategies and signaling credibility. These activities shape the perceptions and adoption behaviors of their peers. In doing so, the study advances framing theory (Davidson, 2002) and signaling theory (Connelly et al., 2011) by positioning users as both interpreters and communicators of AI-related signals, thereby influencing how technological affordances are socially constructed and understood in organizational contexts. To further highlight the novelty of the 3E framework, we provide a comparative table (see Appendix Table A6) that contrasts it with traditional technology adoption models, including the Unified Theory of Acceptance and Use of Technology (UTAUT), Task-Technology Fit (TTF) and the Information Systems Success (ISS) model. Whereas prior models conceptualize adoption largely as a linear process of cognitive evaluation (e.g., perceived usefulness, ease of use, behavioral intention), the 3E framework foregrounds recursive cycles of envisioning, evolving and engaging. This positions adoption as a dynamic socio-cognitive process that integrates affective, ethical and communal dimensions alongside individual cognition.
Second, the study contributes to the literature on Human-AI Interaction by exploring the social and emotional dynamics that characterize users’ engagement with GenAI tools. While prior research has largely focused on task efficiency and performance outcomes, our findings reveal that user interactions with GenAI are deeply shaped by emotional resonance, peer influence and evolving community norms. Drawing on Adaptive Structuration Theory (AST), we show that GenAI tools are not simply technical artifacts but become socially embedded systems, adopted through processes of interpretation, experimentation and collaborative learning. Users develop affective commitment to these tools, using them not only for task completion but also for affirmation, uncertainty reduction and co-construction of new norms of technology use. These insights broaden existing understandings of human-AI collaboration by offering a holistic view of how human actors adapt, co-evolve and assign meaning to AI-enabled systems over time. Moreover, the study advances the future of work discourse by illustrating how GenAI adoption fosters unlearning, relearning and the cultivation of new skillsets (Hönigsberg et al., 2025). Factors such as AI mindfulness, trust, technological affordances and task-technology fit emerge as critical to sustained GenAI use, highlighting the need for adaptive human capabilities and socio-cognitive restructuring in AI-mediated work environments. Compared with general mindfulness in technology use, AI mindfulness is more context-sensitive. It involves deliberate, ethical, and reflective engagement with GenAI, where users assess responses for accuracy, contextual appropriateness, and social implications before acting upon them. This distinction highlights the unique role of ethical awareness and adaptive evaluation in AI-mediated work.
Finally, this study contributes methodologically by illustrating how computational techniques can facilitate inductive theory development. Leveraging large-scale user reviews as a form of electronic word-of-mouth (eWoM), we applied NLP methods to identify salient lexical patterns related to GenAI use. These patterns were clustered using topic modeling and other clustering approaches to explore emergent constructs and relational themes. This approach responds to recent calls for computationally grounded theorizing (Berente et al., 2019, 2025) and demonstrates how text analytics can yield socially embedded insights that reflect the lived experiences of users. Through this method, the study bridges qualitative richness with computational rigor, offering a scalable pathway for theorizing from digitally native data.
5.2 Practical Implications
The findings of this study provide several actionable insights for organizational leaders, system designers and policy-makers engaged in the integration of GenAI in the workplace. First, organizations should recognize employees as co-creators in shaping GenAI use and outcomes. Beyond technical fluency, targeted training that develops interpretive and sensemaking skills can enable employees to engage more meaningfully with GenAI systems. Embedding continuous feedback loops, such as employee evaluations of AI-generated outputs (e.g., draft performance reviews) can foster legitimacy, ownership, and trust. Designing for both cognitive and emotional user responses can further reduce resistance and identity threats, thereby supporting sustained engagement. Moreover, organizational learning mechanisms, including AI literacy programs, peer mentoring initiatives, and reflective training workshops, can facilitate the integration of the proposed 3E framework into everyday work practices. Such initiatives enable employees to internalize the interpretive, ethical, and collaborative dimensions of GenAI use, allowing organizations to institutionalize adaptive learning and responsible innovation. This participatory approach aligns with emerging calls for human-centric and relational AI governance frameworks (Janssen, 2025). Second, organizations should reframe the role of GenAI from a tool of automation to one of augmentation. Positioning GenAI as a collaborative partner strengthens employee creativity, decision-making and higher-order problem solving. For instance, a marketing team may use GenAI to generate campaign drafts, while managers guide critical evaluation of AI suggestions and their integration into final strategies. System designers can reinforce this by embedding affordances that promote transparency and user agency. 
Features such as “explanation prompts” that clarify how outputs are produced and annotation tools that allow employees to critique responses can enhance sensemaking and align system evolution with organizational needs. Such framing fosters psychological empowerment and supports role redefinition, reducing resistance often associated with technological change. Third, responsible GenAI deployment requires inclusive governance strategies that address ethical risks and access inequalities. Policy frameworks should prioritize equitable onboarding, scaffolded learning, and clear guidelines around data use, bias and algorithmic transparency. For instance, organizations rolling out enterprise-level GenAI systems could design tiered access and structured onboarding tailored to different professional roles. This should ensure that frontline workers and knowledge professionals benefit equitably from AI augmentation. Regular explainability audits and incident reporting systems for problematic or biased outputs further ensure accountability and prevent digital marginalization. As GenAI tools like ChatGPT increasingly shape knowledge work, proactive regulation is essential to safeguard fairness, inclusivity, and organizational integrity. Collectively, these implications emphasize the importance of aligning technological affordances with social, ethical and cognitive dimensions of employee engagement to harness the transformative potential of GenAI in the future of work.
5.3 Implications for SDG 8: Decent Work and Economic Growth
This study contributes to advancing SDG 8, which promotes inclusive economic growth, productive employment and decent work for all (United Nations, 2015). GenAI tools such as ChatGPT can enhance workplace accessibility by offering real-time task assistance, knowledge support and productivity enhancement. These capabilities help users navigate complex tasks more efficiently and foster broader participation among individuals with varying levels of digital fluency. The time saved through GenAI-enabled efficiencies can be reallocated to high-value tasks or personal pursuits, thereby supporting improved work-life integration and employee well-being. Through the Envision-Evolve-Engage (3E) framework, this research highlights GenAI’s role in enabling continuous learning, adaptive decision-making, and skill enhancement, aligning closely with SDG 8’s emphasis on lifelong upskilling and sustainable employment. The proposed framework supports “decent work” by encouraging organizations to “envision” inclusive roles for GenAI, “evolve” job designs that balance augmentation with human agency, and “engage” employees in participatory learning pathways. GenAI-generated task recommendations can be assessed and prioritized by team members, reinforcing decision-making capacity and equitable participation. Such practices help ensure that technology adoption strengthens fairness, inclusivity, and long-term employability. However, to ensure that these benefits are equitably distributed, ethical oversight and inclusive implementation practices are essential. Without such safeguards, GenAI may inadvertently reinforce digital divides or foster over-reliance on algorithmic outputs. Organizations and policymakers must therefore prioritize participatory governance, responsible AI design and equitable access to GenAI tools. 
Importantly, this study highlights that GenAI not only enhances task performance but also supports psychological well-being by reducing cognitive load, increasing perceived autonomy and reinforcing users’ sense of control over work processes. When integrated thoughtfully, GenAI functions as a collaborative augmentation tool that enhances human capability rather than replacing it. While concerns over job displacement persist, our findings suggest that GenAI can enable more meaningful work as an AI Teammate by reshaping job roles, fostering creativity and reinforcing the need for ongoing reskilling. The proposed 3E framework offers a systematic foundation for national and institutional AI policy formulation by integrating principles of ethical accountability, participatory governance, and sustained human development within GenAI adoption processes. This shift toward augmentation-oriented adoption models and forward-looking policy design can help ensure that technological advancement translates into equitable growth, resilient labor markets, and the realization of “decent work for all” as envisioned under SDG 8.
5.4 Limitations and Future Research Directions
This study provides valuable insights related to the use of GenAI in the workplace by analyzing user reviews of ChatGPT. However, several limitations must be acknowledged. First, the data were collected solely from the Google Play Store. Although this source offers authentic user experiences, it may not capture the full diversity of user interactions. Future research should consider including data from other platforms, such as the Apple App Store, as well as additional sources like social media and grey literature to improve data diversity and reliability. Second, the study focuses on a single GenAI application, namely ChatGPT. While ChatGPT is widely adopted and reflects broader trends in GenAI usage, its application spans personal, educational, and professional contexts. Such diversity provides valuable insights into generalized user behaviors, but some reviews may not represent workplace use. To ensure the relevance of the dataset for workplace analysis, reviews were categorized using keywords related to work, productivity, task completion, and professional collaboration. Subsequently, a subset of reviews was manually examined to validate the dataset. Findings showed that the majority of reviews, even if general in wording, provided insights applicable to workplace contexts. In addition, reliance on a single tool may limit the generalizability of the findings. Future research could employ stratified sampling to distinguish workplace-specific reviews from broader usage contexts and extend the proposed model across other GenAI platforms, including enterprise-level and specialized applications. Comparative research involving tools such as Samsung SDS FabriX and Amazon Q could reveal how organizational scale, customization, and data governance influence GenAI adoption and effectiveness. Third, the study is based on self-reported public reviews. 
Reliance on user-generated reviews may introduce biases, as such reviews often reflect extreme experiences, either highly positive or highly negative, and may also be influenced by recall bias or subjective interpretation. While these insights highlight critical affordances and barriers, they risk skewing interpretations. Future research could strengthen findings by employing longitudinal designs and objective behavioral data (e.g., system usage logs, multimodal interaction patterns) to track how human-AI interactions evolve. Fourth, cultural and occupational differences in GenAI use remain underexplored. Users in different professional roles, such as educators, frontline workers or creative professionals, may engage with GenAI tools in diverse ways. Studying these variations would help develop more inclusive and generalizable insights. Finally, the study highlights the need for further research into the long-term social and ethical implications of GenAI integration. Issues such as overtrust, emotional reliance and the potential erosion of human skills require critical attention. Future information systems research should examine these risks through socio-ethical frameworks and complement exploratory insights with confirmatory quantitative methods to support responsible GenAI adoption.
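The keyword-based relevance screen described in the limitations above can be approximated as follows. This is a hypothetical re-implementation for illustration only: the keyword list, helper name, and sample reviews are stand-ins, not the study's actual lexicon or code.

```python
# Illustrative sketch of screening reviews for workplace relevance via
# keyword matching; the keyword set below is an assumption, not the
# authors' actual lexicon.
import re

WORKPLACE_KEYWORDS = {
    "work", "office", "job", "task", "productivity", "meeting",
    "report", "email", "colleague", "manager", "project", "deadline",
}

def is_workplace_review(text: str) -> bool:
    """Flag a review as workplace-relevant if it contains any keyword."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return bool(tokens & WORKPLACE_KEYWORDS)

reviews = [
    "Helps me write emails and reports at the office",
    "Fun to chat with when I am bored",
    "Great for brainstorming project ideas with my manager",
]
workplace = [r for r in reviews if is_workplace_review(r)]
print(len(workplace))
```

A stratified-sampling extension, as suggested above, would partition reviews by such flags before analysis so that workplace-specific and general-use subsets can be modeled separately.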
6 Conclusion
This study introduces the Envision-Evolve-Engage (3E) model, which explains how employees progress from initial expectations, through adaptive learning, to active engagement with GenAI systems. The model positions users as active co-creators rather than passive adopters, emphasizing the recursive interplay between human agency and GenAI capabilities. A central insight is the role of AI mindfulness, which shapes how employees interpret, adapt and responsibly integrate GenAI into work practices. This highlights the need to remain attentive to both the capabilities and limitations of AI in decision-making, collaboration, and knowledge work. Theoretically, the study extends adoption models by incorporating emotional, ethical, and contextual dimensions of human-AI interaction. Practically, it offers guidance for designing socially responsive AI systems that foster trust, meaningful engagement, and inclusive participation. Importantly, the findings illustrate how GenAI can advance SDG 8 (Decent Work and Economic Growth) by supporting inclusive, creative, and future-ready employment. By enabling employees to focus on higher-order tasks, creativity, and lifelong learning, GenAI contributes to more sustainable work practices. This study contributes to that vision by offering a grounded, user-informed framework for responsible GenAI integration in the workplace. Mechanisms such as participatory training, feedback-driven refinements, and equitable onboarding can help organizations harness GenAI’s transformative potential while safeguarding ethical and inclusive practices. Collectively, these contributions deepen understanding of how GenAI is interpreted, adopted, and sustained in organizations and provide a foundation for future IS research in an AI-mediated world.
Acknowledgements
None.
Declarations
Ethics Approval and Consent to Participate
Not applicable.
Ethics Declarations
None.
Consent for Publication
Not applicable.
Research Involving Human and Animal Rights
This article does not contain any studies by the authors involving human participants or animals.
Generative AI Use Disclaimer
Generative AI was used to improve language clarity, grammar and expression during manuscript preparation. All intellectual content, analysis, and final interpretations were developed and verified by the authors.
Competing Interests
The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Ayushi Agarwal
is a Research Scholar in the Information Systems area at the Indian Institute of Management Kozhikode (IIMK), India, and will serve as the Vice President of the AIS Doctoral Student College in 2026. She has over a decade of industry experience spanning artificial intelligence, data analytics, cybersecurity, automation, corporate training, and human resource management. She began her career as a Software Engineer at Adobe, followed by her role at Skiify, where she contributed to organizational growth and innovation. She has worked with the All India Council for Technical Education (AICTE) and the Ministry of Education (MoE), Government of India, as a technology mentor for startups, and has supported national initiatives such as the YUKTI Innovation Challenge 2025, NIRF 2022 (National Institutional Ranking Framework), and ARIIA 2021 (Atal Ranking of Institutions on Innovation Achievements). Her research examines disruptive technologies, the ethical and societal implications of emerging digital systems, cybersecurity, and AI integration in the workplace to advance organizational culture, employee well-being, and sustainability. She served as the Co-Chair of the Doctoral Student Corner at PACIS 2025 and remains an active contributor to the Information Systems community through her work as a reviewer, volunteer, and presenter at leading conferences and journals. She is committed to promoting the responsible use of technology and contributing to the development of a more equitable and human-centered digital future.
M P Sebastian
has been a Professor of Information Systems at the Indian Institute of Management Kozhikode for more than a decade and currently serves as the institute’s Dean (Faculty Administration & Development). He received his master’s and Ph.D. degrees from the Indian Institute of Science, Bangalore, and conducts research on Cybersecurity, AI, Machine Learning, Text Analytics, Healthcare Information Systems, and Enterprise Information Systems. His scholarly work appears in journals such as Journal of Retailing and Consumer Services, Information Systems Frontiers, Information Technology and People, Information and Computer Security, Health Policy and Technology, Computers and Electrical Engineering, Knowledge and Information Systems, and Evolving Systems.
Satish Krishnan
holds a Ph.D. in Information Systems from the National University of Singapore and is a Professor at the Indian Institute of Management (IIM) Kozhikode. His research explores IT resistance, fake news and disinformation, the gender gap, e-government, e-business, virtual social networks, technostress, cyberloafing, and cyberbullying. He has published in top-tier journals such as Journal of Applied Psychology, Organizational Behavior and Human Decision Processes, Journal of Business Ethics, and Information & Management. An editorial board member for leading journals, including Internet Research and Technological Forecasting and Social Change, he also holds key roles at conferences like PACIS and ICIS. He has received multiple accolades, including Outstanding Associate Editor awards at ICIS, Best Reviewer at PACIS, and Best Paper awards at AIMS and ICMIS. In recognition of his contributions to management research, he was honored with the 2022 Outstanding Young Management Researcher Award by the Association of Indian Management Scholars.
Akgün, A. E., Keskin, H., Byrne, J. C., & Lynn, G. S. (2014). Antecedents and consequences of organizations’ technology sensemaking capability. Technological Forecasting and Social Change, 88, 216–231. https://doi.org/10.1016/J.TECHFORE.2014.07.002
Alshanik, F., Apon, A., Herzog, A., Safro, I., & Sybrandt, J. (2020). Accelerating text mining using domain-specific stop word lists. Proceedings – 2020 IEEE International Conference on Big Data (Big Data 2020), 2639–2648. https://doi.org/10.1109/BIGDATA50022.2020.9378226
Amankwah-Amoah, J., Abdalla, S., Mogaji, E., Elbanna, A., & Dwivedi, Y. K. (2024). The impending disruption of creative industries by generative AI: Opportunities, challenges, and research agenda. International Journal of Information Management, 79, 102759. https://doi.org/10.1016/J.IJINFOMGT.2024.102759
Anthony, C., Bechky, B. A., & Fayard, A. L. (2023). “Collaborating” with AI: Taking a system view to explore the future of work. Organization Science, 34(5), 1672–1694. https://doi.org/10.1287/ORSC.2022.1651
Bankins, S., Ocampo, A. C., Marrone, M., Restubog, S. L. D., & Woo, S. E. (2024). A multilevel review of artificial intelligence in organizations: Implications for organizational behavior research and practice. Journal of Organizational Behavior, 45(2), 159–182. https://doi.org/10.1002/JOB.2735
Bansal, A., King, D. R., & Meglio, O. (2022). Acquisitions as programs: The role of sensemaking and sensegiving. International Journal of Project Management, 40(3), 278–289. https://doi.org/10.1016/J.IJPROMAN.2022.03.006
Bansal, G., Nushi, B., Kamar, E., Lasecki, W. S., Weld, D. S., & Horvitz, E. (2019). Beyond accuracy: The role of mental models in human-AI team performance. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, 7, 2–11. https://doi.org/10.1609/HCOMP.V7I1.5285
Bartelheimer, C., Heinz, D., Hönigsberg, S., Siemon, D., Li, M. M., Strohmann, T., Poeppelbuss, J., & Peters, C. (2025). Conceptualizing hybrid intelligent service ecosystems. Electronic Markets, 35(1), 1–27. https://doi.org/10.1007/S12525-025-00798-4
Basu, B., Sebastian, M. P., & Kar, A. K. (2024). What affects the promoting intention of mobile banking services? Insights from mining consumer reviews. Journal of Retailing and Consumer Services, 77, 103695. https://doi.org/10.1016/J.JRETCONSER.2023.103695
Becker, J., & Niehaves, B. (2007). Epistemological perspectives on IS research: A framework for analysing and systematizing epistemological assumptions. Information Systems Journal, 17(2), 197–214. https://doi.org/10.1111/J.1365-2575.2007.00234.X
Benbya, H., Strich, F., & Tamm, T. (2024). Navigating generative artificial intelligence promises and perils for knowledge and creative work. Journal of the Association for Information Systems, 25(1), 23–36. https://doi.org/10.17705/1jais.00861
Thatcher, J. B., Wright, R. T., Sun, H., Zagenczyk, T. J., & Klein, R. (2018). Mindfulness in information technology use: Definitions, distinctions, and a new measure. MIS Quarterly, 42(3), 831–847. https://doi.org/10.25300/MISQ/2018/11881
Berente, N., Hansen, S., Pike, J. C., & Bateman, P. J. (2011). Arguing the value of virtual worlds: Patterns of discursive sensemaking of an innovative technology. MIS Quarterly, 35(3), 685–709. https://doi.org/10.2307/23042804
Berente, N., Seidel, S., & Safadi, H. (2019). Data-driven computationally intensive theory development. Information Systems Research, 30(1), 50–64. https://doi.org/10.1287/ISRE.2018.0774
Berente, N., Lindberg, A., Miranda, S. M., Safadi, H., & Seidel, S. (2025). Computationally intensive theory construction. In The Routledge Companion to Management Information Systems (pp. 14–27). https://doi.org/10.4324/9781032690483-4
Blei, D. M., Ng, A. Y., & Jordan, M. I. (2003). Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan), 993–1022.
Braganza, A., Chen, W., Canhoto, A., & Sap, S. (2021). Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust. Journal of Business Research, 131, 485–494. https://doi.org/10.1016/J.JBUSRES.2020.08.018
Budiarto, A., Rahutomo, R., Putra, H. N., Cenggoro, T. W., Kacamarga, M. F., & Pardamean, B. (2021). Unsupervised news topic modelling with Doc2Vec and spherical clustering. Procedia Computer Science, 179, 40–46. https://doi.org/10.1016/J.PROCS.2020.12.007
Chatterjee, S., Rana, N. P., Dwivedi, Y. K., & Baabdullah, A. M. (2021). Understanding AI adoption in manufacturing and production firms using an integrated TAM-TOE model. Technological Forecasting and Social Change, 170, 120880. https://doi.org/10.1016/J.TECHFORE.2021.120880
Cheng, X., Fu, S., & de Vreede, G. J. (2017). Understanding trust influencing factors in social media communication: A qualitative study. International Journal of Information Management, 37(2), 25–35. https://doi.org/10.1016/J.IJINFOMGT.2016.11.009
Cheng, X., Zhang, X., Cohen, J., & Mou, J. (2022). Human vs. AI: Understanding the impact of anthropomorphism on consumer response to chatbots from the perspective of trust and relationship norms. Information Processing & Management, 59(3), 102940. https://doi.org/10.1016/J.IPM.2022.102940
Connelly, B. L., Certo, S. T., Ireland, R. D., & Reutzel, C. R. (2011). Signaling theory: A review and assessment. Journal of Management, 37(1), 39–67. https://doi.org/10.1177/0149206310388419
Daskalopoulou, A., Go Jefferies, J., & Skandalis, A. (2020). Transforming technology-mediated health-care services through strategic sense-giving. Journal of Services Marketing, 34(7), 909–920. https://doi.org/10.1108/JSM-11-2019-0452
Davidson, E. J. (2002). Technology frames and framing: A socio-cognitive investigation of requirements determination. MIS Quarterly, 26(4), 329–358. https://doi.org/10.2307/4132312
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly. https://doi.org/10.2307/249008
DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. https://doi.org/10.1287/ISRE.3.1.60
DeLone, W. H., & McLean, E. R. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30. https://doi.org/10.1080/07421222.2003.11045748
DeSanctis, G., & Poole, M. S. (1994). Capturing the complexity in advanced technology use: Adaptive structuration theory. Organization Science, 5(2), 121–147. https://doi.org/10.1287/ORSC.5.2.121
Dwivedi, Y. K., Pandey, N., Currie, W., & Micu, A. (2024). Leveraging ChatGPT and other generative artificial intelligence (AI)-based applications in the hospitality and tourism industry: Practices, challenges and research agenda. International Journal of Contemporary Hospitality Management, 36(1), 1–12. https://doi.org/10.1108/IJCHM-05-2023-0686
Eryilmaz, E., Thoms, B., & Ahmed, Z. (2022). Cluster analysis in online learning communities: A text mining approach. Communications of the Association for Information Systems, 51(1), 41. https://doi.org/10.17705/1CAIS.05132
Figueroa-Armijos, M., Clark, B. B., & da Motta Veiga, S. P. (2023). Ethical perceptions of AI in hiring and organizational trust: The role of performance expectancy and social influence. Journal of Business Ethics, 186(1), 179–197. https://doi.org/10.1007/s10551-022-05166-2
Flathmann, C., Schelble, B. G., Rosopa, P. J., McNeese, N. J., Mallick, R., & Madathil, K. C. (2023). Examining the impact of varying levels of AI teammate influence on human-AI teams. International Journal of Human-Computer Studies, 177, 103061. https://doi.org/10.1016/J.IJHCS.2023.103061
Fügener, A., Grahl, J., Gupta, A., & Ketter, W. (2021). Will humans-in-the-loop become borgs? Merits and pitfalls of working with AI. MIS Quarterly, 45(3b), 1527–1556. https://doi.org/10.25300/MISQ/2021/16553
Gioia, D. A., & Chittipeddi, K. (1991). Sensemaking and sensegiving in strategic change initiation. Strategic Management Journal, 12(6), 433–448. https://doi.org/10.1002/SMJ.4250120604
Giuliani, M. (2016). Sensemaking, sensegiving and sensebreaking: The case of intellectual capital measurements. Journal of Intellectual Capital, 17(2), 218–237. https://doi.org/10.1108/JIC-04-2015-0039
Gkinko, L., & Elbanna, A. (2023). Designing trust: The formation of employees’ trust in conversational AI in the digital workplace. Journal of Business Research, 158, 113707.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27.
Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213–233. https://doi.org/10.2307/249689
Hai, S., Long, T., Honora, A., Japutra, A., & Guo, T. (2025). The dark side of employee-generative AI collaboration in the workplace: An investigation on work alienation and employee expediency. International Journal of Information Management, 83, 102905. https://doi.org/10.1016/J.IJINFOMGT.2025.102905
Hannigan, T. R., Haan, R. F. J., Vakili, K., Tchalian, H., Glaser, V. L., Wang, M. S., Kaplan, S., & Jennings, P. D. (2019). Topic modeling in management research: Rendering new theory from textual data. Academy of Management Annals, 13(2), 586–632. https://doi.org/10.5465/ANNALS.2017.0099
Hasija, A., & Esper, T. L. (2022). In artificial intelligence (AI) we trust: A qualitative investigation of AI technology acceptance. Journal of Business Logistics, 43(3), 388–412.
Hitsuwari, J., Ueda, Y., Yun, W., & Nomura, M. (2023). Does human-AI collaboration lead to more creative art? Aesthetic evaluation of human-made and AI-generated haiku poetry. Computers in Human Behavior, 139, 107502. https://doi.org/10.1016/J.CHB.2022.107502
Hönigsberg, S., Watkowski, L., & Drechsler, A. (2025). Generative artificial intelligence in higher education: Mediating learning for literacy development. Communications of the Association for Information Systems, 56(1), 1044–1076. https://doi.org/10.17705/1CAIS.05640
Huang, M. H., Rust, R., & Maksimovic, V. (2019). The feeling economy: Managing in the next generation of artificial intelligence (AI). California Management Review, 61(4), 43–65.
Hui, X., Reshef, O., & Zhou, L. (2024). The short-term effects of generative artificial intelligence on employment: Evidence from an online labor market. Organization Science, 35(6), 1977–1989. https://doi.org/10.1287/ORSC.2023.18441
Janssen, M. (2025). Responsible governance of generative AI: Conceptualizing GenAI as complex adaptive systems. Policy and Society, 44(1), 38–51. https://doi.org/10.1093/POLSOC/PUAE040
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/J.BUSHOR.2018.03.007
Jia, N., Luo, X., Fang, Z., & Liao, C. (2024). When and how artificial intelligence augments employee creativity. Academy of Management Journal, 67(1), 5–32. https://doi.org/10.5465/AMJ.2022.0426
Kar, A. K. (2021). What affects usage satisfaction in mobile payments? Modelling user generated content to develop the “Digital Service Usage Satisfaction Model.” Information Systems Frontiers, 23(5), 1341–1361. https://doi.org/10.1007/S10796-020-10045-0
Kar, A. K., & Dwivedi, Y. K. (2020). Theory building with big data-driven research - Moving away from the “What” towards the “Why.” International Journal of Information Management, 54, Article 102205. https://doi.org/10.1016/J.IJINFOMGT.2020.102205
Kar, A. K., & Kushwaha, A. K. (2021). Facilitators and barriers of artificial intelligence adoption in business - Insights from opinions using big data analytics. Information Systems Frontiers, 25(4), 1351–1374. https://doi.org/10.1007/S10796-021-10219-4
Kelman, H. C. (1958). Compliance, identification, and internalization: Three processes of attitude change. Journal of Conflict Resolution, 2(1), 51–60. https://doi.org/10.1177/002200275800200106
Kemp, A. (2024). Competitive advantage through artificial intelligence: Toward a theory of situated AI. Academy of Management Review, 49(3), 618–635. https://doi.org/10.5465/AMR.2020.0205
Kim, H. W., Chan, H. C., & Gupta, S. (2007). Value-based adoption of mobile internet: An empirical investigation. Decision Support Systems, 43(1), 111–126. https://doi.org/10.1016/J.DSS.2005.05.009
Klos, C., & Spieth, P. (2021). READY, STEADY, DIGITAL?! How foresight activities do (NOT) affect individual technological frames for managerial SENSEMAKING. Technological Forecasting and Social Change, 163, 120428. https://doi.org/10.1016/J.TECHFORE.2020.120428CrossRef
Kolbjørnsrud, V. (2024). Designing the intelligent organization: Six principles for Human-AI collaboration. California Management Review, 66(2), 44–64. https://doi.org/10.1177/00081256231211020CrossRef
Kong, H., Yin, Z., Chon, K., Yuan, Y., & Yu, J. (2024). How does artificial intelligence (AI) enhance hospitality employee innovation? The roles of exploration, AI trust, and proactive personality. Journal of Hospitality Marketing & Management, 33(3), 261–287. https://doi.org/10.1080/19368623.2023.2258116CrossRef
Kumar, A., Bhattacharyya, S. S., & Krishnamoorthy, B. (2023). Automation-augmentation paradox in organizational artificial intelligence technology deployment capabilities; An empirical investigation for achieving simultaneous economic and social benefits. Journal of Enterprise Information Management,36(6), 1556–1582. https://doi.org/10.1108/JEIM-09-2022-0307/FULL/PDFCrossRef
Langer, M., & Landers, R. N. (2021). The future of artificial intelligence at work: A review on effects of decision automation and augmentation on workers targeted by algorithms and third-party observers. Computers in Human Behavior, 123, 106878. https://doi.org/10.1016/J.CHB.2021.106878CrossRef
Lebovitz, S., Levina, N., & Lifshitz-Assaf, H. (2021). Is AI ground truth really true? The dangers of training and evaluating AI tools based on experts’ know-what. MIS Quarterly, 45(3), 1501–1526. https://doi.org/10.25300/MISQ/2021/16564CrossRef
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436–444.CrossRef
Liang, H., Wang, N., Xue, Y., & Ge, S. (2017). Unraveling the alignment paradox: How does business-IT alignment shape organizational agility? Information Systems Research, 28(4), 863–879. https://doi.org/10.1287/ISRE.2017.0711CrossRef
Lukyanenko, R., Maass, W., & Storey, V. C. (2022). Trust in artificial intelligence: From a foundational trust framework to emerging research opportunities. Electronic Markets 2022, 32(4), 1993–2020. https://doi.org/10.1007/S12525-022-00605-4CrossRef
Maedche, A., Legner, C., Benlian, A., Berger, B., Gimpel, H., Hess, T., Hinz, O., Morana, S., & Söllner, M. (2019). AI-Based Digital Assistants: Opportunities, Threats, and Research Perspectives. Business & Information Systems Engineering,61(4), 535–544. https://doi.org/10.1007/S12599-019-00600-8/FIGURES/2CrossRef
Mesgari, M., & Okoli, C. (2019). Critical review of organisation-technology sensemaking: Towards technology materiality, discovery, and action. European Journal of Information Systems, 28(2), 205–232. https://doi.org/10.1080/0960085X.2018.1524420CrossRef
Mesgari, M., Okoli, C., & Ortiz de Guinea, A. (2025). Affordance-based information technology sensemaking (ABITS). Business & Information Systems Engineering, 1–18. https://doi.org/10.1007/s12599-025-00924-8
Mikalef, P., Conboy, K., Lundström, J. E., & Popovič, A. (2022). Thinking responsibly about responsible AI and ‘the dark side’ of AI. European Journal of Information Systems, 31(3), 257–268.
Miller, R. L. (2015). Rogers’ innovation diffusion theory (1962, 1995). In Information Seeking Behavior and Technology Adoption: Theories and Trends. IGI Global. https://doi.org/10.4018/978-1-4666-8156-9.ch016
Miranda, S., Berente, N., Seidel, S., Safadi, H., & Burton-Jones, A. (2022). Editor’s comments: Computationally intensive theory construction: A primer for authors and reviewers. MIS Quarterly, 46(2), iii–xviii.
Namvar, M., Im, G. P., Li, J. C., & Chung, C. (2024). Data-driven sensegiving and sensemaking: A phenomenological investigation. Information Technology & People, ahead-of-print. https://doi.org/10.1108/ITP-05-2023-0452
Nguyen, M., Trinh, N., Mehrotra, A., & Basahel, S. (2025). Generative AI in the workplace: How employee experiences influence work outcomes? Journal of Enterprise Information Management, ahead-of-print. https://doi.org/10.1108/JEIM-11-2024-0637
Oliver, R. L. (1980). A cognitive model of the antecedents and consequences of satisfaction decisions. Journal of Marketing Research, 17(4), 460–469.
Ooi, K. B., Tan, G. W. H., Al-Emran, M., Al-Sharafi, M. A., Capatina, A., Chakraborty, A., Dwivedi, Y. K., Huang, T. L., Kar, A. K., Lee, V. H., Loh, X. M., Micu, A., Mikalef, P., Mogaji, E., Pandey, N., Raman, R., Rana, N. P., Sarker, P., Sharma, A., & Wong, L. W. (2023). The potential of generative artificial intelligence across disciplines: Perspectives and future directions. Journal of Computer Information Systems, 65(1), 76–107. https://doi.org/10.1080/08874417.2023.2261010
Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P. F., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744.
Peretz-Andersson, E., Tabares, S., Mikalef, P., & Parida, V. (2024). Artificial intelligence implementation in manufacturing SMEs: A resource orchestration approach. International Journal of Information Management, 77, 102781. https://doi.org/10.1016/j.ijinfomgt.2024.102781
Ramaul, L., Ritala, P., & Ruokonen, M. (2024). Creational and conversational AI affordances: How the new breed of chatbots is revolutionizing knowledge industries. Business Horizons, 67(5), 615–627. https://doi.org/10.1016/j.bushor.2024.05.006
Robertson, J., Ferreira, C., Botha, E., & Oosthuizen, K. (2024). Game changers: A generative AI prompt protocol to enhance human-AI knowledge co-construction. Business Horizons, 67(5), 499–510. https://doi.org/10.1016/j.bushor.2024.04.008
Seidel, S., Frick, C. J., & vom Brocke, J. (2025). Regulating emerging technologies: Prospective sensemaking through abstraction and elaboration. MIS Quarterly, 49(1), 179–204.
Solinger, O. N., van Olffen, W., & Roe, R. A. (2008). Beyond the three-component model of organizational commitment. Journal of Applied Psychology, 93(1), 70–83. https://doi.org/10.1037/0021-9010.93.1.70
Sowa, K., Przegalinska, A., & Ciechanowski, L. (2021). Cobots in knowledge work: Human-AI collaboration in managerial professions. Journal of Business Research, 125, 135–142. https://doi.org/10.1016/j.jbusres.2020.11.038
Strong, D. M., Treku, D. N., & Volkoff, O. (2025). Affordance theory and how to use it in IS research (revised). In The Routledge Companion to Management Information Systems (pp. 164–178). Routledge. https://doi.org/10.4324/9781032690483-16
Talwar, S., Srivastava, S., Sakashita, M., Islam, N., & Dhir, A. (2022). Personality and travel intentions during and after the COVID-19 pandemic: An artificial neural network (ANN) approach. Journal of Business Research, 142, 400–411. https://doi.org/10.1016/j.jbusres.2021.12.002
Techatassanasoontorn, A. A., Waizenegger, L., & Doolin, B. (2023). When Harry, the human, met Sally, the software robot: Metaphorical sensemaking and sensegiving around an emergent digital technology. Journal of Information Technology, 38(4), 416–441. https://doi.org/10.1177/02683962231157426
Teng, R., Zhou, S., Zheng, W., & Ma, C. (2024). Artificial intelligence (AI) awareness and work withdrawal: Evaluating chained mediation through negative work-related rumination and emotional exhaustion. International Journal of Contemporary Hospitality Management, 36(7), 2311–2326. https://doi.org/10.1108/IJCHM-02-2023-0240
Tong, Y., Tan, S. S. L., & Teo, H. H. (2015). The road to early success: Impact of system use in the swift response phase. Information Systems Research, 26(2), 418–436. https://doi.org/10.1287/isre.2015.0578
Ulfert, A. S., Georganta, E., Centeio Jorge, C., Mehrotra, S., & Tielman, M. (2024). Shaping a multidisciplinary understanding of team trust in human-AI teams: A theoretical framework. European Journal of Work and Organizational Psychology, 33(2), 158–171. https://doi.org/10.1080/1359432X.2023.2200172
United Nations. (2015). Goal 8 | Department of Economic and Social Affairs. United Nations Department of Economic and Social Affairs (UNDESA). https://sdgs.un.org/goals/goal8
Urquhart, C., Cheuk, B., Lam, L., & Snowden, D. (2024). Sense-making, sensemaking and sense making—A systematic review and meta-synthesis of literature in information science and education: An annual review of information science and technology (ARIST) paper. Journal of the Association for Information Science and Technology, 76(1), 3–97. https://doi.org/10.1002/asi.24866
Vaara, E., Sonenshein, S., & Boje, D. (2016). Narratives as sources of stability and change in organizations: Approaches and directions for future research. Academy of Management Annals, 10(1), 495–560. https://doi.org/10.5465/19416520.2016.1120963
Valecha, R., Oh, O., & Rao, H. R. (2025). Design principles for information categorization quality in crowdsourced crisis mapping platforms. MIS Quarterly, 49(2), 777–804. https://doi.org/10.25300/MISQ/2024/16610
Vangara, R., Bhattarai, M., Skau, E., Chennupati, G., Djidjev, H., Tierney, T., Smith, J. P., Stanev, V. G., & Alexandrov, B. S. (2021). Finding the number of latent topics with semantic non-negative matrix factorization. IEEE Access, 9, 117217–117231. https://doi.org/10.1109/ACCESS.2021.3106879
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178. https://doi.org/10.2307/41410412
Wang, W., Gao, G., & Agarwal, R. (2024). Friend or foe? Teaming between artificial intelligence and workers with variation in experience. Management Science, 70(9), 5753–5775. https://doi.org/10.1287/mnsc.2021.00588
Weick, K. E. (1995). Sensemaking in organizations (Vol. 3). Sage Publications.
Weick, K. E. (2005). The experience of theorizing: Sensemaking as topic and resource. In Great Minds in Management: The Process of Theory Development (pp. 394–413).
Wood, R., & Bandura, A. (1989). Social cognitive theory of organizational management. Academy of Management Review, 14(3), 361–384.
Xu, W. W., Tshimula, J. M., Dubé, È., Graham, J. E., Greyson, D., MacDonald, N. E., & Meyer, S. B. (2022). Unmasking the Twitter discourses on masks during the COVID-19 pandemic: User cluster-based BERT topic modeling approach. JMIR Infodemiology, 2(2), e41198. https://doi.org/10.2196/41198
Yadav, J., Yadav, A., Misra, M., Zhou, J., & Rana, N. P. (2023). Role of social media in technology adoption for sustainable agriculture practices: Evidence from Twitter analytics. Communications of the Association for Information Systems, 52(1), 833–851. https://doi.org/10.17705/1CAIS.05240
Yeow, A. Y. K., & Chua, C. E. H. (2020). Multiparty sensemaking: A technology-vendor selection case study. Information Systems Journal, 30(2), 334–368. https://doi.org/10.1111/isj.12263
Yin, M., Jiang, S., & Niu, X. (2024). Can AI really help? The double-edged sword effect of AI assistant on employees’ innovation behavior. Computers in Human Behavior, 150, 107987. https://doi.org/10.1016/j.chb.2023.107987