Abstract
This study investigates university students’ technology use and understanding in the post-generative AI (GenAI) era. A mixed-methods approach, employing a survey of 68 students at a business-focused university in the United States, explored current learning tool usage patterns, specific GenAI applications, self-perceived AI expertise, and knowledge acquisition methods. Findings reveal that while GenAI tools, especially ChatGPT, are frequently used, they have not supplanted traditional online resources like search engines. They are being used for learning and even entertainment, demonstrating a multifaceted integration of AI into university life. Notably, some emerging uses of GenAI extend into sensitive areas such as health-related inquiries, raising questions about appropriate boundaries and responsible use. Furthermore, we find discrepancies between students’ self-assessed AI expertise and their articulated understanding of it, suggesting potential overconfidence and highlighting the need for robust digital literacy programs. The limited awareness of GenAI tools beyond ChatGPT also underscores a potential gap in students’ technological toolkit while demonstrating that students mostly use GenAI tools to create and understand education-related material.
Introduction and Motivation
Artificial Intelligence (AI) researchers have long developed applications for education, as evidenced by several journals, publications, and focused conferences on the topic. However, the introduction of Generative AI (GenAI) tools like ChatGPT has changed the landscape for two reasons. First, GenAI tools such as ChatGPT, Claude, and Gemini are broad tools that can perform many tasks across a vast spectrum of education-related activities, including writing assignments, grading assignments, and analyzing data, with non-trivial quality and incredible speed (Brown et al., 2020). This contrasts with the pre-GenAI state of research, which focused on narrow tasks such as automatic assessment of some types of essays (Ghosh & Klebanov, 2024), intelligent tutoring systems for narrow subject materials (Mousavinasab et al., 2021), and conversational roleplay for language learning in specific scenarios (Divekar et al., 2022), among others. Second, these tools no longer live only in research labs with access limited to some students and practitioners; rather, many are easily accessible for free to anyone worldwide via a simple internet browser.
Duha (2023) noted that this is not the first time technology has transformed education and that we have seen at least two such transformations, with Google and Wikipedia. When a new technology like this is released and becomes popular within the education community, it can disrupt existing technologies and workflows. For example, when search engines like Google or online encyclopedias like Wikipedia first became popular, they augmented and, in some cases, replaced traditional information sources such as the library or subject-matter experts in universities. A student no longer had to go to the library or a subject-matter expert for every question; rather, they searched online (Brindley, 2006). This prompted libraries to pivot toward integrating the new technology and to begin efforts to increase digital literacy.
We are at a similar crossroads again, with AI technology becoming a new source of information for university students. The educational community is in the middle of deciding what changes to make in curriculum, teaching, learning, technical resources, etc. (Črček & Patekar, 2023) to create a future generation of experts who can use artificial intelligence to their advantage. Although Grassini et al. (2024) applied the Unified Theory of Acceptance and Use of Technology (UTAUT2) to find that performance expectancy and habit influenced students’ use of ChatGPT, we do not know whether students use AI technology in addition to, or in substitution for, the information sources of the previous technology revolution, such as Google and Wikipedia. We also do not yet know how expertise in using AI tools like ChatGPT manifests and develops from a student’s perspective. ChatGPT’s versatility enables it to operate across all four levels of the Substitution Augmentation Modification Redefinition (SAMR) model (Puentedura, 2013) in educational settings. At the Substitution level, it replaces traditional search engines for information retrieval. For Augmentation, it enhances information acquisition by explaining concepts in personalized narrative formats rather than presenting disconnected search results. More transformatively, at the Modification level, ChatGPT restructures learning activities by enabling interactive dialogue and immediate feedback. Finally, it can potentially Redefine learning experiences by creating personalized learning pathways, simulating complex scenarios, and facilitating creative processes previously impossible with traditional educational tools. From a revised Bloom’s Taxonomy (Krathwohl, 2002) perspective, students gain a transformative way to locate, remember, and understand new information as GenAI provides personalized narratives of search topics.
GenAI can go much further, assisting students with applying that information to problem solving, evaluating their work through immediate feedback, and helping them brainstorm and create new material such as essays for homework.
This wide-ranging impact across the SAMR model and Bloom’s Taxonomy underscores why understanding students’ actual usage patterns is crucial for developing appropriate pedagogical responses and institutional policies.
Literature
Students’ AI Adoption in Higher Education
The International Monetary Fund estimates that about 40% of global employment is exposed to AI in that AI could either eliminate these jobs or play a complementary role and boost productivity (Melina et al., 2024). Finding suitable employment is one of the main goals of many students. Students are aware that AI will play an important role in their field and that some profiles might be replaced by AI (Almaraz-López et al., 2023). Therefore, it stands to reason that students are at least curious about tools like ChatGPT.
Sallam et al. (2024) recently conducted a survey in the UAE to investigate the acceptance of ChatGPT within the university student community. They found that 85% of their respondents had used ChatGPT, higher than previously reported studies from other parts of the world synthesized in their literature review. Further, they found that attitude towards the technology was correlated with sex, type of university, and ethnicity. Chan and Hu (2023) surveyed students in Hong Kong and found similar familiarity, positive attitudes, and willingness to use GenAI within the student community. Albadarin et al. (2024)’s systematic literature review of empirical research found that ChatGPT was used for various writing tasks, feedback, on-demand answers, explanations, etc. However, they point out that ChatGPT may have negatively impacted collaboration and innovative tendencies in students.
Students also report using AI for various reasons related to their university activities and life. Crawford et al. (2024) found that AI tools like ChatGPT have been used to fill the social-connection gap, especially for university students, with the caveat that when students filled this social need with AI, they experienced even more human isolation. The authors argue that a lack of social connection could lower the sense of belonging on campuses and consequently affect grades and student retention. Al-Zahrani and Alasmari (2024)’s broader survey found that the higher education community in Saudi Arabia uses various AI tools beyond GenAI, including face recognition and speech recognition, for educational, entertainment, and other purposes. They report negative experiences similar to other reports in the field, e.g., privacy issues and financial costs. In addition, Park and Ahn (2024) conducted a qualitative study with students and identified usability, user experience, and scalability as opportunities for GenAI to shine in higher education. In contrast, decreased learning and problems with the technology, such as usability issues and errors like hallucinations, were perceived challenges with GenAI. Walker et al. (2024) found that students under-trusted GenAI in a study that asked students to select answers to a multiple-choice questionnaire where one choice was labeled as GenAI-generated and another as human-expert written.
Niloy et al. (2024) documented various reasons for using GenAI, including time saving, ease of access, cognitive miserliness, and aided learning, among others. Stöhr et al. (2024) found that only slightly over 50% of students view such chatbots positively, with favorability depending on gender and technical background, among other factors. Espartinez (2024) finds that both students and educators are motivated to use GenAI ethically for education so that the pedagogy remains balanced.
Educators’ Outlook of AI in Higher Education
Kiryakova and Angelova (2023) surveyed university professors at Trakia University and found concerns among faculty members that learners will unquestioningly accept ChatGPT’s output as correct, even though ChatGPT can often be incorrect (Islam & Islam, 2024; Rawte et al., 2023). However, the professors also showed a positive outlook on using ChatGPT in their teaching, e.g., to provoke interest in the subject, stimulate critical thinking, etc. (Kiryakova & Angelova, 2023). Further, Firat (2023) found in their qualitative analysis that the evolution of learning and educational systems and the changing role of educators were the most frequent codes in interviews with scholars and students from four countries. A large-scale analysis of X (formerly Twitter) (Mamo et al., 2024) found that within the higher-education faculty community, while sentiment towards ChatGPT moved positively over time, negative sentiments such as anger, fear, disgust, and sadness were also present. Plagiarism, job security, bias, threats to writing, disruption, assessment threats, and disinformation contributed to the negative sentiments. In any case, AI has received tremendous attention from education researchers, as evidenced by Crompton and Burke (2023), who conducted a PRISMA review of 138 articles in AIEd and found an almost three-fold increase in the number of articles published in 2021–2022 compared to previous years.
Intersecting Perspectives: Students and Educators in the AI Era
University students seem to have widely embraced ChatGPT, forcing educators to respond in various ways: acceptance and rethinking of learning activities, rejection and vigilance towards students using ChatGPT, and general panic over academic dishonesty (Črček & Patekar, 2023). Given varied reactions, Črček and Patekar (2023) strongly encourage educators to embrace technology, although with clear policies and guidelines on its use to optimize learning.
Knowing how students already use GenAI can help educators re-design curricular and assessment activities to account for a technology that can sometimes produce material better than a field expert. Although most previous studies have identified students’ attitudes towards and adoption of ChatGPT, those methodologies do not show how students use AI across various learning levels or in the context of already existing technology. Therefore, a holistic snapshot of a post-GenAI student toolkit is missing. Obtaining this snapshot is crucial, as educators can evaluate tool usage against existing frameworks (e.g., SAMR or Bloom’s Taxonomy, as discussed before) and define learning outcomes with knowledge of the range of technology students use. Additionally, we recognize that AI will be a tool in the future of work and that teaching students to be experts at using AI in their domain will likely lead to a wave of education similar to digital literacy a few years ago. However, it is unclear what it means to be an expert in using AI from a student’s perspective. Knowing students’ perspectives on their own level of expertise lets educators derive digital literacy needs and enhance learning outcomes. In this paper, we bring these perspectives together through our study. We uniquely contribute to the existing body of literature by asking four research questions:
RQ1: In a GenAI era, what is the prevalence of using different types of online tools (e.g., search engines, multimedia platforms) among university students for academic purposes?
RQ2: What are the use cases of AI on university campuses?
RQ3: How do university students perceive and define their own expertise in using GenAI tools for academic work?
RQ4: How do students conceptualize AI and gain expertise in using GenAI tools, and what are their primary sources of information and support?
We present a mixed-method analysis of a survey of 68 students at a doctoral-granting business university in the United States. The results show that ChatGPT is still the most popular AI tool. However, we report that AI tools have not fully replaced ‘older’ tools such as search engines, Wikipedia, etc. In fact, search engines are still slightly more popular than AI tools within the student community. In addition, specific to AI tools, we note various uses of AI technology reported by students, such as writing, brainstorming, and creative uses, that we detail in our report and discuss its implications. We discuss what students see as expertise while using AI in their field and how students gain that expertise.
Methodology
A link to an electronic survey developed on the Qualtrics platform was sent through mailing lists to currently enrolled undergraduate and graduate students. The email invitation stated the eligibility and ineligibility criteria, noting that students currently enrolled in, or intending to enroll in, classes with the project PI were not eligible, to avoid biased responses. In addition, participants were required to have some exposure to AI tools such as, but not limited to, ChatGPT. Given that our survey had a significant qualitative aspect, we set a target size of approximately 60–65 students with a higher compensation of a USD 5 gift card to encourage detailed qualitative responses. The survey was approved by the Institutional Review Board (IRB). Table 1 shows the list of questions relevant to the study. The questions were developed by the PI to answer the RQs motivated in the previous section. Quantitative questions were used to identify trends, and qualitative questions were used to enable deeper exploration and triangulation of those trends. The research team verified the questions and piloted them to fine-tune the phrasing. All questions, except the last, were required to complete the survey. Other questions not mentioned in the table related to obtaining informed consent, establishing the eligibility criteria described above, and demographics.
Table 1
Select survey questions

Question | Response Type
How do you currently learn new topics on the Internet? (Select all that apply) | Multi-option with ability to fill in manually
Which AI tools have you used in the past? | Multi-option with ability to fill in manually
On average, how often do you use [Tool]? | Multi-option
How do you use [Tool]? | Fill in, large text box
On a scale of 1 to 5, where 1 indicates minimal expertise in using AI tools and 5 denotes advanced proficiency, how would you rate your expertise in utilizing AI for your work? | Likert scale 1–5
Please explain your expertise level and why you believe what you picked (e.g., if you have taken a course on prompting, read articles or tips/tricks related, feel like you always get the answer you want, etc.) | Fill in, large text box
Additionally, we welcome your thoughts on what it means to be an expert in AI usage. Please share your perspective | Fill in, large text box

[Tool] was automatically replaced with the name of each AI tool participants submitted in the previous question.
Quantitative Analysis
Demographics
Table 2 presents the age range, enrolled study level, gender, and grade point average (GPA) of the 68 students who participated in the survey. From Table 3, we see most of the students were from a business background and in good standing with their academic goals.
Table 2
Survey demographics

Description | Number of Students
Total Valid Responses | 68
Age Range
  18–20 | 35
  21–23 | 25
  24–26 | 7
  27 or older | 1
Enrolled Level of Study
  Undergraduate | 51
  Graduate | 17
Gender
  Female | 38
  Male | 30
GPA (max. 4)
  Above 3.5 | 45
  3–3.5 | 18
  2.5–3 | 1
  Below 2.5 | 1
Table 3
Self-reported majors, minors and areas of expertise

Majors and Minors | Number of Students
Finance | 26
Accounting | 13
Management | 9
Business Analytics | 7
Computer Info. Systems | 7
Data Analytics | 7
Economics | 7
Information Design and Corporate Communication | 5
Marketing | 4
Spanish | 2
Other | 12
Number of Tools Used
We asked students how they currently learn on the Internet and offered a multi-select box with the option to type in additional responses. We found that, on average, students currently use 3.65 ± 1.21 (min. 1, max. 7) online tools to learn. We set a more conservative threshold of 3 and conducted a Wilcoxon signed-rank test, suitable for non-parametric data like ours. It indicated that the number of tools used by participants was greater than 3 (W = 792.0, p < 0.001) but less than 4 (W = 426.0, p < 0.01). Figure 1 shows the boxplot, indicating that the median was between 3 and 4.
Fig. 1
Boxplot of number of tools used to learn on the internet
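As a minimal sketch, the one-sample comparisons above can be reproduced with SciPy's signed-rank test; the per-student counts below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-student counts of online learning tools (illustration only)
counts = np.array([3, 4, 5, 2, 4, 3, 6, 4, 5, 3, 4, 2, 5, 4, 3, 7, 4, 3, 5, 4])

# H1: median > 3 -- test the signed differences from the threshold.
# wilcoxon() drops zero differences by default (zero_method='wilcox').
w_gt, p_gt = wilcoxon(counts - 3, alternative="greater")

# H1: median < 4
w_lt, p_lt = wilcoxon(counts - 4, alternative="less")

print(f"median > 3: W={w_gt}, p={p_gt:.4f}")
print(f"median < 4: W={w_lt}, p={p_lt:.4f}")
```

Subtracting the threshold turns the one-sample median test into the paired form that `wilcoxon` expects.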
We normalized names manually, e.g., edited ‘co-pilot’ and ‘copilot’ to mean the same tool. Figure 2 shows the names of the tools that students indicated they use while learning on the Internet. Google is the most used tool, followed closely by ChatGPT, YouTube, and Wikipedia.
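The manual normalization step can be sketched as a small alias map; the variants and canonical names below are illustrative, not an exhaustive list from the data.

```python
# Map spelling variants to one canonical tool name before counting frequencies
ALIASES = {
    "co-pilot": "Copilot", "copilot": "Copilot",
    "chat gpt": "ChatGPT", "chatgpt": "ChatGPT",
}

def normalize(name: str) -> str:
    """Return the canonical tool name for a free-text response."""
    return ALIASES.get(name.strip().lower(), name.strip())

responses = ["ChatGPT", "chat gpt", "co-pilot", "Copilot"]
print([normalize(r) for r in responses])  # all collapse to two canonical names
```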
While Fig. 2 shows the frequency of each tool students indicated they use, multiple tools can be used similarly and toward the same goal. For example, Bing and Google are both search engine alternatives to each other. Therefore, we categorized these tools into types as shown in Table 4.
Table 4
Types of tools used by students

Tool Category | Tools under the category
Search | Google, Bing, Google Scholar, Brave Web Search
Multimedia Learning | YouTube
Encyclopedia | Wikipedia
Guided Learning | Coursera, Khan Academy, Data Camp, Free Code Camp
Social Media | TikTok, LinkedIn, Instagram
News | Wall St Journal, NY Times, LAT
Based on the categorization shown in Table 4, we calculated the frequency of students who indicated using each type of tool after removing duplicates. This process allowed us to count unique mentions by tool types; for example, if one student mentioned that they used WSJ and NYT, we counted it as one mention of News type. The popularity of types of tools that students use is seen in Fig. 3 where Search, AI, and Multimedia Learning are the top-3 types of tools students indicated using while learning on the Internet.
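The deduplicated per-type counting described above could be sketched as follows; the student tool lists are hypothetical, and the mapping mirrors a subset of Table 4.

```python
from collections import Counter

# Partial tool-to-category mapping (subset of Table 4, for illustration)
TOOL_TYPE = {
    "Google": "Search", "Bing": "Search", "Google Scholar": "Search",
    "YouTube": "Multimedia Learning", "Wikipedia": "Encyclopedia",
    "Wall St Journal": "News", "NY Times": "News",
    "ChatGPT": "AI",
}

# Hypothetical per-student responses
students = [
    ["Google", "ChatGPT", "Wall St Journal", "NY Times"],
    ["Google", "Bing", "YouTube"],
]

type_mentions = Counter()
for tools in students:
    # set() deduplicates, so WSJ + NYT count as one "News" mention per student
    type_mentions.update({TOOL_TYPE[t] for t in tools})

print(type_mentions)
```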
In addition, we also wanted to capture the number of tools students use per tool type to determine the variety of tools students use for each purpose. For example, if a student said that they use WSJ and NYT, that was two counts of tools for the tool type of News. On average, each student used close to one tool for each tool type, as shown in Table 5.
Table 5
Avg. number of tools used per person in each category

Tool Type | Mean ± Std. dev
Search | 1.09 ± 0.29
AI | 1.03 ± 0.19
Multimedia Learning | 1 ± 0
Encyclopedia | 1 ± 0
Guided Learning | 1.11 ± 0.3
Social Media | 1.33 ± 0.6
News | N/A (only one person reported using news articles)
We conducted McNemar’s test (Agresti, 2007, pp. 245–246) with continuity correction for paired binary data to assess student preferences between software tools for learning. We find that the proportion of individuals using Search without AI is higher than those using AI without search (p = 0.009). We also find a statistically significant difference in the proportion of individuals using Google without ChatGPT versus ChatGPT without Google (p = 0.026). The concordant and discordant pairs are seen in Appendix D.
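The continuity-corrected test depends only on the discordant pairs; the counts below are hypothetical placeholders, as the study's actual pairs appear in Appendix D.

```python
from scipy.stats import chi2

# Hypothetical discordant-pair counts (illustration only)
b = 15  # students using Search but not AI
c = 4   # students using AI but not Search

# McNemar's chi-square statistic with continuity correction, df = 1
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = chi2.sf(stat, df=1)
print(f"chi2 = {stat:.3f}, p = {p_value:.4f}")
```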
AI Tool Usage
While the previous data captures current usage trends, we focused specifically on AI tools and asked participants which AI tools they had used in the past, to capture familiarity even if they no longer use a tool for learning. All 68 participants mentioned they had used ChatGPT in the past. In addition, some also mentioned other tools, as seen in Table 6. Only 13 of the 68 students selected or entered an option other than ChatGPT. Of those 13, only 4 selected or entered more than one additional option, with one person writing in 4 options in addition to ChatGPT.
Table 6
Familiarity with AI tools

Tool Name | Frequency | Description
ChatGPT | 68 | Multi-purpose LLM by OpenAI
Bard | 3 | Multi-purpose LLM by Google
Claude | 3 | Multi-purpose LLM by Anthropic
Dall-E | 2 | Image generation model by OpenAI
Automata | 1 | Marketing and content re-purposing model
Beautiful.ai | 1 | AI presentation maker
Bing AI | 1 | LLM combined with search by Microsoft
Descript | 1 | Podcast and video editor
Grammarly | 1 | Writing assistant with plagiarism checks
HyperWrite | 1 | Writing assistant
Perplexity | 1 | LLM combined with search
Snapchat AI | 1 | Multi-purpose assistant combined with social media app
Writier | 1 | Writing assistant
In the same table, we added a third column describing each AI tool for readers’ ease; descriptions were derived by searching the tool’s name on Google, visiting the tool’s website, or from our own familiarity. However, the landscape of AI tools is changing rapidly, and the tools may differ from our descriptions by the time of publication.
In addition, for each AI tool a participant listed, we asked how often they used it. Figure 4 shows responses for ChatGPT. Among those who reported using other AI tools, one respondent mentioned using Perplexity 2–3 times a week, while three others indicated using other tools only rarely. One person noted using Co-pilot and DALL-E as frequently as, and in tandem with, ChatGPT.
We wanted to understand whether students using AI tools consider themselves proficient in their use. We asked students to rate their expertise in utilizing AI for their academic work on a Likert scale. Figure 5 shows the response distribution, where 1 indicates minimal expertise and 5 indicates advanced expertise. Students responded with mean = 3.2 ± 0.97, median = 3.
A Wilcoxon signed-rank test indicated the population median was significantly greater than 3 (three was labeled neutral on the Likert scale), W = 655.5, p < 0.05, indicating that students, on the whole, rated their expertise above neutral.
Qualitative Analysis of Usage Patterns
Consistent with qualitative research best practices (Maxwell, 2010), we coded open-ended responses and report themes here. Quotes are lightly edited for readability, e.g., by removing typing errors.
Academic: Learning and Professional Development
We coded and then thematically arranged learning-related use cases into Bloom’s revised Taxonomy, a hierarchical framework that classifies learning objectives into six cognitive levels, with remembering at the lowest and creating at the highest; a comprehensive explanation is provided by Krathwohl (2002). Two coders (authors) jointly explored the dataset to create the “Support Provided” statements seen in Table 7, along with exemplars. The two coders then independently coded the rest of the dataset using these statements and exemplars. Coders were allowed to use multiple codes per datapoint to respect the interpretation of some ambiguous responses. After independently coding the dataset, the two coders resolved inconsistent codes in a joint discussion, reaching resolution through a common understanding of the underlying meaning of quotes. Where resolution was not reached, differences were kept as is. We used ATLAS.ti to code and its inter-coder agreement feature to find a 75% simple percent agreement and Krippendorff’s Cu-\(\alpha =0.839\). A codebook with frequency counts is provided in Appendix A to enhance transparency, while acknowledging that, in qualitative research, numerical counts should be interpreted cautiously rather than at face value. Through this process, we identified the cognitive levels at which students use AI while learning. We found that while there is evidence of AI’s involvement at every cognitive level, most use cases fall under learning tasks that engage students in objectives related to creating and understanding. The results can be seen in Table 7.
Table 7
Where does AI assist? Organized by Bloom’s taxonomy from the lowest to the highest level

Cognitive Level | Support Provided | Exemplar Evidence
Remember | AI helps users recall and retrieve factual information efficiently | “To… learn new facts” (p63), “…simple search engine features” (p47)
Understand | AI clarifies complex concepts, simplifies information, and aids comprehension by summarizing and rephrasing content in an accessible manner | “I find that asking it to explain a complex topic as if I am a younger student” (p48), “To help explain homework concepts that are difficult to understand” (p65), “Some new content I don’t understand (such as some technical terms) and let ChatGPT explain… in a concise and easy understanding way” (p4)
Apply | AI facilitates application by providing step-by-step guidance, solutions, and actionable steps that users can directly implement in problem-solving contexts | “I do it as a [means] of getting a step-by-step to solving economics problems” (p31)
Analyze | AI aids in dissecting information, structuring thoughts to better understand relationships and hierarchies within content | “…to help me organize thoughts” (p61), “Especially when writing critical analysis papers, I use AI to aid… summarizing readings” (p62)
Evaluate | The tool enables critical evaluation of written material by suggesting improvements, offering alternative word choices, and enhancing professional communication | “…read over things I write and give me feedback” (p29), “Adjust to a more professional tone and expand my vocabulary” (p13)
Create | AI supports the creative process by enabling idea generation, facilitating new content creation, and offering tools for refining and organizing original work | “To help me start brainstorming ideas” (p9), “Idea generation” (p34), “…helping build outlines for papers” (p62), “I’ve used it to build apps, websites” (p26)
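As a minimal sketch of the simple percent agreement reported above: the two coders' labels here are invented for illustration (the study's actual codes appear in Appendix A, and Krippendorff's alpha was computed in ATLAS.ti rather than by hand).

```python
# Hypothetical codes assigned by two independent coders to eight datapoints
coder1 = ["Understand", "Create", "Apply", "Create",
          "Remember", "Understand", "Evaluate", "Create"]
coder2 = ["Understand", "Create", "Analyze", "Create",
          "Remember", "Create", "Evaluate", "Create"]

# Simple percent agreement: share of datapoints receiving the same code
agreements = sum(a == b for a, b in zip(coder1, coder2))
percent_agreement = agreements / len(coder1)
print(f"Simple percent agreement: {percent_agreement:.0%}")
```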
University: Non-academic
Students in universities engage in much more than academic learning. This is evident from the number of events planned for them on any large campus, sometimes including concerts, meet-and-greets, etc. We found several broad themes of AI usage for campus-life-related goals. Due to the limited number of responses related to non-academic usage, this data was coded by a single author. While this approach did not necessitate inter-rater reliability calculations, the coding process followed systematic qualitative analysis protocols. The identified trends, presented below, offer valuable insights despite these methodological limitations.
Hobbies and Entertainment
“Usually for entertainment purposes…” (p36), “Usually for fun” (p8), “I also use it for my hobbies, where it helps me find new music and ideas.” (p60), “…rewriting songs” (p59)
Health Advice
“…therapy reasons in the past” (p11), “…Medical information” (p33)
Personal Development
“…some personal advice” (p60)
Social Connections
“…look for date ideas” (p24)
Qualitative Analysis of Expertise with GenAI
In section Self‑reported Expertise with AI, we showed students’ self-reported expertise on a Likert scale. Here, we discuss their qualitative responses explaining why they rated their expertise as they did and their reflections on what it means to be an expert in AI. The results provide a snapshot of students’ perception of expertise rather than an objective measurement; this is still important because human–computer interaction research shows that perceived tool expertise can significantly affect usage and trust with chatbots (Nordheim et al., 2019) and online tools (Corritore et al., 2003). Further, knowing self-rated expertise allows educators to gauge the work needed to address overconfident or under-confident use of AI. This data was coded by one researcher; as a result, inter-rater reliability is not applicable. The codebook is available in Appendix B and Appendix C. Quotes are lightly edited for readability, e.g., by removing typing errors.
There is Awareness and Application of Advanced Methods in Using AI, often Learned from Informal Sources
A few participants demonstrated an awareness and application of advanced AI methods like prompt engineering. While some pursued formal education to enhance their skills (one participant shared, “I took class ACxxx this semester that discussed ChatGPT”, indicating structured learning in AI usage), the majority took the initiative to self-educate. Many consumed information through informal online sources such as articles and videos; for instance, one mentioned, “I have read articles on improving prompts”, and another stated, “I watch videos on how to prompt engineer.” Social media platforms also played a significant role in their learning, as one participant responded, “I learnt a lot of tricks from Instagram reels.”
Expertise in Using AI is Built by Repeated Usage of AI
Most participants indicated that their expertise came from self-learning through using GenAI. They did not think it was “…hard to use ChatGPT”, as repeated usage along with trial and error helped them get better at using AI: “I feel I have experimented with ChatGPT long enough to find out short cuts to getting answers that I want and that are correct”. Some also attributed their proficiency to the technology itself becoming better, e.g., “it’s very user-friendly nowadays, and I do not struggle with using it.”
Perceived Skill Level Often Does Not Align with One’s Own Definition of Expertise
Many users define expertise in specific terms (e.g., “Understand the algorithms and limitations of AI”, “…understand both the tools but also the consequences and ethics behind them”). However, their self-assessed skill levels are largely based on their ability to craft effective prompts or achieve specific results, e.g., “I am able to phrase my requests in a way that generates the answer that I am looking for.”
Some Conceptualize Technology by Humanizing It
Conceptualizations and metaphors have long been used to internalize new and complex computing technology. We found two instances of participants describing their own expertise through relationship metaphors. One called it “a student–teacher [relationship] where I have to teach the teacher to teach me…” The other mentioned “…just talk to ChatGPT (prompt) like [it is] a child and specify the environment and the answer I am looking for.”
Discussion
RQ1: Student Technology Stack in the GenAI Era
Educators often have only a few hours of contact with their students; therefore, much learning occurs outside the classroom. The university where this study was conducted follows the credit hour system, which recommends at least twice as many outside-classroom learning hours (e.g., homework) for each in-class credit hour. It is often unclear to an instructor exactly how students use these additional hours. Given that the technology had not dramatically changed in the past decade, instructors had a fair idea of how students might be using online tools such as Google or Wikipedia to learn during their homework hours. Therefore, learning goals, assignments, and verification steps, such as the use of plagiarism checkers, were often calibrated carefully, acknowledging the existence and widespread use of these tools while still maximizing learning outcomes. However, with the advent of GenAI, there is ambiguity about whether GenAI is being used, how it is being used, and whether it has replaced the status quo technology stack of learners. Knowing not just the extent of GenAI usage but also its usage in the context of existing technology gives instructors insight into a student’s out-of-class learning hours. With our findings, instructors can develop a better sense of how those hours are used by understanding what tools are used for learning outside the classroom. They may then be able to calibrate learning activities that consider the technology stack of a post-AI era student.
Our quantitative analysis and statistical significance tests showed that most participants use more than 3 tools to learn, while some use up to 7. Further, McNemar’s test confirmed that the Google and Search Engine category outdoes AI tools like ChatGPT. This shows that as much as ChatGPT has the promise of serving education needs, it may not be serving all of them, as students use more than just AI tools.
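For readers unfamiliar with the method, McNemar’s test compares paired binary responses (here, whether each student reported using tool A and tool B) using only the discordant pairs; under the null hypothesis, those pairs split 50/50. A minimal sketch of the exact version in Python follows; the counts are hypothetical, not taken from our data:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact two-sided McNemar test on the discordant counts.

    b: participants who used tool A but not tool B
    c: participants who used tool B but not tool A
    Under H0, min(b, c) follows Binomial(b + c, 0.5), so the
    p-value is twice the lower binomial tail, capped at 1.
    """
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

# Hypothetical counts: 20 students used a search engine but not
# ChatGPT, 6 the reverse; concordant pairs are ignored.
p = mcnemar_exact(20, 6)
print(f"p = {p:.4f}")  # prints: p = 0.0094
```

With such a small p-value, the asymmetry between the tools would be judged statistically significant; our reported analysis uses the same logic on the actual paired responses.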
Although previous work was correct in suggesting that AI has entered the student learning toolkit, AI has not completely replaced other online tools. In fact, it is not even the top tool yet. This finding potentially alleviates some concerns presented by Kiryakova and Angelova (2023) that learners will unquestioningly accept the output of ChatGPT, as we show that participants tend to use multiple tools. In addition, while previous transformations in education with Google and Wikipedia (Duha, 2023) prompted pivots away from existing systems such as libraries, ChatGPT has failed to fully dethrone the search engine as the most used learning tool. Further, Wikipedia, YouTube, and other information sources remain popular.
Closely following the Search and AI tool categories, we see students using multimedia tools like YouTube to learn online. This is not far from prior research that showed YouTube as a favorable learning tool (Maziriri et al., 2020). However, we add that even in the age where AI can summarize content from various sources and even YouTube videos, there is still a desire to learn via YouTube directly, possibly hinting at the preference for non-textual modalities.
Sanasintani’s (2023) survey found that higher education students are not only familiar with AI concepts but prefer AI over traditional educational methods. However, in the results section, we reported that only 13 of 68 students selected an option other than ChatGPT. Of those 13, only 4 selected more than one additional option, with one person writing in 4 options in addition to ChatGPT. We find that familiarity with tools beyond the most popular ChatGPT rests within only a small subset of the sample. This indicates that AI familiarity and education are unequal even within one university. The lack of knowledge of other specialized AI tools that may work better for specific domains (e.g., business analytics, advanced equation solving) can put students at a disadvantage, as they may not be using the best tool for the job.
Furthermore, we note that while many tools were listed under “tools used in the past” (see Table 6), fewer AI tools (primarily multipurpose LLMs) were mentioned when participants were asked about current tool usage (see Fig. 2). This may suggest that narrow-use AI tools either did not meet students’ needs or that multipurpose LLMs performed well enough across a wide range of tasks that narrow-use tools were deemed unnecessary. Given the cost associated with each tool’s long-term, high-volume use, we observed a shift in our sample from narrow-use AI tools to ChatGPT.
Regarding AI tool use specifically, in contrast to Singh et al. (2023), who noted in their survey that students did not frequently use ChatGPT for academic purposes, we saw most participants use it daily or 2–3 times a week. We believe that the tool’s popularity and quality within the student community have increased since that research was published, leading to more adoption.
RQ2: AI Tool Use Cases on University Campuses
Similarly to Chauke et al. (2024), Črček and Patekar (2023), and Crompton and Burke (2024), we see that students use AI tools to generate ideas, paraphrase, summarize, and proofread text, among other writing activities. This isn’t surprising, as writing is perhaps the most common artifact students submit and get graded on. While students indicate using AI quite frequently, qualitative analysis gives us insight into the cognitive activities in which students report AI can assist them. Overall, we see evidence from Table 7 that students use AI to assist with all cognitive levels of Bloom’s taxonomy. However, most of the use cases overlap with the Understanding and Creating levels. Further research should investigate whether students skip the levels of Remembering, Applying, Analyzing, and Evaluating, as fewer quotes in our dataset align with those use cases. As AI is used most at the creation stage, we remind readers that AI can produce incorrect information. That, combined with the possibility that students are not engaging analytically or critically with AI output, might be troublesome.
Further, we caution that when students use AI tools for writing, even in early stages such as brainstorming or finding an idea, they might be nudged in one direction or another (Jakesch et al., 2023), which can hamper their ability to think creatively. However, AI can alleviate writer’s block by creating an outline and helping students brainstorm interactively. In addition, students who do not speak English as their first language might find tools like ChatGPT a great help (Yan, 2023). We also saw this positive impact on language acquisition in our data, where students mentioned that ChatGPT helped expand their vocabulary and improve the tone of their writing.
The second most AI-assisted cognitive task was understanding, as many participants mentioned ChatGPT’s ability to distill information into coherent texts. They mention that it can break a topic down and explain it at a simpler level than how it was first introduced to them. Some even say that ChatGPT can act as their tutor. However, we note that in the specific case of a math tutor, Bastani et al. (2024) found that ChatGPT can harm education by acting as a “crutch”: while ChatGPT can make tasks easier for humans, when access was taken away, students performed worse than those who never had access to ChatGPT. Crawford et al. (2024) also warn against substituting AI for humans (e.g., when learning complex topics, brainstorming, etc.), as it can be detrimental to the student from a social perspective.
University life is complex; for many students, the university is both their living and working space. That means, in addition to learning, they rely on university-enabled spaces and events to access healthcare, form social connections, and engage in personal growth and hobbies, among other holistically enriching activities. While educators have been most concerned about the misuse of AI tools for learning (e.g., plagiarism), our qualitative analysis found that AI has permeated campus life broadly. Students noted using AI not only for innocuous purposes such as finding entertainment and engaging in personal growth but also for troubling use cases such as seeking mental and physical health advice. These use cases are troubling because GenAI systems do not always handle public-health-related questions well (Ayers et al., 2023), sometimes producing poor advice that can cause severe bodily harm, as noted by El Atillah (2023).
RQ3, RQ4: Expertise, Knowledge Sources and Conceptualizations of AI
How Students Gained Expertise
Thematic analysis and word frequencies illustrate that “using” was the most common word both in describing one’s own expertise and in describing what constitutes expertise, implying a popular understanding that one who can use AI to their advantage is an expert. This points towards three things. One, the technology itself and the user experience are simple enough to use; this is similar to Romero-Rodríguez et al. (2023), who found that user experience was the fundamental determinant of ChatGPT acceptance, and to findings that ChatGPT has an easy-to-use interface (Turmuzi et al., 2024). Two, student perceptions go against what some educators have argued, namely that technology must be studied before being used (Kiryakova & Angelova, 2023). Three, course offerings and formal training opportunities from universities, MOOCs, etc. in AI, prompt engineering, or LLMs might not resonate with this demographic of students, who think they can gain expertise through trial and error alone, unless a clear additional value can be demonstrated.
Further, many participants did not mention domain knowledge or advanced technical knowledge as a prerequisite for expertise. In Ngo’s (2023) study, students were aware of this issue; however, we show a possible deviation from that finding, implying that awareness of such issues might not be widespread.
Discrepancy Between Self-rated Expertise and Qualitative Responses
Many students rated themselves as more-than-neutral experts at using AI. In qualitative responses, however, they described expertise as little more than “being able to use it”. The lack of deep articulation about expertise in the qualitative data, compared with the higher self-ratings in the quantitative data, may indicate overestimation. The overestimation may stem from the easy-to-use interface and from GenAI producing fluent output irrespective of the input. That fluency, despite errors, may give users a sense of task completion, leading to the belief that they can effectively use AI. This overestimated self-expertise could lead to an overconfidence bias that may be to their detriment while using AI. Similarly, in Divekar et al. (2025), we found that in the absence of sources and domain expertise, students use mental shortcuts and feelings (or “vibes”) to decide whether a response from GenAI can be trusted, and they display a knowledge paradox: students go to GenAI to learn but cannot fully trust the output because they do not have the expertise to evaluate it.
Sources of AI Information
In the absence of abundant formal programs, students have turned to informal sources of information, such as blogs and social media, to learn about AI. This may have contributed to the discrepancy between self-rated expertise on the Likert scale and the lack of articulation of what it means to be an expert. For educators, this can be a troubling trend, as informal sources are fertile ground for misinformation and can lead to an unbalanced understanding of the technology if they are the only source.
Metaphors for Learning
Metaphors and anthropomorphizations are often used to conceptualize new technology (Hurtienne & Blessing, 2007), especially one such as GenAI that is sufficiently complex in its workings and can perform a variety of tasks at scale with non-trivial quality. We found two metaphors without specifically asking students to identify them; both were mentioned in the context of expertise while using AI. In both, participants used anthropomorphizations, calling ChatGPT a ‘Child’ and a ‘Teacher’ that needs to be taught, and indicated that their relationship with AI was similar to that of a human guide. These metaphors reveal insights into how users conceptualize their agency and authority when interacting with AI systems. The student–teacher metaphor suggests a recursive educational relationship in which the user paradoxically occupies both roles simultaneously: they must “teach” the system how to properly instruct them, creating a complex power dynamic where expertise flows bidirectionally. In contrast, the parent–child metaphor positions the user in a more explicitly authoritative role, framing the AI as a developmentally incomplete entity requiring guidance, specificity, and contextual framing to perform adequately. The metaphors raise questions about whether the anthropomorphization of this technology is fair, whether it allows users to set interactional expectations, and how it reflects the evolving socio-cultural inclusion of AI tools in daily life. We note that some metaphors can be problematic (Mollick, 2024): over-anthropomorphizing AI as a teacher, for example, could lead a student to place a similar level of trust in AI as they do in their teacher, even though AI cannot guarantee true or verified responses and does not have the same level of responsibility or duty towards students.
Implications for Curriculum Design and Assessment
The prevalence of ChatGPT for understanding and creating output (e.g., homework) introduces significant challenges to curriculum design. ChatGPT's probabilistic nature means it cannot guarantee accurate information, potentially leading students to uncritically consume AI-generated content. This risk is heightened for students still developing critical thinking abilities and domain expertise necessary to evaluate information quality. We recommend curriculum designers incorporate AI tools like ChatGPT into learning activities that explicitly demonstrate both benefits and limitations of these technologies. Assignments could require students to evaluate AI outputs critically against their knowledge and lived experiences.
Encouragingly, our findings suggest students aren't completely offloading learning to AI tools. Instead, the data indicate strategic use of multiple tools depending on the task. However, we also observed that students significantly overestimate their expertise with AI, even using these tools in high-risk scenarios such as seeking medical advice. Educators should leverage students' existing strategic approaches while systematically teaching appropriate tool selection and effective usage techniques. This education must be context-specific rather than generic, as AI's benefits and limitations manifest differently across disciplines. Each course vulnerable to AI influence should include nuanced discussions about appropriate AI integration specific to that subject matter. However, this is easier said than done: educators without pre-existing AI expertise will find themselves under-prepared to use AI, let alone to disseminate knowledge about it. Systemic support and learning opportunities for educators will be crucial to bridge this gap.
Assessment design faces challenges when students can generate seemingly thoughtful responses using AI. The potential scenario where students use AI to complete assignments about critically evaluating AI creates a recursive problem for educators. We recommend assessment approaches that evaluate visible processes rather than just final products—through class discussions, collaborative activities, or multi-stage submissions that document students’ interactions with learning material and AI tools.
Given that students use AI primarily to create (as defined in Table 7), an effective assessment approach involves designing assignments that require students to move beyond what AI can easily produce such that students can be evaluated on their broader impact through the higher echelons of the Kirkpatrick Model (Kirkpatrick & Kirkpatrick, 2006). That is, evaluating students by observing behavior change in classroom discussions or via positive impact on their community, e.g., through service-learning projects. While appropriate proctoring may be necessary in some contexts, we advise against relying on AI detection software, as research (Casal & Kessler, 2023; Fleckenstein et al., 2024; Wu et al., 2025) consistently demonstrates that neither humans nor machines can reliably distinguish between AI-generated and human-written content.
Limitations and Future Scope
Our findings are based on a focused sample of 68 students at a doctoral-granting business university. The sample size allowed for in-depth qualitative data collection, providing rich contextual information and nuanced perspectives compared to large-scale online surveys. We believe it was possible to elicit rich information from open-ended questions because we provided a higher-than-average reward. At the same time, the budget and logistics of an in-depth survey put limits on the size and diversity of participants. Even with a smaller sample, our quantitative research showed statistically significant differences, and the qualitative data collection and analysis showed depth behind how these tools were used, bringing to light information that can open new lines of inquiry, as discussed in the sections above. Further, the sample includes participants who self-selected into the survey and had some familiarity with AI tools, causing a potential skew towards responses from students who are more engaged with the technology or had stronger opinions about using tools like ChatGPT for education. We mitigated this bias by sending recruitment emails to the entire enrolled student mailing list, reaching a broad range of students across graduate and undergraduate classes. The analysis is also limited by the state of AI tools available at the time of the study. AI tools are rapidly progressing, and as they mature, it would be interesting to see students expand their usage to the rest of Bloom’s taxonomy. Future research could explore these dynamics in a more randomized study across a wider range of institutions and student demographics to identify the complex workflows of students that combine various tools for academic work. Our research unexpectedly revealed emerging conceptualizations of GenAI through metaphors like teacher–student and parent–child relationships.
However, since our study wasn't specifically designed to investigate conceptual frameworks for AI technology, these findings, while intriguing, remain preliminary. Future research could systematically explore how learners develop and apply metaphorical frameworks to understand complex AI tools, potentially revealing important insights about user expectations, relationships with technology, and effective integration of AI in educational contexts.
Conclusion
Our research reveals the technology toolkit of a student in the GenAI era. Contrary to a growing belief, we see GenAI has not replaced traditional technologies. Search engines, multimedia platforms, and crowd-sourced encyclopedia platforms are still common. However, we find that only one brand of GenAI (ChatGPT) is well known in the student community and that the lack of knowledge of other tools may be disadvantageous.
We find various use cases like writing, learning, etc. for AI use on campuses and contextualize them in Bloom’s taxonomy, showing that students use AI to help with all cognitive tasks but especially with understanding and creating. Further, we find unique trends where students use AI beyond pure academics, like using AI for entertainment and social connections, and troubling trends, like using AI for healthcare needs.
The concept of expertise in AI remains multifaceted, with students attributing it primarily to learning by trial and error rather than to any formal introduction or education. Further, students overestimate their expertise, describing themselves as people who can get useful output from AI rather than as people with foundational knowledge of the technology. We find that the sources of information students used to learn about AI are primarily informal. We provide implications for curriculum and assessment design to bridge the gap and offset the downsides of AI use in the student community.
Acknowledgements
The researchers thank the Valente Research for Arts and Sciences and Faculty Affairs Committee at Bentley University for funding this research.
Declarations
The study presented in this paper was approved by the Institutional Review Board (IRB) of the university. There are no conflicts of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Agresti, A. (2007). Categorical data analysis (2nd ed., pp. 245–246). Wiley.
Albadarin, Y., Saqr, M., Pope, N., & Tukiainen, M. (2024). A systematic literature review of empirical research on ChatGPT in education. Discover Education, 3(1). https://doi.org/10.1007/s44217-024-00138-2
Almaraz-López, C., Almaraz-Menéndez, F., & López-Esteban, C. (2023). Comparative study of the attitudes and perceptions of university students in business administration and management and in education toward artificial intelligence. Education Sciences, 13(6). https://doi.org/10.3390/educsci13060609
Al-Zahrani, A. M., & Alasmari, T. M. (2024). Exploring the impact of artificial intelligence on higher education: The dynamics of ethical, social, and educational implications. Humanities and Social Sciences Communications, 11(1), 1–12.
El Atillah, I. (2023). Man ends his life after an AI chatbot ‘encouraged’ him to sacrifice himself to stop climate change. Euronews.com.
Ayers, J. W., Zhu, Z., Poliak, A., Leas, E. C., Dredze, M., Hogarth, M., & Smith, D. M. (2023). Evaluating artificial intelligence responses to public health questions. JAMA Network Open, 6(6), e2317517.
Bastani, H., Bastani, O., Sungu, A., Ge, H., Kabakcı, Ö., & Mariman, R. (2024). Generative AI can harm learning. Available at SSRN 4895486.
Brindley, L. (2006). Re-defining the library. Library Hi Tech, 24(4), 484–495.
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., et al. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Casal, J. E., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing? A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3), 100068.
Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00411-8
Chauke, T. A., Mkhize, T. R., Methi, L., & Dlamini, N. (2024). Postgraduate students’ perceptions on the benefits associated with artificial intelligence tools for academic success: The use of the ChatGPT AI tool. Journal of Curriculum Studies Research, 6(1), 44–59. https://doi.org/10.46303/jcsr.2024.4
Corritore, C. L., Kracher, B., & Wiedenbeck, S. (2003). On-line trust: Concepts, evolving themes, a model. International Journal of Human-Computer Studies, 58(6), 737–758.
Crawford, J., Allen, K. A., Pani, B., & Cowling, M. (2024). When artificial intelligence substitutes humans in higher education: The cost of loneliness, student success, and retention. Studies in Higher Education, 49(5), 883–897. https://doi.org/10.1080/03075079.2024.2326956
Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1). https://doi.org/10.1186/s41239-023-00392-8
Divekar, R. R., Guerra, S., Gonzalez, L., Boos, N., & Zhou, H. (in press). Exploring undercurrents of learning tensions in an LLM-enhanced landscape: A student-centered qualitative perspective on LLM vs search. In Proceedings of the 2025 International Conference on Artificial Intelligence in Education (AIED 2025).
Divekar, R. R., Drozdal, J., Chabot, S., Zhou, Y., Su, H., Chen, Y., … Braasch, J. (2022). Foreign language acquisition via artificial intelligence and extended reality: Design and evaluation. Computer Assisted Language Learning, 35(9), 2332–2360.
Espartinez, A. S. (2024). Exploring student and teacher perceptions of ChatGPT use in higher education: A Q-methodology study. Computers and Education: Artificial Intelligence, 7, 100264.
Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning and Teaching, 6(1), 57–63. https://doi.org/10.37074/jalt.2023.6.1.22
Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., & Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Computers and Education: Artificial Intelligence, 6, 100209.
Ghosh, D., & Beigman Klebanov, B. (2024). Automatic evaluation of argumentative writing by young students.
Grassini, S., Aasen, M. L., & Møgelvang, A. (2024). Understanding university students’ acceptance of ChatGPT: Insights from the UTAUT2 model. Applied Artificial Intelligence, 38(1), 2371168.
Hurtienne, J., & Blessing, L. (2007). Metaphors as tools for intuitive interaction with technology. Metaphorik.de, 12(2), 21–52.
Islam, I., & Islam, M. N. (2024). Exploring the opportunities and challenges of ChatGPT in academia. Discover Education, 3(1). https://doi.org/10.1007/s44217-024-00114-w
Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023). Co-writing with opinionated language models affects users’ views. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–15.
Kirkpatrick, D., & Kirkpatrick, J. (2006). Evaluating training programs: The four levels. Berrett-Koehler Publishers.
Kiryakova, G., & Angelova, N. (2023). ChatGPT—A challenging tool for the university professors in their teaching practice. Education Sciences, 13(10), 1056. https://doi.org/10.3390/educsci13101056
Krathwohl, D. R. (2002). A revision of Bloom’s taxonomy: An overview. Theory Into Practice, 41(4), 212–218.
Mamo, Y., Crompton, H., Burke, D., & Nickel, C. (2024). Higher education faculty perceptions of ChatGPT and the influencing factors: A sentiment analysis of X. TechTrends, 68(3), 520–534.
Maxwell, J. A. (2010). Using numbers in qualitative research. Qualitative Inquiry, 16(6), 475–482.
Maziriri, E. T., Gapa, P., & Chuchu, T. (2020). Student perceptions towards the use of YouTube as an educational tool for learning and tutorials. International Journal of Instruction, 13(2), 119–138.
Melina, G., Panton, A. J., Pizzinelli, C., Rockall, E., & Tavares, M. M. (2024). Gen-AI: Artificial intelligence and the future of work.
Mollick, E. (2024). Co-intelligence. Random House UK.
Mousavinasab, E., Zarifsanaiey, N., Niakan Kalhori, S. R., Rakhshan, M., Keikha, L., & Ghazi Saeedi, M. (2021). Intelligent tutoring systems: A systematic review of characteristics, applications, and evaluation methods. Interactive Learning Environments, 29(1), 142–163.
Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning (Online), 18(17), 4.
Niloy, A. C., Bari, M. A., Sultana, J., Chowdhury, R., Raisa, F. M., Islam, A., et al. (2024). Why do students use ChatGPT? Answering through a triangulation approach. Computers and Education: Artificial Intelligence, 6, 100208.
Nordheim, C. B., Følstad, A., & Bjørkli, C. A. (2019). An initial model of trust in chatbots for customer service: Findings from a questionnaire study. Interacting with Computers, 31(3), 317–335.
Park, H., & Ahn, D. (2024). The promise and peril of ChatGPT in higher education: Opportunities, challenges, and design implications. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1–21.
Puentedura, R. R. (2013). SAMR: Getting to transformation. Retrieved May, 31, 265–283.
Rawte, V., Chakraborty, S., Pathak, A., Sarkar, A., Tonmoy, S., Chadha, A., … Das, A. (2023). The troubling emergence of hallucination in large language models: An extensive definition, quantification, and prescriptive remediations. arXiv preprint arXiv:2310.04988.
Romero-Rodríguez, J. M., Ramírez-Montoya, M. S., Buenestado-Fernández, M., & Lara-Lara, F. (2023). Use of ChatGPT at university as a tool for complex thinking: Students’ perceived usefulness. Journal of New Approaches in Educational Research, 12(2), 323–339. https://doi.org/10.7821/naer.2023.7.1458
Sallam, M., Elsayed, W., Al-Shorbagy, M., Barakat, M., Khatib, S. E., Ghach, W., … Malaeb, D. (2024). ChatGPT usage and attitudes are driven by perceptions of usefulness, ease of use, risks, and psycho-social impact: A study among university students in the UAE. Preprint, 1–17. Retrieved from https://www.researchsquare.com/article/rs-3905717/latest
Sanasintani, S. (2023). Revitalizing the higher education curriculum through an artificial intelligence approach: An overview. Journal of Social Science Utilizing Technology, 1(4), 239–248. https://doi.org/10.55849/jssut.v1i4.670
Singh, H., Tayarani-Najaran, M.-H., & Yaqoob, M. (2023). Exploring computer science students’ perception of ChatGPT in higher education: A descriptive and correlation study. Education Sciences, 13(9), 924.
Stöhr, C., Ou, A. W., & Malmström, H. (2024). Perceptions and usage of AI chatbots among students in higher education across genders, academic levels and fields of study. Computers and Education: Artificial Intelligence, 7, 100259.
Turmuzi, M., Suharta, I. G. P., Astawa, I. W. P., & Suparta, I. N. (2024). Perceptions of primary school teacher education students to the use of ChatGPT to support learning in the digital era. International Journal of Information and Education Technology, 14(5), 721–732.
Walker, F., Favetta, M., Hasker, L., & Walker, R. (2024). They prefer humans! Experimental measurement of student trust in ChatGPT. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–7.
Wu, J., Yang, S., Zhan, R., Yuan, Y., Chao, L. S., & Wong, D. F. (2025). A survey on LLM-generated text detection: Necessity, methods, and future directions. Computational Linguistics, 1–66.
Yan, D. (2023). Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Education and Information Technologies, 28(11), 13943–13967.