Article

Modeling Chinese Secondary School Students’ Behavioral Intentions to Learn Artificial Intelligence with the Theory of Planned Behavior and Self-Determination Theory

1 Department of Curriculum and Instruction, Centre for Learning Sciences and Technologies, The Chinese University of Hong Kong, Hong Kong, China
2 College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China
3 School of Education Information Technology, South China Normal University, Guangzhou 510631, China
4 Guangdong Engineering Technology Research Center of Smart Learning, Guangzhou 510631, China
5 Guangdong Provincial Institute of Elementary Education and Information Technology, Guangzhou 510631, China
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(1), 605; https://doi.org/10.3390/su15010605
Submission received: 24 November 2022 / Revised: 22 December 2022 / Accepted: 23 December 2022 / Published: 29 December 2022
(This article belongs to the Special Issue Digital Education for Sustaining Our Society)

Abstract

It has become essential for current learners to gain basic literacy and competencies in artificial intelligence (AI). While educators and education authorities are beginning to design AI curricula, empirical studies on students’ perceptions of learning AI remain rare. This study examined a research model that synthesizes the theory of planned behavior and self-determination theory to explain students’ behavioral intention to learn AI. The model depicts the interrelationships among AI knowledge, programming efficacy, autonomy, AI for social good, and learning resources. The participants were 509 secondary school students who completed a series of AI lessons and a survey. Factor analyses revealed that the proposed survey instrument possesses construct validity and good reliability. Further analysis supported that the design of learning resources, autonomy, and AI for social good predicted behavioral intention to learn AI. However, there were unexpected findings: AI knowledge failed to predict social good, and programming efficacy negatively influenced autonomy. The findings serve as a reference for the future development of AI education in schools, noting that the design of an AI curriculum should take students’ needs and satisfaction into account to facilitate their continuous development of AI competencies.

1. Introduction

Undergirded by big data, cloud computing, and machine learning, artificial intelligence (AI) has emerged as a pervasive form of technology that is driving the fourth industrial and, likely, the fourth educational revolution [1]. AI is a sub-discipline of computer engineering that refers to technology imitating intelligent human action in that it is able to perceive, reason, act, and interact with the environment [2,3]. Many education ministries and organizations have highlighted the need to educate current learners about AI [4,5,6]. Much attention has been devoted to the use of AI in higher education to address emerging issues such as privacy, equity, and risk [7]. Studies on K-12 AI curricula aimed at raising students’ AI knowledge, skills, and attitudes have been reported; however, research in this area is still at an early stage [8,9]. Zawacki-Richter et al.’s review [10] indicates that research on the factors associated with school AI education is very much lacking. While it seems clear that current K-12 students will join workplaces and tertiary institutions that are AI-infused and that they would benefit from being equipped with good AI competencies [9,11,12,13], how AI curricula should be designed needs to be verified empirically [12,13].
Current efforts to design AI school curricula, as expected in this early stage of research, necessarily involve much discussion on the curriculum content to be taught. In this aspect, there seems to be an emerging consensus among current teaching resources and curriculum guidelines [14,15,16,17] that an AI curriculum needs to cover the following:
(a) Basic knowledge and concepts (e.g., machine learning, neural networks, etc.);
(b) Computer perceptions through sensors (e.g., image and sound recognition);
(c) Data representation, modeling, and reasoning (e.g., prediction and suggestion);
(d) Human–computer interactions (e.g., natural language processing);
(e) Social–ethical issues (e.g., privacy issues).
However, how to design an effective school curriculum is unclear at this early stage. An effective curriculum should motivate all students to learn and to continue learning AI; behavioral intention (BI) to learn AI is therefore crucial for this new school curriculum. Several empirical studies based on the theory of planned behavior (TPB) have been conducted in mainland China and the Hong Kong Special Administrative Region to investigate whether an implemented AI curriculum fosters students’ intention to learn AI through precedent psychological factors such as AI knowledge, anxiety, relevance, confidence to learn AI, and AI for social good [8,9]. These studies have attested to the general efficacy of the existing AI curriculum in pre-post surveys and have identified some relevant predictors of students’ BI to learn AI. They can be further enriched by exploring other factors from a new perspective, namely the innate needs (i.e., autonomy, competence, and relatedness) posited by self-determination theory (SDT), a theory of motivation [18]. Autonomy (feeling self-governing) accounts for the formation and preservation of motivation, and together with the two other SDT needs, competence (feeling capable) and relatedness (feeling connected), it has been shown to predict learning intention [19,20,21]. As AI technology is constantly advancing, students need to keep updating their AI knowledge. Fostering students’ long-term commitment to learning AI is therefore essential in designing and delivering an AI school curriculum. This commitment to learning is explained by motivation and engagement fueled by needs satisfaction in SDT [19,20]. Accordingly, structuring the AI curriculum in ways that satisfy students’ needs is likely to be important in shaping students’ continuous intention to learn AI. Hence, TPB and SDT are used together in this study to explain students’ behavioral intention to learn AI.
In summary, AI education is relatively new to most educational researchers, policymakers, and practitioners, and how best to design and deliver an AI school curriculum that fosters students’ behavioral intention to learn continuously remains unclear. Accordingly, the present study aims to examine the factors that determine secondary school students’ behavioral intention to learn AI. To achieve this goal, the following section discusses the existing literature related to TPB and SDT, from which we propose 12 hypotheses and a research model of behavioral intention to learn AI. This study might contribute to the current understanding of the complex AI learning process through three intricately interwoven sets of factors: behavioral control, background, and attitudinal factors (i.e., AI knowledge and programming skills as the competency-oriented behavioral control factors for students to solve societal problems with AI; learning resources as the autonomy-supporting background factor; and social good as the relational/attitudinal factor for learning AI with consideration of its social impacts). The empirical findings for the model may provide insights into how to design an effective AI curriculum. The research questions were formulated as follows:
RQ1: Is the adapted survey valid and reliable?
RQ2: Are the 12 proposed hypotheses supported?

2. Literature Review

With the rapid development of AI, its applications not only permeate various aspects of people’s daily lives anytime and anywhere but also influence production fields such as financial risk control, medical and health big data with computer-aided diagnosis and treatment, face and vehicle recognition, and traffic planning [1]. In view of the significant changes that advancements in AI technologies are likely to bring, understanding learners’ responses to an AI curriculum will facilitate its further refinement and redevelopment. This is congruent with Ajzen’s [22] suggestion that understanding responses to emerging technologies could facilitate further efforts to help people employ them. To promote the positive effects of AI on human life, the popularization of AI education is becoming increasingly important [1]. Designing and implementing AI courses for primary and secondary schools can equip students with basic AI knowledge, which contributes to the upgrading and popularization of AI technology in the future. In addition, an AI curriculum should be designed in a manner that promotes learner well-being by satisfying their needs for autonomy, competence, and relatedness [18]. In secondary school curricula, students may have learned about programming algorithms (e.g., recursive and backtracking algorithms), natural language recognition algorithms (e.g., decision trees and logistic regression), and simultaneous localization and mapping (SLAM) algorithms [4,23]. SLAM refers to a technology for autonomously constructing a map of an unknown environment while positioning the robot on that map [24]. The algorithm is a fundamental mechanism for determining positions autonomously and flexibly, and it can be applied in different contexts, such as robots actively cooperating to localize each other or cooperating in a 3D mapping service [25]. Students could be expected to learn the SLAM algorithm in robotics.
The following review is structured to introduce the key psychological factors that may undergird a motivational design of the AI curriculum. The factors are contextualized in this study. The associated hypotheses supported by the TPB, SDT, and past studies are also presented.

2.1. Theory of Planned Behavior and AI Education

The TPB proposes that an individual’s actions are preceded by their behavioral intentions [26], which are delineated as “the information or beliefs people possess about the behavior under consideration” [27]. Ajzen [26] highlighted two main categories of beliefs that shape BI: subjective norms, which denote a person’s interpretation of the social norms for an action, and the personal evaluation of the possible consequences of the action under consideration, the latter also labeled attitude towards the behavior. People generally have stronger intentions to perform actions that are congruent with social norms, are within one’s control to perform, and are perceived as leading to favorable outcomes [28].
While these two categories of determinants exert different levels of influence on specific behavior, Ajzen [26] highlighted that the formation of subjective norms and attitudes is based on people’s exposure to the necessary information. When AI education is being introduced into the classroom, the implicit subjective norms would be that the school authorities believe that it is desirable for students to learn about AI [29]. The AI curriculum would equip students with AI knowledge that would be foundational to the formation of their BI to learn AI. In general, the TPB depicts BI as being influenced by socially oriented subjective norms and personally oriented attitudes, which are associated with individuals’ knowledge, skills, and abilities [26,27,30].
Ajzen [22,31] further grouped factors associated with knowledge, skills, and facilitative contextual conditions into a category labeled perceived behavioral control, which denotes people’s “perceived ease and difficulty of performing the behavior” [31]. This categorical factor was added to account for whether the behavior is actionable given the individual’s competencies and the environment’s facilitating conditions. Logically, when individuals feel that they are able to perform an action and that supporting conditions exist, they are likely to have a stronger intention to perform it. Conceptually, the internal factor of knowledge and skills is closely associated with self-efficacy, while the external environment is sometimes measured as facilitating conditions [22]. In the context of learning, available learning resources (e.g., videos, text, diagrams, and mobile e-learning courses about AI) that meet students’ learning needs would be the environmental conditions that facilitate the acquisition of knowledge and skills pertaining to AI; such learning resources would strengthen students’ BI to learn AI (see Figure 1, H1–H3) [32]. In the context of learning AI, applications such as machine translation, programming environments (e.g., Python), and programmable robots are instrumental to students’ learning; they are the tools for understanding and creating intelligent machines and affect how learners perceive their ability to learn.
In addition, perceived behavioral control necessarily involves people assessing whether they have the knowledge and skills required to perform the actions. Hence, students’ perceptions of their knowledge of AI and of their programming skills for creating intelligent machines should be considered predictors of their future intention to learn AI. Past research has identified AI knowledge (AIK), which assesses students’ perceptions of their understanding of AI and their general ability to use it in daily life [8]. AIK has been regarded as an indirect factor in students’ intention to learn AI, but their programming efficacy (PE), defined as the perceived ease or difficulty of applying technical programming skills [8], has not been considered [8,29]. Studies have indicated that learning AI could foster programming skills [33,34]. Since AI applications provide the context in which programming skills are put to specific purposes, and foundational concepts such as the various types of machine learning provide the orienting framework for students to write programs, we hypothesize that AIK is positively associated with students’ PE (H4). Moreover, AIK can inform students about the problems AI can address, and past studies have shown that it predicts students’ beliefs about using AI for social good and/or the perceived usefulness of AI as attitudinal factors (H5) [8,29]. AI for social good is described as the design, development, and deployment of sustainable AI systems that reduce adverse impacts on the well-being of human life [3]. In summary, AIK and PE account for the knowledge and skill components of perceived behavioral control, and logically, both should predict students’ BI. Previous studies, however, found no direct positive association between AIK and BI [8,29].
Hence, only H9 (PE predicts BI), which has not been previously tested, was formulated. Chai et al. [8,29] explained that the lack of a direct association between AIK and BI indicates that students need to understand what AI can be used for before they can form the intention to learn it. Previous studies have hence highlighted the importance of attitude towards employing AI for social good as an indicator, providing the grounds for formulating H5 (AIK predicts social good) and H11 (social good predicts BI).
The notion of perceived usefulness could also be interpreted from a personal development perspective instead of a social perspective. Hence, this study considers individuals’ innate needs in investigating students’ behavioral intention to learn AI by introducing the factor of autonomy. Autonomy is derived from Standage et al. [35] and refers to students’ perceptions of how learning AI can assist them in becoming autonomous, which underpins the formation and retention of motivation. Behavioral intention to learn AI is conceptualized as the information or beliefs people possess about the behavior of learning AI under consideration [8]. When individuals understand that learning AI is useful for their personal development and autonomy, their attitude toward learning AI will be strengthened. Such understanding is inevitably derived from acquiring knowledge about AI (i.e., AIK). Based on the above reasoning, this study contributes by exploring the relationship between AIK and autonomy, proposing the hypothesis that AIK is positively associated with autonomy (H6). In the current context, many students have already experienced the positive effects of technologies that support their autonomy in learning (e.g., [36]); AI technologies such as chatbots and online translation are cases in point. Yuan et al. [37] indicated that the benefit of AI chatbots lies in promoting students’ self-directed learning without an instructor in mobile interactive courses. As perceived usefulness is positively associated with students’ BI [29], it is hypothesized that autonomy, as a form of perceived usefulness, is positively associated with BI (H12).
For the learning of AI to arrive at the stage where students can design applications that benefit others, students need to acquire competencies in programming. Programming competence comprises several sub-categories of skills, including coding algorithms, logical thinking, and debugging [38]. When students believe that they are able to program AI applications, they would logically be more inclined to believe that they can lead an autonomous life in the AI age (H7), have the potential to promote social good with AI (H8), and can continue to learn more about AI (H9) [8].
Lastly, using AI for social good could denote an alternative form of subjective norms, since it is an accepted norm that education should enable one to do good in society [39,40]. In addition, the emerging concern of using AI for social good is widely shared and strongly emphasized by AI and computer science educators [8,41,42]. Since both the education fraternity and the AI discipline expect the learning of AI to contribute to social good, AI for social good (SG) can be understood as a form of social norm that could contribute to students’ BI (H11). Past studies have indicated that social good is positively associated with BI [8,29].

2.2. Self-Determination Theory (SDT) and AI Education

Students’ behavioral intention is strongly associated with motivation, which can be explained by SDT. SDT is an organismic theory that recognizes that the inherent motivation all humans possess is oriented toward personal growth and well-being through learning, mastery, and connection with others [18]. To achieve growth and a strong sense of well-being, individuals need to be supported in their autonomy to choose, in building competence towards success, and in relatedness with others; these are universal and basic psychological needs for all humans [18]. Research has shown that the satisfaction of these needs promotes well-being across many different contexts [43]. For example, satisfaction of autonomy, competence, and relatedness in digital environments has been shown to account for school students’ behavioral engagement, conceptualized as BI to learn online, and the learning resources provided have been suggested to relate to these three needs [19,20]. Roca and Gagne [44] reported that perceived competence, autonomy, and relatedness were positively associated with perceived usefulness, which in turn was positively associated with students’ continuous intention to engage in e-learning. Similarly, Nikou and Economides [45] combined SDT with the technology acceptance model, which is derived from the TPB, and found that competence, autonomy, and relatedness could predict users’ BI to use mobile-based assessment for tests. Given these general findings, individuals are likely to be motivated to grow through learning when the three basic needs are supported in the educational context, which can be interpreted as possessing strong BI to learn, as intention is an indication of motivation [46].
However, using SDT to explain behavioral intention to learn has apparently been less studied. Wang et al. [47] investigated information system developers’ intentions to learn business skills based on intrinsic and extrinsic motivation undergirded by Ryan and Deci’s [46] SDT. They reported that supportive work conditions and learning self-efficacy could shape the system developers’ intention to learn. Moreover, learning resources represent a manifestation of supportive work conditions, and it is reasonable to infer from this study that learning resources should reinforce programming efficacy (H2). These studies provide general support for the hypotheses in Figure 1. In particular, Wang et al.’s study [47] on the influences of work conditions provides support for H1–H3.
Competence in contemporary technology-driven society usually implies having the knowledge and technical skills (i.e., AIK and PE) to perform a particular technology-based task. For example, with regard to the use of the internet, scholars have distinguished knowledge about information from internet skills [48]. Kim and Yang’s [48] regression analysis indicated that information literacy, but not internet skills, was a significant predictor of students’ interest in political affairs and social issues. This may be considered as supporting H5 and H6. Moreover, knowledge about media predicts teenagers’ critical thinking when they peruse social media [49], which may parallel AIK predicting programming efficacy, which necessarily involves complex computational thinking (i.e., H4). In addition, path analyses among teachers indicated that their self-efficacy predicted their work engagement, autonomy, and job satisfaction [50]. A similar relationship could be hypothesized between students’ programming efficacy and their autonomy and BI to learn (H7 and H9). As programming intelligent machines is the means for students to achieve social good in the context of this study (see Section 3), this study hypothesized that programming efficacy would predict students’ sense of relating well to others in society (H8). AI projects, such as creating a mobile application that helps visually impaired people use their smartphone camera to read price tags and product descriptions in a supermarket, are concrete pedagogical examples. Nonetheless, the interrelationships between the competence-associated variables and autonomy and relatedness need to be further investigated in the context of AI education. A previous study indicated that elementary school students’ efficacy in learning AI was positively associated with their beliefs about using AI to achieve social good [8]. In addition, self-efficacy has been found to predict social/emotional engagement [51,52].
This provides general support that PE could predict students’ commitment to using AI for social good (H8).
Autonomy has been highlighted as crucial to an individual’s learning engagement and wellness [18]. In most SDT studies, autonomy is studied in comparison with control [46]. While autonomy is undergirded by intrinsic motivation, control is about the use of rewards to influence motivation and action, and there is strong evidence that control undermines autonomy and intrinsic motivation. More importantly, the need for autonomy is inherent, and when it is obstructed, a negative impact on motivation can be expected [46]. Hence, whether one perceives a given condition as promoting or compromising one’s autonomy will directly influence one’s motivation. Lai’s review [36] indicates how technology supports learner autonomy in formal and informal language learning. While AI technologies can be designed to take over mundane work, current suggestions conceptualize AI as supporting human autonomy (e.g., Toyoda et al. [53]). As proposed by SDT, realizing that AI technologies can support one’s autonomy should promote one’s motivation to learn about AI (i.e., H12).
Relatedness is often discussed in SDT studies in terms of a cohesive and supportive classroom social environment [54]. However, Autin et al. [55] argued that within the SDT framework, relatedness is partly constituted by the need to contribute to society. In other words, relatedness is not always just about how one feels about others’ relationships with oneself; it can also be about how one wants to relate to others. Autin et al. [55] validated an instrument that includes the need to contribute as a subscale of relatedness. In addition, a longitudinal SDT-based study has indicated that social well-being is more likely to contribute to subjective well-being than the other way around; social well-being is formed by having positive relationships with society, including contributing to the common good [56]. These studies provide support for the proposition that students’ inclination to use AI for SG could contribute to their sense of well-being, which may include feeling empowered by AI technologies to be more autonomous and being willing to learn more about AI (i.e., H10 and H11). Students’ SG has been reported as being positively associated with their BI to learn AI [8,29], but its relationship with autonomy has not been reported.

2.3. This Study

This study proposed the 12 hypotheses stated in the review, forming the research model shown in Figure 1. In the model, learning resources are the background contextual factor; AIK and programming efficacy are the perceived behavioral control factors; and social good and autonomy are the attitudinal factors. To test the model, we first examined the reliability and validity of the proposed instruments discussed in Section 3 and then used structural equation modeling to test all the proposed hypotheses.

3. Method

3.1. Curriculum and Participants

This study adopted a purposive sampling strategy to invite secondary school students who were learning about AI. Considering the significance of participants’ prior knowledge and experience in shaping their perceptions, secondary school students who had acquired knowledge of AI were selected so as to better explore their behavioral intentions to learn AI [57]. The invited students were from schools located in Shandong, Zhejiang, and Shanghai and had been learning the AI curriculum for at least one school semester. The AI curricula in these schools were implemented according to the same AI curriculum standards, and the students were instructed by teachers who were trained by the Department of Education in China to meet the teaching requirements for conducting AI education. The curriculum involved teaching foundational concepts about AI through lectures with everyday examples (e.g., the definition of AI, the various types of machine learning, and the ethical issues related to AI applications) and providing relevant hands-on experiences with various AI applications (e.g., facial recognition). The students also learned about programming algorithms (e.g., the recursive and backtracking algorithms) and natural language recognition algorithms (e.g., decision trees and logistic regression).
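As an illustration of the level of programming involved, a recursive backtracking search of the kind referred to above can be sketched in a few lines of Python. This sketch is ours, not taken from the curriculum materials; it checks whether a route exists through a grid in which some cells are blocked, in the spirit of the roadblock-avoiding exercises.

```python
# Illustrative sketch (not from the actual curriculum): a recursive
# backtracking search of the kind secondary students might study.
# It determines whether a path exists from the top-left to the
# bottom-right of a grid, where 0 marks a free cell and 1 a roadblock.

def has_path(grid, row=0, col=0, visited=None):
    if visited is None:
        visited = set()
    rows, cols = len(grid), len(grid[0])
    # Out of bounds: this branch fails, so backtrack.
    if not (0 <= row < rows and 0 <= col < cols):
        return False
    # Blocked cell or one already explored: backtrack.
    if grid[row][col] == 1 or (row, col) in visited:
        return False
    # Reached the goal cell: a path exists.
    if (row, col) == (rows - 1, cols - 1):
        return True
    visited.add((row, col))
    # Recursively try the four neighboring cells.
    return (has_path(grid, row + 1, col, visited)
            or has_path(grid, row - 1, col, visited)
            or has_path(grid, row, col + 1, visited)
            or has_path(grid, row, col - 1, visited))
```

The same try-fail-backtrack pattern underlies the recursive and backtracking algorithms named in the curriculum, here applied to a toy navigation problem.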
In addition, students were introduced to the engineering design process in the context of a conceptual design project (building a machine to cook stewed pork). The design process involved understanding the problem, designing solutions, creating a prototype, and testing and evaluating the prototype. Designing a robotic car to execute fire-extinguishing actions was the culminating project of the AI curriculum. The students programmed the robotic car to autonomously survey a room and extinguish any fire detected. To complete the project, the students were provided with ultrasonic and infrared sensors, and tutors with engineering backgrounds were engaged to coach them. The time allocated for the curriculum was 40 h. The students were guided to think about suitable problem-solving strategies when facing specific tasks and to realize specific functions by combining existing algorithm modules. Specifically, students learned the SLAM algorithm, which robots use to recognize the optimal path independently in an unfamiliar environment (e.g., roadblock-avoiding AI autonomous cars). The project highlighted how intelligent machines promote social good by using intelligent firefighting robots to save human lives.
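The core of such a culminating project can be pictured, in highly simplified form, as a sense-act loop. The following sketch is purely illustrative: the sensor names, thresholds, and actions are our assumptions, not the project's actual implementation.

```python
# Illustrative sketch only: the decision logic of a fire-extinguishing
# robot reduced to a pure function. Sensor names and thresholds are
# hypothetical, not taken from the project described in the text.

IR_FIRE_THRESHOLD = 0.7   # infrared reading above this suggests a fire
ULTRASONIC_MIN_CM = 20    # minimum safe distance to an obstacle

def decide_action(infrared, distance_cm):
    """Map one pair of sensor readings to the robot's next action."""
    if infrared >= IR_FIRE_THRESHOLD and distance_cm <= ULTRASONIC_MIN_CM * 2:
        return "extinguish"   # fire detected close by
    if distance_cm <= ULTRASONIC_MIN_CM:
        return "turn"         # obstacle ahead: avoid it
    return "advance"          # keep surveying the room
```

In the classroom, the pedagogical value lies in students composing such decision rules with the path-finding modules they have learned, rather than in the rules themselves.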
After the completion of the curriculum, the students were invited to complete the survey. The invitation was conveyed through the students’ teachers, and participation was voluntary. Finally, 509 students completed the survey, of whom 190 (37.3%) were female. The distribution of their grade levels was 171 in G7 (33.6%), 208 in G8 (40.9%), and 130 in G9 (25.5%). The mean age was 14.24 years (SD = 1.24).

3.2. Instruments

The survey instruments were created by adopting and adapting items and factors from past research. The eight chosen factors were learning resources, autonomy, AI knowledge, AI for social good, behavioral intention to learn AI, logical thinking, algorithm, and debugging. The items were scored on a 6-point Likert scale from 1 (strongly disagree) to 6 (strongly agree). Most of the items were adopted from Chai et al. [29] and Tsai et al. [38] and had been tested in studies with participants of similar backgrounds. The former study validated the AI-related factors (AIK, SG, and BI), and the latter validated the programming efficacy items denoted by Log, ALG, and DB. Moreover, autonomy (AU) had been tested in studies with Chinese secondary school students [19,20]. Accordingly, all the instruments were expected to be suitable for our participants.
Demographic data were collected in the first part of the survey, including gender, age, grade level, school, and class. The information on school and class was collected to collate the data for the compilation of class reports for the teachers who supported this study. The second part solicited responses from the students on the following factors.
Learning resources (LR) (3 items) denote the AI learning resources that the school and teachers provide to facilitate students’ learning about AI. It was adapted from the assessment-tools-for-teaching subscale of Lin et al.’s [32] mobile learning survey. A sample item is “The web-based AI resources can improve my learning efficiency”. The reported alpha reliability was 0.88.
Autonomy (AU) (4 items) assesses students’ perceptions of how learning about AI can help them attain a higher capacity to be autonomous. The items were adapted from Standage et al.’s [35] previous work conducted with British children, which had acceptable internal reliability (α = 0.80), and were modified to fit this study. A sample item is “Learning AI helps me feel that I can decide whom I want to be”.
AI knowledge (AIK) (6 items) measures students’ self-rated understanding of AI technologies, that is, knowledge of AI. The items were adopted from Chai et al. [29]. A sample item is “I know how AI can be used to predict possible outcomes through statistics”. The reported alpha reliability was 0.90.
AI for social good (SG) (5 items) assesses students’ beliefs in using AI knowledge to improve human lives. The items were adopted from Chai et al. [8]. The alpha reliability was reported to be 0.92. One sample item is “I wish to use my AI knowledge to serve others”.
Behavioral intention to learn AI (BI) (4 items) measures students’ intention to continue to learn about AI. The items were adopted from Chai et al. [29]. The reported alpha reliability was 0.91. A sample item is “I will continue to learn AI technology in the future”.
Programming efficacy, which focuses on technical programming skills, is distinct from AIK. Its items were adopted from the Computer Programming Self-Efficacy Scale developed by Tsai et al. [38], which was validated with a group of college students. Three sub-scales representing core programming skills were chosen for this study: logical thinking, algorithm, and debugging. The two sub-scales not selected were control and cooperation. The control sub-scale concerns opening, saving, and running scripts in the program editor, a lower-order skill that can be subsumed under the three selected sub-scales. The cooperation sub-scale refers to the social division of computing tasks, which is a good social practice but not a must-have. We therefore assessed both sub-scales as non-essential and, to focus on core skills and avoid an overly lengthy survey, excluded them.
Logical thinking (Log) (4 items) measures students’ self-assessments of their capability to use logical conditions to write computer programs. A sample item is “I can understand a conditional expression such as ‘if…else…’”. The reported alpha reliability was 0.96.
Algorithm (ALG) (4 items) assesses students’ perceived ability to independently produce an algorithm to solve a programming problem. A sample item is “I can make use of programming to solve a problem”. The reported alpha reliability was 0.92.
Debugging (DB) (3 items) measures students’ perceived capability to rectify errors in their written programs. A sample item is “I can find the origin of an error while testing a program”. The reported alpha reliability was 0.94.
The compiled survey was reviewed by three professors with expertise in educational technology and subsequently by two teachers from the participating schools. This step ascertained the face validity of the survey.

3.3. Data Analysis

The data analyses were conducted in three phases. First, 165 participants were randomly chosen for exploratory factor analysis (EFA) using SPSS 25, based on a sampling ratio of five participants per item. Principal axis factoring with direct oblimin rotation was applied to extract the factors. Items with cross-loadings or factor loadings below 0.5 were omitted. Alpha reliabilities were computed for all factors, and subsequent analyses established convergent and divergent validity (i.e., computation of average variance extracted, composite reliabilities, correlations, and discriminant indexes). Second, the remaining sample (n = 344) was subjected to confirmatory factor analysis. Finally, the total sample (N = 509) was used to test the hypotheses with structural equation modeling (SEM) using AMOS 25. SEM is widely used in educational research, as it can examine predictive or causal relationships among observed and latent variables [58].
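For readers who wish to reproduce the reliability step outside SPSS, the alpha computation can be sketched in a few lines. This is an illustrative sketch with simulated Likert responses; the data, the random seed, and the variable names are ours, not the study’s.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Simulate 165 respondents answering 4 correlated 6-point Likert items:
# a shared latent trait plus item-level noise, clipped to the 1-6 range.
trait = rng.normal(0.0, 1.0, (165, 1))
scores = np.clip(np.round(3.5 + trait + rng.normal(0.0, 0.7, (165, 4))), 1, 6)
print(round(cronbach_alpha(scores), 2))
```

Because the simulated items share a common trait, the resulting alpha is high, mirroring the survey’s reported reliabilities; with real data, `scores` would simply be the respondents-by-items response matrix.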
In this stage, programming efficacy was inserted as a second-order factor subsuming Log, DB, and ALG. This arrangement was necessary to provide a comprehensible and parsimonious model for examining how programming efficacy related to the other factors. Programming is a complex skill set [38], represented here by the validated subscales we selected. However, formulating hypotheses that linked individual subscales to the other factors (i.e., AU, SG, and BI) would have been unsound; for example, hypotheses such as “debugging predicts BI” or “logical thinking predicts social good” seem theoretically strange. Hence, programming efficacy was treated as a higher-order factor, a decision that was supported statistically (see the Results).
To verify that the data were sufficiently normally distributed for the statistical analyses, the ranges of skewness and kurtosis were examined; they were −0.520 to 0.330 and −1.068 to 0.13, respectively. Mardia’s coefficient was used to assess multivariate normality. The multivariate coefficient value was 73.957, which was less than the maximum of p(p + 2) = 32 × 34 = 1088 recommended by Raykov and Marcoulides [59], where p = 32 is the number of items used for the confirmatory factor analysis (CFA). These examinations showed that the data satisfied the requirements for further statistical analyses.
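The univariate checks and the p(p + 2) benchmark can be illustrated as follows. This is a minimal sketch on simulated data using one common formulation of Mardia’s multivariate kurtosis (the mean of squared Mahalanobis distances); AMOS may scale its reported coefficient differently, so the value here is not directly comparable to the 73.957 reported above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, (509, 32))  # simulated stand-in for the 32 items

# Univariate normality screen: skewness and excess kurtosis per item
# (both are 0 for a perfectly normal distribution).
skew = stats.skew(data, axis=0)
kurt = stats.kurtosis(data, axis=0)

# Mardia-style multivariate kurtosis: mean of squared Mahalanobis distances.
centered = data - data.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(data, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", centered, inv_cov, centered)
mardia_kurtosis = (d2 ** 2).mean()

p = data.shape[1]
threshold = p * (p + 2)  # 32 * 34 = 1088, the maximum cited in the text
print(round(mardia_kurtosis, 1), threshold)
```

For multivariate normal data the expected value of this statistic is exactly p(p + 2), so values well below the threshold, as in the study, raise no concern.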

4. Results

4.1. Exploratory Factor Analysis

A total of 165 participants were randomly selected for the exploratory factor analysis (EFA) employing principal axis factoring with direct oblimin rotation. The Kaiser–Meyer–Olkin value was 0.814, and Bartlett’s test of sphericity was significant (p < 0.001); these indexes indicated that the sample was adequate for EFA. The analysis yielded eight factors with eigenvalues greater than one, explaining 72.9% of the total variance. One item from autonomy was deleted because of an insufficient factor loading; the other 32 items were retained. The overall alpha reliability was 0.88. Table 1 provides the means, standard deviations, factor loadings, alpha reliabilities, average variance extracted (AVE), and composite reliabilities (CR).
Overall, the EFA results attest that the survey was valid and reliable. In addition, the CR of every assessed factor was greater than 0.70, and each AVE was above 0.50; the convergent validity of the survey was thus acceptable (Hair et al. [60]). Discriminant indexes (see Table 2) were computed based on the AVEs.

4.2. Correlations and Discriminant Indexes

Pearson correlation coefficients were calculated to investigate the relationships among the eight factors. As shown in Table 2, most factors were significantly and positively correlated, and the discriminant indexes show that the survey possesses discriminant validity.
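The discriminant check follows the Fornell–Larcker convention: the square root of each factor’s AVE (the diagonal of Table 2) should exceed that factor’s correlations with every other factor. A minimal sketch, using hypothetical two-factor values rather than the full Table 2 matrix:

```python
import numpy as np

def fornell_larcker_ok(corr, aves) -> bool:
    """True if sqrt(AVE) of each factor exceeds its absolute
    correlations with all other factors (Fornell-Larcker criterion)."""
    diag = np.sqrt(np.asarray(aves, dtype=float))
    corr = np.asarray(corr, dtype=float)
    off = corr - np.diag(np.diag(corr))  # zero out the diagonal
    return all(diag[i] > np.abs(off[i]).max() for i in range(len(diag)))

# Hypothetical two-factor example: AVEs of 0.61 and 0.59,
# inter-factor correlation of 0.42.
corr = [[1.00, 0.42],
        [0.42, 1.00]]
print(fornell_larcker_ok(corr, [0.61, 0.59]))  # → True
```

Applying the same function to the full eight-factor correlation matrix and AVE vector from Tables 1 and 2 would reproduce the paper’s discriminant validity conclusion.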

4.3. Confirmatory Factor Analysis of the Measurement Model

The CFA (n = 344) provided further confirmation of the construct validity of the measurement model. The model obtained fit indexes demonstrating good fit: χ2/df = 1.217 (<3.0), p < 0.001, RMSEA = 0.025 (<0.08), GFI = 0.91 (>0.90), TLI = 0.98 (>0.90), and CFI = 0.98 (>0.90) (Hair et al. [60]). The standardized estimates for the items ranged from 0.63 to 0.85, and all t-values were significant (p < 0.001). The survey instrument can thus be considered to have good construct validity based on the CFA results.

4.4. SEM for Hypothesis Testing

The 12 hypotheses were tested through SEM. The model fit indexes were χ2/df = 1.47 (<3.0), p < 0.001, GFI = 0.93 (>0.90), TLI = 0.97 (>0.90), CFI = 0.97 (>0.90), and RMSEA = 0.030 (<0.07), indicating a good fit (Hair et al. [60]). As reflected in Table 3, 10 of the 12 hypotheses were supported. To control for Type I error across the multiple tests, the cut-off p-value was Bonferroni-corrected to 0.004 (0.05 divided by 12). The three hypotheses involving learning resources (LR) were supported: LR positively predicted AIK, PE, and BI. AIK was positively associated with PE and AU, and PE was positively associated with SG and BI. Social good (SG) was positively associated with AU and BI, and AU was positively associated with BI. The hypothesized model was thus largely supported. However, two unexpected results emerged: AIK did not significantly predict SG, and PE was negatively associated with AU.
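The Bonferroni-style cut-off used above is simple arithmetic, shown here for completeness:

```python
# Bonferroni-adjusted significance threshold for a family of 12 hypothesis tests.
alpha = 0.05
n_tests = 12
cutoff = alpha / n_tests
print(round(cutoff, 3))  # → 0.004, the cut-off applied in the text
```

Under this stricter threshold, H3 (p = 0.001) remains significant, while H7 (p = 0.010) does not.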

5. Discussion

Given the lack of research on AI education in primary and secondary school classrooms [10], this study identified eight factors undergirded by the TPB and SDT to hypothesize how students’ behavioral intentions to learn AI could be formed. The factors were validated with good reliabilities; hence, RQ1 was answered adequately. In addition, 10 of the 12 hypotheses in the research model were supported, findings generally congruent with past studies undergirded by the TPB [8,29]. In response to RQ2, apart from two unexpected findings, it can be concluded that the background contextual factor (i.e., LR) significantly predicted the perceived behavioral control factors (i.e., AIK and PE), and that the attitudinal factors (i.e., SG and AU) predicted students’ behavioral intention to learn AI.
This study revealed the extent to which the background context (i.e., LR) predicted the behavioral control factors (i.e., AIK and PE) and students’ behavioral intention to learn AI. Students’ learning of AI depends on teachers’ competence in teaching it, an identified challenge because most teachers have not received extensive, formal AI training and may lack both the disciplinary and the pedagogical knowledge [12]. As such, providing good learning resources as the background contextual factor is deemed important. This study identified the background contextual factor (i.e., learning resources) as being positively associated with the behavioral control factors (i.e., AI knowledge and programming efficacy) and with students’ intention to learn AI (see Figure 2, H1–H3). Compared to traditional disciplines such as mathematics and programming, AI is emerging and comprises many abstract concepts, such as cloud computing and machine learning [9,12,16,61]. The findings suggest that learning resources should be designed to provide AI-mediated experiences so that students can grasp these concepts concretely. Such resources can support students’ knowledge transformation by offering multiple sources that guide them to engage deeply with the content rather than merely copying information [62]. Examples include using diagrams to show how cloud computing works, using animated analogies to explain how machine learning processes data, exploring existing AI-based applications to learn how AI works, and developing a chatbot to learn how AI processes textual data. Fortunately, such applications and platforms are increasingly available. Our findings indicate that teachers and curriculum designers should choose available applications, such as Google Translate and Teachable Machine, to foster students’ intention to learn about AI.
The relationship between the behavioral control factors (i.e., AIK and PE) and the attitudinal factors (i.e., SG and AU) was also evident. This study added programming efficacy, comprising the logical thinking, algorithm, and debugging subscales [38]; it hence differs from previous studies [8,29] in examining how programming efficacy relates to the intention to learn AI. Those previous studies assessed students’ self-efficacy in learning AI as the factor representing control beliefs within the TPB. While both forms of efficacy are positively associated with students’ behavioral intention (H9), programming efficacy appeared to change some relationships. For example, replacing learning efficacy with programming efficacy seems to diminish the regression weight of AI knowledge with respect to social good: whereas previous studies affirmed that AI knowledge was positively associated with social good [8,29], this relationship was not supported here (H5). Instead, AI knowledge was positively associated with programming efficacy (H4), which in turn was positively associated with social good (H8). This finding indicates that students may regard programming efficacy as the means to achieve social good. The inclusion of programming efficacy thus further illuminates the importance of incorporating programming into AI education for secondary school students, as it is positively associated with social good and with students’ intention to learn. Another factor that could influence students’ intention to learn AI is autonomy, as depicted in SDT [18].
The findings also indicated a relationship between the attitudinal factors (i.e., SG and AU) and students’ behavioral intention to learn AI. As behavioral intention accounts significantly for actual behavior [27], this study provides educators with empirical evidence supporting the promotion of students’ ownership in the design of the AI curriculum. Nonetheless, as both the TPB and SDT are grand theories that afford many contextualized interpretations, other equally important or theoretically more meaningful factors remain to be investigated; this study may therefore form a basis for future investigations into the psychology of learning AI. Autonomy had not been previously investigated in this context. Here, it was treated both as a factor denoting a positive attitude toward behavior, undergirded by the TPB [31], and as a need to be satisfied, as postulated by SDT [46]. The attitudinal factor of autonomy evaluates students’ beliefs that learning AI has the positive consequence of satisfying one’s future need to be a person empowered with more choice and freedom. The findings indicate that students who hold such beliefs have a stronger intention to engage in AI learning (H12). Students’ autonomy was shaped by social good (H10), which is consistent with Joshanloo et al.’s finding [56]. Beliefs about using AI for social good appear to foster a stronger sense of autonomy, which could be interpreted as students’ sense of autonomy being partly constituted by their ability to help others; this implies that contributing to others [19,20,55], as a form of relatedness, fosters one’s autonomy. Lastly, the supported hypothesis H11 indicates that learning AI for social good enhances students’ continuing intention to learn AI.
The present findings have important implications for promoting students’ behavioral intentions to learn AI. Although the new factors (PE and AU) replaced learning self-efficacy and perceived usefulness in representing control beliefs and attitudes toward behavior, the findings still support the usefulness of the TPB in explaining students’ BI to learn AI. The study also supports understanding students’ intention to learn AI from the perspective of SDT. While the TPB explains the intention to act through three categories of antecedents (i.e., social norms, personal attitudes, and control beliefs) [22], SDT focuses on fostering motivation by supporting the needs for autonomy, competence, and relatedness [18]. These perspectives are complementary: one accounts for the phenomenon broadly from a social psychology standpoint, the other from an individualistic, organismic standpoint. Both are necessary in education, since educational institutions need to promote individual well-being while being socially responsive. Undergirded by the TPB and SDT, this study has identified a synthetic theoretical model for studying the motivational effects of an AI curriculum.
In addition, the study contributes to AI education by identifying relevant factors that could be employed to assess the effectiveness of an AI curriculum. These include AI knowledge and AI for social good, identified from emerging curriculum emphases for AI education [9,17,41], together with programming efficacy, comprising the logical thinking, algorithm, and debugging subscales [38]. Programming skills are a means of creating intelligent machines and actualizing social good, and this study identified programming efficacy as an important factor that significantly shapes students’ intention to learn AI. Our findings thus support equipping secondary school students with some programming skills before they learn AI.

6. Conclusions

In conclusion, AI has begun to change how we learn, live, and work, and its demands for related pedagogical responses from the education sector are loud and clear [1,11,12]. This study analyzed students’ perceptions associated with the learning of AI from the TPB and SDT perspectives and highlighted essential factors that shape students’ motivation to learn AI. The research model should help to inform K-12 educators and education researchers about how to account for the psychological framing of the AI curriculum.
This study was limited by its design in several ways. First, it was limited to secondary school students in various parts of China whose schools are highly supportive of AI education; future research should examine other regions of the world where the social norms and framing of AI education differ. Second, this study did not measure actual achievement in learning AI after course completion; future studies may construct AI literacy tests to explore students’ achievement. Third, while the factors in this study meet the generally accepted minimum of three items per factor, future studies may consider using more items per factor to ensure wider coverage of each factor’s theoretical domain. Finally, the programming efficacy scale was originally developed for college students, so it may be less suitable for secondary school students, who have less understanding of programming; future studies could develop a simpler, single-construct instrument.

Author Contributions

C.S.C.: conceptualization, investigation, software, data curation, writing—original draft, writing—review and editing. T.K.F.C.: methodology, data curation, writing—review. X.W.: data analysis, software, instruction. F.J.: data analysis, software, instruction. X.-F.L.: conceptualization, methodology, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China grant number 62007010.

Institutional Review Board Statement

All subjects gave their informed consent for inclusion before they participated in the study. The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee of the South China Normal University (IRB-SCNU-EIT-2021-016).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Not applicable.

Acknowledgments

The authors express their gratitude to the anonymous reviewers and editors for their helpful comments about this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Seldon, A.; Abidoye, O. The Fourth Education Revolution; Legend Press Ltd.: London, UK, 2018. [Google Scholar]
  2. Lin, P.-Y.; Chai, C.-S.; Jong, M.S.-Y.; Dai, Y.; Guo, Y.; Qin, J. Modeling the structural relationship among primary students’ motivation to learn artificial intelligence. Comput. Educ. Artif. Intell. 2021, 2, 100006. [Google Scholar] [CrossRef]
  3. Chai, C.S.; Teo, T.; Huang, F.; Chiu, T.K.F.; Xing, W. Secondary school students’ intentions to learn AI: Testing moderation effects of readiness, social good and optimism. Educ. Technol. Res. Dev. 2022, 70, 765–782. [Google Scholar] [CrossRef]
  4. European Commission. White Paper on Artificial Intelligence: A European Approach to Excellence and Trust. The Official Website of the European Union. Available online: https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en (accessed on 2 February 2020).
  5. Knox, J. Artificial intelligence and education in China. Learn. Media Technol. 2020, 45, 298–311. [Google Scholar] [CrossRef]
  6. OECD. Artificial Intelligence in Society; OECD Publishing: Paris, France, 2019. [Google Scholar] [CrossRef]
  7. Pelletier, K.; Brown, M.; Brooks, D.C.; McCormack, M.; Reeves, J.; Arbino, N.; Bozkurt, A.; Crawford, S.; Czerniewicz, L.; Gibson, R.; et al. 2021 Educause Horizon Report, Teaching and Learning Edition; Educause: Boulder, CO, USA, 2021. [Google Scholar]
  8. Chai, C.S.; Lin, P.-Y.; Jong, M.S.-Y.; Dai, Y.; Chiu, T.K.F.; Qin, J. Perceptions of and behavioral intentions towards learning artificial intelligence in primary school students. Educ. Technol. Soc. 2021, 24, 89–101. [Google Scholar]
  9. Chiu, T.K.F.; Meng, H.; Chai, C.S.; King, I.; Wong, S.; Yeung, Y. Creation and evaluation of a pre-tertiary Artificial Intelligence (AI) curriculum. IEEE Trans. Educ. 2022, 65, 30–39. [Google Scholar] [CrossRef]
  10. Zawacki-Richter, O.; Marín, V.I.; Bond, M.; Gouverneur, F. Systematic review of research on artificial intelligence applications in higher education-where are the educators? Int. J. Educ. Technol. High. Educ. 2019, 16, 1–27. [Google Scholar] [CrossRef] [Green Version]
  11. Aoun, J.E. Robot-Proof: Higher Education in the Age of Artificial Intelligence; MIT Press: Cambridge, MA, USA, 2017. [Google Scholar]
  12. Chiu, T.K.F. A holistic approach to Artificial Intelligence (AI) curriculum for K-12 schools. TechTrends 2021, 65, 796–807. [Google Scholar] [CrossRef]
  13. Pedró, F.; Subosa, M.; Rivas, A.; Valverde, P. Artificial Intelligence in Education: Challenges and Opportunities for Sustainable Development; UNESCO: Paris, France, 2019. [Google Scholar]
  14. Payne, B.H. AI + Ethics Curriculum for Middle School; MIT Media Lab: Cambridge, MA, USA, 2019. Available online: https://www.media.mit.edu/projects/ai-ethics-for-middle-school/overview (accessed on 1 January 2022).
  15. Qin, J.J.; Ma, F.G.; Guo, Y.M. Foundations of Artificial Intelligence for Primary School; Popular Science Press: Beijing, China, 2019. [Google Scholar]
  16. Tang, X.; Chen, Y. Fundamentals of Artificial Intelligence; East China Normal University: Shanghai, China, 2018. [Google Scholar]
  17. Touretzky, D.; Gardner-McCune, C.; Breazeal, C.; Martin, F.; Seehorn, D. A Year in K-12 AI Education. AI Mag. 2019, 40, 88–90. [Google Scholar] [CrossRef]
  18. Ryan, R.M.; Deci, E.L. Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemp. Educ. Psychol. 2020, 61, 1–11. [Google Scholar] [CrossRef]
  19. Chiu, T.K.F. Digital support for student engagement in blended Learning based on Self-determination Theory. Comput. Hum. Behav. 2021, 124, 106909. [Google Scholar] [CrossRef]
  20. Chiu, T.K.F. Applying the Self-determination Theory (SDT) to explain student engagement in online learning during the COVID-19 pandemic. J. Res. Technol. Educ. 2022, 54 (Suppl. S1), 14–30. [Google Scholar] [CrossRef]
  21. Thongmak, M. A model for enhancing employees’ lifelong learning intention online. Learn. Motiv. 2021, 75, 101733. [Google Scholar] [CrossRef]
  22. Ajzen, I. The theory of planned behavior: Frequently asked questions. Hum. Behav. Emerg. Technol. 2020, 2, 314–324. [Google Scholar] [CrossRef]
  23. Jiang, B.; Zhao, W.; Zhang, N.; Qiu, F. Programming trajectories analytics in block-based programming language learning. Interact. Learn. Environ. 2022, 30, 113–126. [Google Scholar] [CrossRef]
  24. de Souza Muñoz, M.E.; Chaves Menezes, M.; Pignaton de Freitas, E.; Cheng, S.; de Almeida Ribeiro, P.R.; de Almeida Neto, A.; Muniz de Oliveira, A.C. xRatSLAM: An Extensible RatSLAM Computational Framework. Sensors 2022, 22, 8305. [Google Scholar] [CrossRef] [PubMed]
  25. Pei, Z.; Piao, S.; Souidi, M.E.H.; Qadir, M.Z.; Li, G. SLAM for Humanoid Multi-Robot Active Cooperation Based on Relative Observation. Sustainability 2018, 10, 2946. [Google Scholar] [CrossRef] [Green Version]
  26. Ajzen, I. From Intentions to Actions: A Theory of Planned Behavior. In Action Control; Kuhl, J., Beckmann, J., Eds.; Springer: Berlin, Germany, 1985; pp. 11–39. [Google Scholar]
  27. Fishbein, M.; Ajzen, I. Predicting and Changing Behavior: The Reasoned Action Approach; Psychology Press: London, UK, 2010. [Google Scholar]
  28. Conner, M.; Armitage, C.J. Extending the theory of planned behavior: A review and avenues for further research. J. Appl. Soc. Psychol. 1998, 28, 1424–1429. [Google Scholar] [CrossRef]
  29. Chai, C.S.; Wang, X.; Xu, C. An extended theory of planned behavior for the modelling of Chinese secondary school students’ intention to learn Artificial Intelligence. Mathematics 2020, 8, 2089. [Google Scholar] [CrossRef]
  30. Rana, N.P.; Slade, E.; Kitching, S.; Dwivedi, Y.K. The IT way of loafing in class: Extending the theory of planned behavior (TPB) to understand students’ cyberslacking intentions. Comput. Hum. Behav. 2019, 101, 114–123. [Google Scholar] [CrossRef]
  31. Ajzen, I. Perceived behavioral control, self-efficacy, locus of control, and the theory of planned behavior. J. Appl. Soc. Psychol. 2002, 32, 665–683. [Google Scholar] [CrossRef]
  32. Lin, X.-F.; Deng, C.; Hu, Q.; Tsai, C.C. Chinese undergraduate students’ perceptions of mobile learning: Conceptions, learning profiles, and approaches. J. Comput. Assist. Learn. 2019, 35, 317–333. [Google Scholar] [CrossRef]
  33. Chen, C.; Haduong, P.; Brennan, K.; Sonnert, G.; Sadler, P. The effects of first programming language on college students’ computing attitude and achievement: A comparison of graphical and textual languages. Comput. Sci. Educ. 2019, 29, 23–48. [Google Scholar] [CrossRef]
  34. Martinez-Arellano, G.; Cant, R.; Woods, D. Creating AI characters for fighting games using genetic programming. IEEE Trans. Comput. Intell. AI Games 2016, 9, 423–434. [Google Scholar] [CrossRef] [Green Version]
  35. Standage, M.; Duda, J.L.; Ntoumanis, N. A test of self-determination theory in school physical education. Br. J. Educ. Psychol. 2005, 75, 411–433. [Google Scholar] [CrossRef] [Green Version]
  36. Lai, C. Technology and learner autonomy: An argument in favor of the nexus of formal and informal language learning. Annu. Rev. Appl. Linguist. 2019, 39, 52–58. [Google Scholar] [CrossRef]
  37. Yuan, C.C.; Li, C.H.; Peng, C.C. Development of mobile interactive courses based on an artificial intelligence Chatbot on the communication software LINE. Interact. Learn. Environ. 2021, 1–15. [Google Scholar] [CrossRef]
  38. Tsai, M.J.; Wang, C.Y.; Hsu, P.F. Developing the computer programming self-efficacy scale for computer literacy education. J. Educ. Comput. Res. 2019, 56, 1345–1360. [Google Scholar] [CrossRef]
  39. Duncan, C.; Sankey, D. Two conflicting visions of education and their consilience. Educ. Philos. Theory 2019, 51, 1454–1464. [Google Scholar] [CrossRef]
  40. White, J. The Aims of Education Restated; Routledge: London, UK, 2010. [Google Scholar]
  41. Floridi, L.; Cowls, J.; Beltrametti, M.; Chatila, R.; Chazerand, P.; Dignum, V.; Luetge, C.; Madelin, R.; Pagallo, U.; Rossi, F.; et al. AI4People—An Ethical framework for a Good AI Society: Opportunities, risks, principles, and recommendations. Minds Mach. 2018, 28, 689–707. [Google Scholar] [CrossRef] [Green Version]
  42. Goldweber, M.; Davoli, R.; Little, J.C.; Riedesel, C.; Walker, H.; Cross, G.; Von Konsky, B.R. Enhancing the social issues components in our computing curriculum: Computing for the social good. ACM Inroads 2011, 2, 64–82. [Google Scholar] [CrossRef]
  43. Cerasoli, C.P.; Nicklin, J.M.; Nassrelgrgawi, A.S. Performance, incentives, and needs for autonomy, competence, and relatedness: A meta-analysis. Motiv. Emot. 2016, 40, 781–813. [Google Scholar] [CrossRef]
  44. Roca, J.C.; Gagné, M. Understanding e-learning continuance intention in the workplace: A self-determination theory perspective. Comput. Hum. Behav. 2008, 24, 1585–1604. [Google Scholar] [CrossRef]
  45. Nikou, S.A.; Economides, A.A. Mobile-based assessment: Integrating acceptance and motivational factors into a combined model of self-determination theory and technology acceptance. Comput. Hum. Behav. 2017, 68, 83–95. [Google Scholar] [CrossRef]
  46. Ryan, R.M.; Deci, E.L. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 2000, 55, 68–78. [Google Scholar] [CrossRef] [PubMed]
  47. Wang, Y.Y.; Lin, T.C.; Tsay, C.H.H. Encouraging IS developers to learn business skills: An examination of the MARS model. Inf. Technol. People 2016, 29, 381–418. [Google Scholar] [CrossRef] [Green Version]
  48. Kim, E.; Yang, S. Internet literacy and digital natives’ civic engagement: Internet skill literacy or Internet information literacy? J. Youth Stud. 2016, 19, 438–456. [Google Scholar] [CrossRef]
  49. Ku, K.Y.L.; Kong, Q.; Song, Y.; Deng, L.; Kang, Y.; Hu, A. What predicts adolescents’ critical thinking about real-life news? The roles of social media news consumption and news media literacy. Think. Ski. Creat. 2019, 33, 100570. [Google Scholar] [CrossRef]
  50. Sokmen, Y.; Kilic, D. The relationship between primary school teachers’ self-efficacy, autonomy, job satisfaction, teacher engagement and burnout: A model development study. Int. J. Res. Educ. Sci. 2019, 5, 709–721. [Google Scholar]
  51. Fu, F.; Liang, Y.; An, Y.; Zhao, F. Self-efficacy and psychological well-being of nursing home residents in China: The mediating role of social engagement. Asia Pac. J. Soc. Work. Dev. 2018, 28, 128–140. [Google Scholar] [CrossRef]
  52. Martin, D.P.; Rimm-Kaufman, S.E. Do student self-efficacy and teacher-student interaction quality contribute to emotional and social engagement in fifth grade math? J. Sch. Psychol. 2015, 53, 359–373. [Google Scholar] [CrossRef] [Green Version]
  53. Toyoda, Y.; Lucas, G.; Gratch, J. The effects of autonomy and ask meaning in algorithmic management of crowdwork. In Proceedings of the 19th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2020), Auckland, New Zealand, 9–13 May 2020; pp. 1404–1412. [Google Scholar]
  54. Trenshaw, K.F.; Revelo, R.A.; Earl, K.A.; Herman, G.L. Using self-determination theory principles to promote engineering students’ intrinsic motivation to learn. Int. J. Eng. Educ. 2016, 32, 1194–1207. [Google Scholar]
  55. Autin, K.L.; Duffy, R.D.; Blustein, D.L.; Gensmer, N.P.; Douglass, R.P.; England, J.W.; Allan, B.A. The development and initial validation of the work needs satisfaction scale: Measuring basic needs within the psychology of working theory. J. Couns. Psychol. 2019, 66, 195–209. [Google Scholar] [CrossRef] [PubMed]
  56. Joshanloo, M.; Sirgy, M.J.; Park, J. Directionality of the relationship between social well-being and subjective well-being: Evidence from a 20-year longitudinal study. Qual. Life Res. 2018, 27, 2137–2145. [Google Scholar] [CrossRef] [PubMed]
  57. Holzinger, A.; Searle, G.; Wernbacher, M. The effect of previous exposure to technology on acceptance and its importance in usability and accessibility engineering. Univers. Access Inf. Soc. 2011, 10, 245–260. [Google Scholar] [CrossRef]
  58. Kelloway, E.K. Using LISREL for Structural Equation Modeling: A Researcher’s Guide; Sage: Newbury Park, CA, USA, 1998. [Google Scholar]
  59. Raykov, T.; Marcoulides, G.A. An Introduction to Applied Multivariate Analysis; Taylor & Francis: New York, NY, USA, 2008. [Google Scholar]
  60. Hair, J.F., Jr.; Black, W.C.; Babin, B.J.; Anderson, R.E.; Tatham, R.L. SEM: An Introduction. Multivariate Data Analysis: A Global Perspective; Pearson Education: Upper Saddle River, NJ, USA, 2010; pp. 629–686. [Google Scholar]
  61. Tang, K.Y.; Chang, C.Y.; Hwang, G.J. Trends in artificial intelligence-supported e-learning: A systematic review and co-citation network analysis (1998–2019). Interact. Learn. Environ. 2021, 1–9. [Google Scholar] [CrossRef]
  62. Raković, M.; Winne, P.H.; Marzouk, Z.; Chang, D. Automatic identification of knowledge-transforming content in argument essays developed from multiple sources. J. Comput. Assist. Learn. 2021, 37, 903–924. [Google Scholar] [CrossRef]
Figure 1. Proposed research model.
Figure 2. Research model of the measured factors. *** p < 0.001, ** p < 0.004.
Table 1. Exploratory factor analysis results for intention to learn AI.

| Factor | Item | Factor Loading | Mean | SD | α | AVE | CR |
|---|---|---|---|---|---|---|---|
| Social Good | SG2 | 0.827 | 3.59 | 0.94 | 0.92 | 0.61 | 0.88 |
| | SG1 | 0.810 | | | | | |
| | SG5 | 0.758 | | | | | |
| | SG3 | 0.749 | | | | | |
| | SG4 | 0.742 | | | | | |
| Debugging | DB1 | 0.849 | 3.79 | 1.09 | 0.84 | 0.60 | 0.82 |
| | DB3 | 0.763 | | | | | |
| | DB2 | 0.714 | | | | | |
| Autonomy | AU4 | 0.769 | 4.17 | 1.07 | 0.81 | 0.56 | 0.79 |
| | AU1 | 0.748 | | | | | |
| | AU2 | 0.732 | | | | | |
| Logical Thinking | Log1 | 0.860 | 3.66 | 1.03 | 0.86 | 0.58 | 0.85 |
| | Log3 | 0.774 | | | | | |
| | Log2 | 0.738 | | | | | |
| | Log4 | 0.660 | | | | | |
| AI Knowledge | AIK3 | 0.832 | 3.98 | 0.95 | 0.87 | 0.53 | 0.87 |
| | AIK2 | 0.759 | | | | | |
| | AIK4 | 0.753 | | | | | |
| | AIK6 | 0.719 | | | | | |
| | AIK1 | 0.691 | | | | | |
| | AIK5 | 0.571 | | | | | |
| Learning Resources | LR1 | 0.773 | 3.92 | 1.05 | 0.81 | 0.56 | 0.79 |
| | LR2 | 0.754 | | | | | |
| | LR3 | 0.722 | | | | | |
| Algorithm | ALG3 | 0.901 | 3.69 | 0.93 | 0.87 | 0.61 | 0.86 |
| | ALG4 | 0.873 | | | | | |
| | ALG1 | 0.713 | | | | | |
| | ALG2 | 0.609 | | | | | |
| Behavioral Intention | BI2 | 0.827 | 3.98 | 0.95 | 0.89 | 0.59 | 0.85 |
| | BI3 | 0.810 | | | | | |
| | BI1 | 0.723 | | | | | |
| | BI4 | 0.700 | | | | | |
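The α, AVE, and CR columns in Table 1 follow the standard formulas for congeneric measures. As a transparency check, the sketch below recomputes the average variance extracted and composite reliability from the reported Learning Resources loadings (a minimal illustration of the formulas, not the authors' analysis script):

```python
# Reliability indices from standardized factor loadings, illustrated with
# the Learning Resources items (LR1-LR3) reported in Table 1.
loadings = [0.773, 0.754, 0.722]

# Average variance extracted (AVE): mean of the squared loadings.
ave = sum(l ** 2 for l in loadings) / len(loadings)

# Composite reliability (CR): (sum of loadings)^2 divided by
# (sum of loadings)^2 plus the summed item error variances (1 - loading^2).
num = sum(loadings) ** 2
cr = num / (num + sum(1 - l ** 2 for l in loadings))

print(round(ave, 2), round(cr, 2))  # 0.56 0.79, matching the Table 1 row
```

Note that the square root of this AVE (≈0.75) is exactly the diagonal entry for Learning Resources in Table 2.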
Table 2. Correlations in the measured model.

| | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
|---|---|---|---|---|---|---|---|---|
| 1. Learning Resources | (0.75) | | | | | | | |
| 2. AI Knowledge | 0.22 ** | (0.73) | | | | | | |
| 3. Algorithm | 0.18 * | 0.19 * | (0.78) | | | | | |
| 4. Debugging | 0.20 ** | 0.27 ** | 0.17 * | (0.77) | | | | |
| 5. Logical Thinking | 0.19 * | 0.20 * | 0.08 | 0.28 ** | (0.76) | | | |
| 6. Autonomy | 0.02 | 0.16 ** | 0.01 | −0.07 | −0.03 | (0.75) | | |
| 7. Social Good | 0.03 | 0.26 ** | 0.33 ** | −0.11 | 0.36 ** | 0.14 ** | (0.78) | |
| 8. Behavioral Intention | 0.28 ** | 0.21 ** | 0.20 * | 0.13 | 0.24 ** | 0.35 ** | 0.42 ** | (0.77) |
Note * p < 0.05; ** p < 0.01. Items on the diagonal are the square roots of the average variance extracted; off-diagonal elements are the correlation estimates.
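The diagonal of Table 2 supports the Fornell–Larcker criterion for discriminant validity: each construct's √AVE exceeds its correlations with every other construct. A minimal sketch of that check, re-entering the Table 2 values (an illustration, not the authors' code):

```python
import numpy as np

# Lower triangle of Table 2 (order: LR, AIK, ALG, DB, Log, AU, SG, BI);
# the diagonal holds sqrt(AVE) for each construct.
labels = ["LR", "AIK", "ALG", "DB", "Log", "AU", "SG", "BI"]
R = np.array([
    [0.75, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.22, 0.73, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.18, 0.19, 0.78, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.20, 0.27, 0.17, 0.77, 0.00, 0.00, 0.00, 0.00],
    [0.19, 0.20, 0.08, 0.28, 0.76, 0.00, 0.00, 0.00],
    [0.02, 0.16, 0.01, -0.07, -0.03, 0.75, 0.00, 0.00],
    [0.03, 0.26, 0.33, -0.11, 0.36, 0.14, 0.78, 0.00],
    [0.28, 0.21, 0.20, 0.13, 0.24, 0.35, 0.42, 0.77],
])
R = R + np.tril(R, -1).T  # mirror the lower triangle to a symmetric matrix

# Fornell-Larcker: sqrt(AVE) of each construct must exceed the absolute
# value of its correlations with all other constructs.
for i, name in enumerate(labels):
    others = np.abs(np.delete(R[i], i))
    assert R[i, i] > others.max(), name
print("Discriminant validity supported for all constructs")
```

The largest off-diagonal correlation (0.42, Social Good with Behavioral Intention) is well below the smallest diagonal value (0.73), so the criterion holds for every construct.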
Table 3. Hypothesis testing results from SEM.

| Hypothesis | Path | Standardized Estimate | S.E. | t-Value | p | Supported |
|---|---|---|---|---|---|---|
| H1 | LR → AIK | 0.221 | 0.040 | 4.245 | *** | Yes |
| H2 | LR → PE | 0.339 | 0.051 | 5.300 | *** | Yes |
| H3 | LR → BI | 0.164 | 0.055 | 3.187 | 0.001 | Yes |
| H4 | AIK → PE | 0.208 | 0.062 | 3.438 | *** | Yes |
| H5 | AIK → SG | 0.057 | 0.054 | 1.074 | 0.283 | No |
| H6 | AIK → AU | 0.221 | 0.067 | 3.935 | *** | Yes |
| H7 | PE → AU | −0.213 | 0.096 | −2.578 | 0.010 | No |
| H8 | PE → SG | 0.422 | 0.083 | 5.071 | *** | Yes |
| H9 | PE → BI | 0.359 | 0.121 | 3.999 | *** | Yes |
| H10 | SG → AU | 0.216 | 0.073 | 3.413 | *** | Yes |
| H11 | SG → BI | 0.221 | 0.075 | 3.951 | *** | Yes |
| H12 | AU → BI | 0.394 | 0.059 | 7.774 | *** | Yes |
Note. *** p < 0.001. LR: learning resources; AIK: AI knowledge; PE: programming efficacy; AU: autonomy; SG: social good; BI: behavioral intention.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Chai, C.S.; Chiu, T.K.F.; Wang, X.; Jiang, F.; Lin, X.-F. Modeling Chinese Secondary School Students’ Behavioral Intentions to Learn Artificial Intelligence with the Theory of Planned Behavior and Self-Determination Theory. Sustainability 2023, 15, 605. https://doi.org/10.3390/su15010605