1 Introduction
- RQ1: How do responsible AI technology principles affect healthcare practitioners’ attitudes, satisfaction, and usage intentions?
- RQ2: Does healthcare practitioner engagement mediate the effects of responsible AI technology principles on their attitudes, satisfaction, and usage intentions?
- RQ3: How does techno-overload, as a techno-stressor, moderate the engagement-mediated effects of responsible AI principles on attitudes, satisfaction, and usage intentions toward AI technology?
2 Theoretical Background and Research Model
2.1 Current Research on AI in Healthcare
2.2 Ethical Considerations of AI in Healthcare
2.3 Research Model
3 Hypothesis Development
3.1 Autonomy
3.2 Beneficence
3.3 Explainability
3.4 Justice
3.5 Non-maleficence
3.6 The Role of Employee Engagement
3.7 The Moderating Role of Techno-overload
4 Method
4.1 Measures
Constructs | Measurements |
---|---|
Beneficence (Martela & Ryan, 2016) | • I feel that actions of AI have a positive impact on healthcare practitioners and patients. |
• The things AI does contribute to the betterment of society. | |
• AI has been able to improve the welfare of healthcare practitioners and patients. | |
• In general, the influence of AI in the lives of healthcare practitioners and patients is positive. | |
Non-Maleficence (Carlos Roca et al., 2009) | • I think the AI has sufficient technical capacity to ensure that the data about patients I send cannot be modified by a third party. |
• The AI has enough security measures to protect patients’ personal information. | |
• When I send patients’ data via the AI, I am sure that they will not be intercepted by unauthorized third parties. | |
• I think the AI has sufficient technical capacity to ensure that no other organization will supplant patients’ identity. | |
Autonomy | • AI technology makes me feel a sense of choice and freedom in the work activities I undertake. |
• AI technology makes me feel that my work-related decisions reflect what I really want. | |
• AI technology makes me feel that I have been doing what really interests me in my job. | |
Justice | • In my opinion, the outcome of AI-assisted decisions was fair. |
• The process by which AI facilitated decisions was fair. | |
• I am satisfied with the way in which the AI assisted decisions. | |
• AI often makes decisions in an unbiased and neutral manner. | |
Explainability (Haesevoets et al., 2019) | • To what extent do you perceive the communication about AI technology in your hospital as transparent? |
• To what extent do you think that how AI works is communicated openly with healthcare practitioners? | |
• To what extent do you think that relevant information about AI technology is shared among all healthcare practitioners in your hospital? | |
• To what extent do you think that healthcare practitioners within your hospital share relevant information with each other? | |
• To what extent do you think that healthcare practitioners within your hospital communicate candidly with each other? | |
Techno-Overload | • I am forced by AI technology to work much faster. |
• I am forced by AI technology to do more work than I can handle. | |
• I am forced by AI technology to work with very tight time schedules. | |
• I am forced to change my work habits to adapt to AI technology. | |
Engagement | • The AI technology keeps me totally absorbed in what I am doing. |
• The AI technology holds my attention. | |
• The AI technology is fun. | |
• The AI technology is interesting. | |
• The AI technology is engaging. | |
• When using the AI technology, I was totally absorbed in what I was doing. | |
• Attitude towards AI (Lau-Gesk, 2003) | • AI is bad versus good |
• I do not like versus like AI | |
• My opinion on AI is negative versus positive. | |
• Satisfaction with AI (McLean & Osei-Frimpong, 2019) | • I am satisfied with my experience of AI technology |
• The experience of AI technology is exactly what I needed | |
• The experience of AI technology has worked out as I thought it would | |
Usage Intention | How likely are you to use AI technology in your job in the future? |
4.2 Data Collection
Characteristic | Category | n | % |
---|---|---|---|
Gender | Male | 184 | 45.54 % |
Female | 220 | 54.46 % | |
Age | 18–25 | 26 | 6.4 % |
26–33 | 253 | 62.6 % | |
34–41 | 110 | 27.2 % | |
42 and above | 15 | 3.8 % | |
Degree of Education | Bachelor’s degree | 15 | 3.71 % |
Master’s degree | 282 | 69.80 % | |
Doctoral degree and above | 107 | 26.49 % | |
Job title | Intern | 21 | 5.20 % |
Resident physician | 177 | 43.81 % | |
Doctor-in-charge | 109 | 26.98 % | |
Associate senior doctor | 49 | 12.13 % | |
Senior doctor | 13 | 3.22 % | |
Others | 35 | 8.66 % | |
Length of Working | Less than one year | 3 | 0.74 % |
1–4 | 194 | 48.02 % | |
5–8 | 124 | 30.69 % | |
9–12 | 64 | 15.84 % | |
13 and above | 19 | 4.70 % |
5 Results
5.1 Assessment of Measurement Model
Constructs | Items | Loading | Mean | SD | CA | Rho_A | CR | AVE |
---|---|---|---|---|---|---|---|---|
Attitude towards AI (ATA) | ATA01 | 0.893 | 6.084 | 0.810 | 0.818 | 0.823 | 0.892 | 0.734 |
ATA02 | 0.846 | |||||||
ATA03 | 0.830 | |||||||
Autonomy (AU) | AU01 | 0.717 | 5.525 | 0.912 | 0.666 | 0.726 | 0.803 | 0.577 |
AU02 | 0.728 | |||||||
AU03 | 0.829 | |||||||
Beneficence (BE) | BE01 | 0.794 | 5.855 | 0.742 | 0.806 | 0.817 | 0.872 | 0.631 |
BE02 | 0.836 | |||||||
BE03 | 0.717 | |||||||
BE04 | 0.826 | |||||||
Engagement (ENG) | ENG01 | 0.730 | 5.579 | 0.770 | 0.841 | 0.843 | 0.883 | 0.557 |
ENG02 | 0.771 | |||||||
ENG03 | 0.700 | |||||||
ENG04 | 0.720 | |||||||
ENG05 | 0.766 | |||||||
ENG06 | 0.787 | |||||||
Explainability (EX) | EX_TR01 | 0.708 | 5.257 | 0.963 | 0.820 | 0.824 | 0.874 | 0.582 |
EX_TR02 | 0.772 | |||||||
EX_TR03 | 0.793 | |||||||
EX_TR04 | 0.805 | |||||||
EX_TR05 | 0.730 | |||||||
Usage Intention (UI) | UI | 1.000 | 5.869 | 0.917 | 1.000 | 1.000 | 1.000 | 1.000 |
Justice (JUS) | JUS01 | 0.837 | 5.371 | 0.900 | 0.811 | 0.818 | 0.875 | 0.638 |
JUS02 | 0.794 | |||||||
JUS03 | 0.809 | |||||||
JUS04 | 0.751 | |||||||
Non-Maleficence (NM) | NM_SE01 | 0.863 | 5.011 | 1.143 | 0.882 | 0.889 | 0.918 | 0.738 |
NM_SE02 | 0.857 | |||||||
NM_SE03 | 0.842 | |||||||
NM_SE04 | 0.874 | |||||||
Satisfaction with AI (SAT) | SAT01 | 0.853 | 5.374 | 0.887 | 0.759 | 0.765 | 0.861 | 0.675 |
SAT02 | 0.790 | |||||||
SAT03 | 0.820 | |||||||
Techno-Overload (TO) | TO01 | 0.913 | 5.010 | 1.088 | 0.816 | 1.003 | 0.860 | 0.610 |
TO02 | 0.768 | |||||||
TO03 | 0.665 | |||||||
TO04 | 0.757 | |||||||
Note: SD = Standard deviation; CA = Cronbach’s alpha; AVE = Average variance extracted; CR = Composite reliability
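The reported reliability statistics follow directly from the standardized loadings. As a minimal sketch (using the ATA loadings from the table above), AVE is the mean squared loading and CR is the squared sum of loadings over itself plus the summed error variances:

```python
# Illustrative recomputation of AVE and CR from standardized loadings;
# the ATA values below are taken from the measurement-model table.

def ave(loadings):
    # Average variance extracted: mean of squared standardized loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    # where each item's error variance is 1 - loading^2.
    squared_sum = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return squared_sum / (squared_sum + error)

ata_loadings = [0.893, 0.846, 0.830]  # ATA01-ATA03
print(round(ave(ata_loadings), 3))                    # 0.734, matching the table
print(round(composite_reliability(ata_loadings), 3))  # 0.892, matching the table
```

Both values reproduce the table's entries for Attitude towards AI, which is a quick sanity check on any row of the measurement model.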
ATA | AU | BE | ENG | EXP | JUS | NM | SAT | UI | |
---|---|---|---|---|---|---|---|---|---|
ATA | 0.857 | ||||||||
AU | 0.187 | 0.760 | |||||||
BE | 0.355 | 0.497 | 0.795 | ||||||
ENG | 0.397 | 0.536 | 0.614 | 0.746 | |||||
EXP | 0.182 | 0.300 | 0.327 | 0.479 | 0.763 | ||||
JUS | 0.293 | 0.491 | 0.528 | 0.631 | 0.590 | 0.799 | |||
NM | 0.227 | 0.250 | 0.248 | 0.421 | 0.470 | 0.470 | 0.859 | ||
SAT | 0.266 | 0.408 | 0.403 | 0.614 | 0.478 | 0.586 | 0.417 | 0.821 | |
UI | 0.331 | 0.444 | 0.549 | 0.656 | 0.300 | 0.428 | 0.221 | 0.505 | 1.000 |
Note: The diagonal elements in bold denote the square root of the AVE; the off-diagonal elements are the correlations between the factors. For discriminant validity, diagonal elements should be larger than off-diagonal elements. |
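The Fornell-Larcker criterion in the note above can be verified mechanically: each construct's square root of AVE (the diagonal) must exceed every correlation involving that construct. A small sketch over the values transcribed from the table:

```python
# Fornell-Larcker discriminant-validity check; lower-triangular values
# transcribed from the correlation table above (diagonal = sqrt(AVE)).

names = ["ATA", "AU", "BE", "ENG", "EXP", "JUS", "NM", "SAT", "UI"]
lower = [
    [0.857],
    [0.187, 0.760],
    [0.355, 0.497, 0.795],
    [0.397, 0.536, 0.614, 0.746],
    [0.182, 0.300, 0.327, 0.479, 0.763],
    [0.293, 0.491, 0.528, 0.631, 0.590, 0.799],
    [0.227, 0.250, 0.248, 0.421, 0.470, 0.470, 0.859],
    [0.266, 0.408, 0.403, 0.614, 0.478, 0.586, 0.417, 0.821],
    [0.331, 0.444, 0.549, 0.656, 0.300, 0.428, 0.221, 0.505, 1.000],
]

def fornell_larcker_ok(lower):
    n = len(lower)
    # expand to a full symmetric matrix for easy row/column access
    full = [[lower[max(i, j)][min(i, j)] for j in range(n)] for i in range(n)]
    # a construct passes if its diagonal exceeds all its correlations
    return [all(full[i][i] > full[i][j] for j in range(n) if j != i)
            for i in range(n)]

print(dict(zip(names, fornell_larcker_ok(lower))))  # every construct passes
```

Running this confirms that every diagonal element dominates its row and column, so discriminant validity holds for all nine constructs.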
5.2 Structural Model and Hypotheses Testing
SRMR = 0.062
Construct | R² | Adjusted R² | Q² |
---|---|---|---|
Attitude towards AI | 0.190 | 0.177 | 0.129 |
Engagement | 0.560 | 0.555 | 0.307 |
Satisfaction with AI | 0.463 | 0.455 | 0.298 |
Usage intention | 0.473 | 0.465 | 0.439 |
5.3 Mediating Effects of Engagement
Path | β | t statistics | p-value | f² |
---|---|---|---|---|
AU -> ENG | 0.192 | 4.233 | 0.000 | 0.057 |
BE -> ENG | 0.323 | 5.951 | 0.000 | 0.153 |
EXP -> ENG | 0.115 | 2.354 | 0.019 | 0.018 |
JUS -> ENG | 0.240 | 3.152 | 0.002 | 0.061 |
NM -> ENG | 0.126 | 2.957 | 0.003 | 0.026 |
Note: AU autonomy, BE beneficence, EXP explainability, JUS justice, NM non-maleficence, ENG engagement, ATA attitude towards AI, SAT satisfaction with AI, UI usage intention. |
Direct effect without mediator | Direct effect with mediator | Indirect effect via mediator | Mediation type | ||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Path | β | t statistics | 5.00 % | 95.00 % | β | t statistics | 5.00 % | 95.00 % | β | t statistics | 5.00 % | 95.00 % | |
AU -> ENG -> ATA | -0.092 | 1.494 | -0.188 | 0.014 | -0.037 | 0.606 | -0.132 | 0.067 | 0.055 | 2.932** | 0.026 | 0.087 | Full |
AU -> ENG -> SAT | 0.058 | 1.160 | -0.027 | 0.135 | 0.126 | 2.610** | 0.045 | 0.204 | 0.069 | 2.840** | 0.035 | 0.115 | Full |
AU -> ENG -> UI | 0.087 | 1.712* | 0.003 | 0.169 | 0.186 | 3.365*** | 0.094 | 0.275 | 0.100 | 3.808*** | 0.060 | 0.146 | Partial |
BE -> ENG -> ATA | 0.198 | 2.721** | 0.080 | 0.320 | 0.291 | 4.394*** | 0.182 | 0.399 | 0.093 | 3.137** | 0.044 | 0.141 | Partial |
BE -> ENG -> SAT | -0.030 | 0.526 | -0.130 | 0.058 | 0.085 | 1.568 | -0.008 | 0.170 | 0.115 | 3.431*** | 0.066 | 0.176 | Full |
BE -> ENG -> UI | 0.217 | 3.795*** | 0.128 | 0.316 | 0.385 | 6.322*** | 0.285 | 0.485 | 0.168 | 5.368*** | 0.117 | 0.220 | Partial |
EXP -> ENG -> ATA | -0.061 | 0.874 | -0.169 | 0.060 | -0.028 | 0.407 | -0.136 | 0.089 | 0.033 | 1.947* | 0.008 | 0.063 | Full |
EXP -> ENG -> SAT | 0.116 | 1.805* | 0.011 | 0.222 | 0.157 | 2.565** | 0.058 | 0.259 | 0.041 | 2.085* | 0.012 | 0.076 | Partial |
EXP -> ENG -> UI | -0.002 | 0.027 | -0.096 | 0.096 | 0.058 | 1.039 | -0.033 | 0.153 | 0.060 | 2.342* | 0.019 | 0.104 | Full |
JUS -> ENG -> ATA | 0.048 | 0.631 | -0.074 | 0.179 | 0.117 | 1.343 | -0.023 | 0.263 | 0.069 | 1.987* | 0.022 | 0.135 | Full |
JUS -> ENG -> SAT | 0.235 | 3.661*** | 0.134 | 0.344 | 0.321 | 5.120*** | 0.220 | 0.426 | 0.086 | 3.103** | 0.042 | 0.134 | Partial |
JUS -> ENG -> UI | -0.028 | 0.423 | -0.148 | 0.074 | 0.096 | 1.220 | -0.039 | 0.220 | 0.125 | 2.594** | 0.056 | 0.213 | Full |
NM -> ENG -> ATA | 0.086 | 1.262 | -0.031 | 0.188 | 0.122 | 1.794* | 0.002 | 0.223 | 0.036 | 2.547** | 0.014 | 0.060 | Full |
NM -> ENG -> SAT | 0.094 | 1.802* | 0.009 | 0.181 | 0.139 | 2.568** | 0.052 | 0.230 | 0.045 | 2.342* | 0.018 | 0.081 | Partial |
NM -> ENG -> UI | -0.059 | 1.224 | -0.134 | 0.025 | 0.006 | 0.121 | -0.073 | 0.094 | 0.065 | 2.891** | 0.030 | 0.104 | Full |
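The confidence intervals in the mediation table come from bootstrapping: the indirect effect is re-estimated on many resamples and its 5 %/95 % percentiles form the interval; the effect is significant when the interval excludes zero. A minimal sketch of this percentile-bootstrap logic on synthetic data (variable names and effect sizes are illustrative, not the study's data):

```python
# Percentile-bootstrap test of an indirect effect a*b, sketched on synthetic
# data; the JUS -> ENG -> UI naming mirrors the table but is illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n = 400
justice = rng.normal(size=n)                      # predictor (e.g. justice of AI)
engagement = 0.6 * justice + rng.normal(size=n)   # mediator
intention = 0.5 * engagement + 0.2 * justice + rng.normal(size=n)  # outcome

def indirect_effect(x, m, y):
    # a-path: m ~ x;  b-path: y ~ m + x (controlling for x);  indirect = a * b
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), m, x]), y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)  # resample cases with replacement
    boot.append(indirect_effect(justice[idx], engagement[idx], intention[idx]))

lo, hi = np.percentile(boot, [5, 95])  # bounds matching the 5 %/95 % columns
print(f"indirect effect 90% CI: [{lo:.3f}, {hi:.3f}]")  # excludes zero
```

Because the synthetic indirect effect is clearly positive, the interval excludes zero; in the table above, the same criterion separates the significant indirect paths from the non-significant direct ones when classifying full versus partial mediation.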
β | t statistics | p-value | 5.00% | 95.00% | |
---|---|---|---|---|---|
TO * JUS -> ENG | -0.174 | 1.749* | 0.040 | -0.272 | 0.051 |
β | t statistics | p-value | 5.00% | 95.00% | |
---|---|---|---|---|---|
TO * JUS -> ENG -> ATA | -0.050 | 1.486 | 0.069 | -0.096 | 0.012 |
TO * JUS -> ENG -> SAT | -0.062 | 1.792* | 0.037 | -0.093 | 0.020 |
TO * JUS -> ENG -> UI | -0.091 | 1.692* | 0.045 | -0.149 | 0.025 |
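The TO × JUS terms above are product-term moderation tests: the mediator is regressed on the predictor, the moderator, and their product, and a negative product coefficient means techno-overload weakens the justice-engagement link. A hedged sketch on synthetic data (the -0.2 true interaction is illustrative, chosen only to match the hypothesized direction):

```python
# Product-term moderation sketch (TO x JUS -> ENG) on synthetic data;
# coefficients are illustrative, not the study's estimates.
import numpy as np

rng = np.random.default_rng(7)
n = 400
jus = rng.normal(size=n)  # justice of AI (standardized)
to = rng.normal(size=n)   # techno-overload (standardized)
# true model: justice raises engagement, but less so under high techno-overload
eng = 0.5 * jus - 0.2 * jus * to + rng.normal(scale=0.5, size=n)

# OLS with intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), jus, to, jus * to])
coef = np.linalg.lstsq(X, eng, rcond=None)[0]
interaction = coef[3]
print(f"interaction coefficient: {interaction:.3f}")  # negative, as in H8
```

The recovered interaction coefficient is negative, matching the sign of the -0.174 estimate for TO * JUS -> ENG reported above.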
Research Hypothesis | Results | ||
---|---|---|---|
H1 | Autonomy of AI is positively related to healthcare practitioners’ | ||
H1(a) | attitudes toward AI. | Not supported | |
H1(b) | satisfaction with AI. | Not supported | |
H1(c) | usage intention to AI. | Supported | |
H2 | Beneficence of AI is positively related to healthcare practitioners’ | ||
H2(a) | attitudes toward AI. | Supported | |
H2(b) | satisfaction with AI. | Not supported | |
H2(c) | usage intention to AI. | Supported | |
H3 | Explainability of AI is positively related to healthcare practitioners’ | ||
H3(a) | attitudes toward AI. | Not supported | |
H3(b) | satisfaction with AI. | Supported | |
H3(c) | usage intention to AI. | Not supported | |
H4 | Justice of AI is positively related to healthcare practitioners’ | ||
H4(a) | attitudes toward AI. | Not supported | |
H4(b) | satisfaction with AI. | Supported | |
H4(c) | usage intention to AI. | Not supported | |
H5 | Non-maleficence of AI is positively related to healthcare practitioners’ | ||
H5(a) | attitudes toward AI. | Not supported | |
H5(b) | satisfaction with AI. | Supported | |
H5(c) | usage intention to AI. | Not supported | |
H6 | Responsible AI signals (autonomy, beneficence, explainability, justice, and non-maleficence) are positively related to employee engagement. | Supported | |
H7 | Employee engagement mediates the relationship between responsible AI signals (autonomy, beneficence, explainability, justice, and non-maleficence) and behavioral consequences (attitudes toward AI, satisfaction with AI, and usage intention to AI). | Supported | |
H8 | Techno-overload moderates the relationship between Justice of AI and employee engagement, such that high techno-overload weakens the effect of Justice of AI on employee engagement. | Supported | |
H9 | Techno-overload moderates the mediating effect of engagement on the relationship between Justice of AI and behavioral and attitudinal outcomes, such that techno-overload weakens the indirect effect of Justice of AI on healthcare practitioners’ | ||
H9(a) | attitudes toward AI. | Supported | |
H9(b) | satisfaction with AI. | Supported | |
H9(c) | usage intention to AI. | Supported |