1 Introduction
Artificial Intelligence (AI) technologies have become a driving force for transforming businesses and societies and have been profoundly reshaping our lives and ecosystem (Daugherty et al.,
2019, Choi et al.,
2017). Understanding AI’s impact on societies has become vital to demonstrating its capabilities and value. This innovative technology has demonstrated its potential to contribute positively to societal growth and to improve the overall quality of life (Schönberger,
2019). However, apprehensive opinions, worries and insecurities have accompanied AI technologies, which continue to battle perception issues, i.e. man vs. machine (Wilson & Daugherty,
2018). Therefore, examining the regulatory, governance, and ethical perspectives of AI became necessary to assess its maturity and offerings to tackle various societal challenges, including improving well-being, justice, sustainability, and resilience (Floridi et al.,
2018; Schönberger,
2019). As such, responsible AI emerged to offer the mechanisms for integrating AI technologies in an ethical, transparent and accountable manner to nurture trust and privacy and mitigate its risks (Ghallab,
2019; Wang et al.,
2020).
For digital health technologies, advances in healthcare systems and the growing volume of medical data have created opportunities for AI innovation. The exploitation of AI technology in healthcare has offered healthcare providers new prospects for developing in-depth insights into the causes of different illnesses, medical interventions and associated healthcare activities. Nevertheless, with such fast-paced deployment, there appears to be limited consideration of the ethical complexities of AI technology diffusion in the health sector (Floridi,
2018; He et al.,
2019). Studies have often pointed out the gap between the development and the adoption of AI tools in healthcare, given that these tools have been designed with little appreciation of the ethical perspective (Ienca et al.,
2018; Fiske et al.,
2019). Besides, Fiske et al. (
2019) pointed out that there have been limited considerations for the societal and ethical implications when integrating AI solutions in a clinical setting.
In the context of the COVID-19 pandemic, governments and public health entities accelerated the deployment of various innovative technological initiatives, including AI-based digital proximity tracking and tracing applications, in their fight to control the spread of the COVID-19 virus. However, there have been discrepancies in how these applications have been adopted across countries. Such differences were reflected in their design, development, deployment, diffusion, and ethical considerations, resulting in mixed outcomes in their effectiveness. Thus, an argument can be made for the need to closely examine the deployment of COVID-19 digital proximity tracking and tracing applications against their performance in fighting the pandemic.
While it is evident that AI-based technologies offer opportunities across various contexts, the responsible perspective of AI is yet to be fully appreciated. Hence, this study aims to examine responsible AI’s considerations in deploying COVID-19 digital proximity tracking and tracing health applications in two countries, the State of Qatar and the United Kingdom. These two countries were chosen for two main reasons: convenience, and generality, i.e. selecting countries that maximise diversity along the dimension in question in order to explore the scope or universality of a phenomenon. From a political perspective, both countries have distinctive political settings: both the UK and Qatar are constitutional monarchies, yet while the government and parliament take political decisions in the UK, sharia law is the primary source of legislation in Qatar. From an economic perspective, both the UK and Qatar have among the highest GDP per capita, generally ranking among the top 30 wealthiest countries in the world (IMF,
2021). From a healthcare perspective, both countries have well-established, publicly funded healthcare systems. To pursue this aim, this study utilises secondary data resources to overcome the limitation of direct access to primary empirical data, especially during the pandemic. As such, existing resources, including official documents, official information platforms, and associated standard repositories, coupled with sentiment analysis of official social media engagement, will help offer the required diagnostic insights and build the necessary data triangulation (Kennedy,
2008). This should help better understand how governments, particularly official healthcare service providers, consider the ethical dimension in the design, implementation, deployment and relevant engagement strategies for these AI-based applications during pandemics. Thus, the emerging good AI society framework (Floridi et al.,
2018) will be utilised in this study, as it is built on several well-established universal bioethics principles and emerged from AI4People, one of the first leading forums dedicated to discussing AI’s social impact, specifically in the digital health domain (Nakata et al.,
2020). Hence, this will offer a responsible AI perspective to verify emerging ethical themes and principles captured through this process.
To address the above research aim, this article first offers a taxonomy on responsible AI in the digital health context in Sect. 2. This attempts to carefully build a critical understanding of the current status quo of responsible AI in this context. Section 3 elucidates AI-enabled COVID-19 digital proximity tracking and tracing applications to verify related socio-technical issues. Section 4 discusses existing responsible AI frameworks and initiatives, with careful attention towards articulating the good AI society framework in the context of AI-based digital proximity tracking and tracing. The methodological approach adopted for this study is presented in Sect. 5, highlighting the contributions of secondary data analysis to research, including the use of sentiment analysis as an emerging phenomenon. The findings of the utilised framework and related sentiment analysis on COVID-19 digital proximity tracking and tracing are then examined in two different contexts, the UK (NHS COVID-19 app) and Qatar (EHTERAZ), in Sect. 6. After that, Sect. 7 discusses the study’s main findings and offers the appropriate synthesis with the extant literature. Finally, Sect. 8 offers relevant conclusions and implications.
2 Responsible AI for Digital Health: Literature Taxonomy
The advancement of AI has created the need for realising its risks and allied impact on businesses and societies in various domains (Schönberger,
2019; Fiske et al.,
2019). Insights derived from AI models to support and automate decision-making processes require careful consideration covering moral, ethical and legal perspectives. In this context, responsible AI emerged to offer the guidance needed to appreciate the ethical dimension of this innovative technology. Martinez-Martin et al. (
2020) argued that appreciating the ethical perspective in any context, including healthcare, should be acknowledged as a whole rather than at an individual level. Furthermore, Dignum (
2019) characterised responsible AI to be
“concerned with the fact that decisions and actions taken by intelligent autonomous systems have consequences that can be seen as being of an ethical nature…Responsible AI provides directions for action and can maybe best be seen as a code of behaviour — for AI systems, but, most importantly, for us”. This definition verifies the significance of ethics for this innovative technology and its decisions across all levels. Therefore, upholding an ethical and responsible outlook on AI and machine learning technologies becomes imperative for AI applications to prevail.
In the context of digital health technologies, the ability of AI algorithms to learn from existing clinical data is offering limitless opportunities for many healthcare providers. This can include improving patient diagnosis, refining medical decision making, offering robust personalised medicine, as well as enhancing patient experiences (Amato et al.,
2013; Bennett & Hauser,
2013; Dilsizian & Siegel,
2014; Chin-Yee & Upshur,
2019; Harerimana et al.,
2018; He et al.,
2019). Nonetheless, exploitation of these algorithms has raised many concerns in many areas, including the ethical side. In particular, Morley and Floridi (
2020) argued that healthcare providers would need to consider the ethical risks associated with AI deployment, given its impact on delivering healthcare services. Furthermore, Challen et al. (
2019) and He et al. (
2019) claimed that while establishing rules and regulatory measures can offer some governance and compliance mechanisms, observing end-user ethical rights and appreciating the societal value remains a challenge.
Furthermore, AI has been used broadly across all the significant healthcare domains, either by automating tasks or by augmenting decision-making (Jiang et al.,
2017). Using AI for augmentation has been a preferred approach to integrating humans and intelligent machines for collective and supporting decision-making capabilities (Davenport & Kirby,
2015). Although AI applications in healthcare have accrued excellent resource efficiency and productivity gains (through automation) and improved quality of work and versatility (through augmentation), they also have raised many ethical concerns, including issues of privacy, security, trust, responsibility, and accountability (Murphy et al.,
2021). Acknowledging the power of AI and the irreversible tide of its use, many organised and scholarly efforts have been advancing frameworks and strategies (Floridi et al.,
2018, Leslie,
2020, Murphy et al.,
2021) to mitigate the ethical concerns posed by the applications of AI in healthcare. Nonetheless, the pandemic has added another complicating factor (urgency) to the discourse on the delicate balance between leveraging AI’s power and mitigating ethical concerns. While some countries opted to assign more weight to the use of AI applications to help control the pandemic, even at some sacrifice on the ethical aspects, others assigned more weight to ethical use even if this came at a slower pace of controlling the spread of the virus. Such debate is still unresolved and will most likely resurface with future pandemics, although hopefully with less intensity, as more maturity might be achieved in embedding ethical design in future AI applications due to advancements in ethical AI scholarship and practice. Table
1 presents a taxonomy on the main goals of using AI in healthcare, sample applications, ethical concerns, and mitigating strategies.
Table 1
Taxonomy on AI goals in healthcare and associated ethical concerns
Automation | • Robot carers for the elderly • Robotic surgery • Wearable medical devices • Ambient intelligence | • Issues of privacy, security, trust, accountability, responsibility, and bias • Issues of social isolation • Issues of human dignity • Issues of autonomy | • A participatory approach to AI development; engagement of end-users and beneficiaries; shared responsibility for all; and appropriate and responsible AI technology governance through regulatory mechanisms and infrastructure (Murphy et al., 2021) • Adopting open science and sharing data responsibly; caring and acting through responsible research and innovation; adopting ethical principles to create a shared vocabulary for balancing and prioritising conflicting values; generating and cultivating public trust through transparency, accountability, and consent; and fostering equitable innovation and protecting the interests of the vulnerable (Leslie, 2020) | Murphy et al. (2021), Paul et al. (2018), Iyengar et al. (2018), DeCamp and Tilburt (2019), Smith (2020), Challen et al. (2019), Parikh et al. (2019), Sorell and Draper (2014), Van Wynsberghe (2016), Martinez-Martin (2020)
Augmentation | • AI-powered diagnosis systems • AI-powered clinical decision support systems • Radiomics and radiology image processing and recognition | • Issues of privacy, security, trust, accountability, responsibility, and bias • Issues of explainability • Disruptions to provider-patient interactions • Issues of gene editing and human genetic choices | Murphy et al. (2021), Paul et al. (2018), Iyengar et al. (2018), DeCamp and Tilburt (2019), Smith (2020), Challen et al. (2019), Parikh et al. (2019), Morley et al. (2020a, b), Rigby (2019), Minari et al. (2018), Fiore and Goodman (2016), Sabatello (2018), Adhikary et al. (2019), Leslie (2020), Schönberger (2019)
The abovementioned challenges have identified the need to develop holistic guidance towards addressing the ethical and responsible perspectives on AI. As such, clinicians, policy and lawmakers, AI specialists, associated end-user entities, and ethics experts will need to work closely towards developing appropriate and applicable frameworks and guidelines to mitigate these identified ethical risks. Doing so will help establish a baseline for those looking to implement healthcare AI-based solutions.
3 Elucidating AI-Enabled Covid-19 Digital Proximity Tracking and Tracing
Recent studies identified various AI solutions to tackle some pandemic outbreaks, including Ebola and COVID-19. In this regard, Colubri et al. (
2019) verified the potential and applicability of using machine learning algorithms to set up prognostic models for examining the outbreaks of the Ebola epidemic. Similarly, Toğaçar et al. (
2020) deployed deep learning models for examining COVID-19 in chest X-ray images, increasing the efficiency of detecting the disease. Furthermore, Choi et al. (
2017) offered insights into the benefits of employing AI and machine learning models to better understand outbreaks from a public health perspective. These studies highlight the potential of utilising various machine learning and AI algorithms in examining public health outbreaks.
Meanwhile, the World Health Organisation (WHO), research and clinical communities, and public health authorities have endeavoured to explore innovative technologies in the fight to control the spread of the COVID-19 virus. This comprised tracing and contacting those infected in order to isolate them and prevent them from infecting others. In this context, attempts have been made to employ AI-based solutions that offered considerable capabilities to fight this pandemic. Specifically, expectations focused on providing the needed support for clinical predictions and policy-focused decisions by improving screening (Wong et al.,
2019; Char et al.,
2020). In the context of the COVID-19 virus spread, AI-enabled digital proximity contact tracing applications appear to be effective tools that can help break the virus’s transmission chain (O’Neill et al.,
2020). In principle, they can identify and manage exposed and infected individuals and help prevent the virus from spreading. The underlying technology utilised in these applications is based on Bluetooth, the Global Positioning System (GPS), social graphs, contact details, network-based Application Programming Interfaces (APIs), mobile tracking data, card transaction data, and system physical addresses (Lalmuanawma et al.,
2020). These technologies are often deployed through centralised, decentralised, or even sometimes, a hybrid of both mechanisms, offering an agile approach towards capturing data in real-time. Here, AI’s role focuses on providing much-needed insights and analysis for tracing infected and vulnerable individuals (Wikipedia,
2020; O’Neill et al.,
2020).
While such an AI-based tool has demonstrated its capabilities in fighting infections (Rorres et al.,
2018), concerns about contact tracing applications have focused on ethics, privacy, and data management and control. As such, various countries imposed specific policies, and sometimes laws, in their attempt to successfully deploy these applications in their fight against the pandemic despite these concerns. In some cases, contact tracing apps have violated privacy laws and have been deemed unsafe (O’Neill et al.,
2020). Furthermore, ethical questions concerning these applications have also started to surface. This included questions that revolved around whether they are mandatory or voluntary and those directed towards access control, handling, storing, and disposal of collected data (Lalmuanawma et al.,
2020; O’Neill et al.,
2020).
4 Responsible AI Frameworks and Initiatives
The realisation of ethical and risk implications for AI has attracted momentous attention to offer the necessary guidance for developing and deploying AI applications (Dilsizian & Siegel,
2014; Chin-Yee & Upshur,
2019; Rorres et al.,
2018). A limited number of initiatives, recommendations, research studies, principles and strategies have been introduced to offer this essential foundation. In this regard, when examining the literature on frameworks or models targeting the topic of AI ethics, or responsible AI, the results appear modest. Most of these initiatives have focused on establishing various groups/organisations and setting out principles documents and reports to provide necessary guidance within this domain (Murphy et al.,
2021; Floridi,
2018). This includes Partnership on AI (
https://www.partnershiponai.org/), OpenAI (
https://openai.com), Foundation for Responsible Robotics (
https://responsiblerobotics.org/), Ethics and Governance of Artificial Intelligence Initiative (
https://aiethicsinitiative.org), Montréal Declaration for Responsible Development of Artificial Intelligence (University of Montreal,
2017), Principles for Accountable Algorithms (Jobin et al.,
2019), the Good AI Society Framework (AI4people) (Floridi et al.,
2018), and Leslie (
2019), who attempted to provide guidelines focused on the explainability of AI systems’ design and implementation from a practical perspective. While the aforementioned suggests that the area of ethics and AI is nascent, the AI4People framework of Floridi et al. (
2018) appears to be the most conventional academic framework, given that it has been built on established bioethics research, specifically in the health sector.
The good AI society framework (AI4People) established by Floridi et al. (
2018) has been built on several well-established universal bioethics principles, which have been widely adopted in the area of applied ethics (Floridi,
2013; Beauchamp & Childress,
2001). The AI4People framework emerged from one of the first leading forums dedicated to discussing AI’s societal impact (Nakata et al.,
2020). Furthermore, this framework has been acknowledged to protect societal values and principles (Morley et al.,
2020a,
b). The AI4People framework offers five synthesised, comprehensive and well-rounded principles: beneficence, non-maleficence, autonomy, justice and explicability. The first principle, beneficence, verifies whether the AI application does, or can do, only good for society. It mainly focuses on understanding how AI can support and promote human well-being throughout these systems’ design; on appreciating how the development of AI can be instrumental in empowering people in order to preserve their dignity; and, concurrently, on sustaining the prosperity of humankind and the protection of the environment for the future. The second principle, non-maleficence, targets the guarantee that AI does not harm humanity. To this end, maintaining personal security and privacy becomes necessary to control and protect personal data and to deter any potential misuse of AI. Managing this appropriately requires continuous effort in promoting a thoughtful understanding of intentional or unintentional harms. For autonomy, the third principle, attention is directed towards managing decision-making responsibilities. In this regard, individuals need to be empowered to make decisions rather than expecting machines/artificial agents to take them entirely on their behalf. The fourth principle, justice, targets the upholding of prosperity and solidarity. Here, AI is expected to inspire fairness, eliminate discrimination in sharing its benefits, and remain conscious of limiting any deviations to existing social structures that might create new harms. While these four principles build on previous well-established bioethics principles, the fifth principle, explicability, focuses on understanding how humankind can establish and build trust with AI from a decision-making perspective in order to realise its value. Therefore, a clear explanation for AI decisions, i.e. intelligibility, and the verification of responsibility and accountability, are required. The former is motivated towards improving the visibility of the AI decision-making process; the latter is more concerned with the ability to audit these decisions.
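The principle-by-principle mapping described above can be pictured as a simple checklist. The following sketch is a hypothetical scaffold: the principle names follow AI4People, but the assessment questions, the coverage metric and the example verdicts are illustrative assumptions, not the study's actual instrument.

```python
# Hypothetical scaffold for mapping an application against the five AI4People
# principles; questions paraphrase the framework, verdicts are illustrative.
PRINCIPLES = {
    "beneficence":     "Does the app promote well-being and human dignity?",
    "non-maleficence": "Are privacy and security safeguards in place?",
    "autonomy":        "Do users retain meaningful decision-making power?",
    "justice":         "Are benefits shared fairly, without discrimination?",
    "explicability":   "Are decisions intelligible and auditable?",
}

def coverage(assessment):
    """Return the fraction of principles judged to be addressed."""
    addressed = sum(1 for p in PRINCIPLES if assessment.get(p, False))
    return addressed / len(PRINCIPLES)

# Illustrative (not actual study) assessment of a tracing app.
example = {"beneficence": True, "non-maleficence": True,
           "autonomy": False, "justice": True, "explicability": True}
print(f"Principle coverage: {coverage(example):.0%}")  # 80%
```

In practice, such a checklist would feed a qualitative judgement rather than a single score, but it makes the structure of the framework concrete.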
5 Methodology
Given the limited availability of information resources related to this study, this research utilised two forms of secondary data sources to provide an applied and diagnostic perspective on the adopted framework for responsible AI. The first is based on official documents, sources and public media dissemination reports concerning the two digital proximity tracking and tracing apps under study. The second targets Twitter’s microblogging platform through sentiment analysis. The availability of secondary data for scientific research has offered opportunities to provide new perspectives on various research problems across many disciplines (Sarker et al.,
2020). Hox and Boeije (
2005) stated that secondary data can be defined as existing data, originally collected for another purpose, that is exploited to answer a different research problem. Patzer (
1995) and Shmueli (
2010) highlighted how the careful use of recent, relevant and accurate data can achieve the high validity and reliability needed for conducting impactful scientific research.
Sarker et al. (
2020) argued that, in information systems, primary data collection can be challenging due to financial and human resource constraints. They also claimed that such challenges can arise when the phenomena being examined are at the macro/country level. As such, secondary data offers a valuable opportunity to support and enable macro-level validation and a view of phenomena examined at a large scale, such as the country level. Moreover, maintaining a bottom-up approach, as in secondary data exploration, can offer the opportunity to closely examine theories, frameworks and models (Constantiou & Kallinikos,
2015). Similarly, Yin (
2009) argued that secondary data can be acknowledged and used as a standard and reliable source of information. Besides, Zhang (
2016) claimed that media outlets often offer valuable insights into current trends and approaches driven by government officials. In this regard, a diverse range of secondary sources, including reports and official and formal web content, can offer an in-depth and comprehensive view of these initiatives’ dynamics and associated applications. Hence, such sources offer opportunities to help contribute to the advancement of knowledge (Lazer et al.,
2014).
Furthermore, the advancement of innovative technologies and the exponential growth and availability in data, i.e. big data, have offered opportunities for a plethora of analytical tools, applications and methods (McAfee & Brynjolfsson,
2012; Clarke,
2016). This has enabled the capture of unconventional insights, especially those generated through various social networking outlets, and Natural Language Processing (NLP) has been recognised as one of these innovative tools (Sun et al.,
2017; Li et al.,
2015; Collobert et al.,
2011). As such, sentiment analysis has emerged as an innovative opinion-mining tool supporting the measurement of opinions, thoughts and behaviours based on textual data (Ji et al.,
2015; Liu,
2012). The lexicon-based approach is regarded as one of the main techniques utilised to offer the required sentiment scoring (Sun et al.,
2017; Jockers,
2017). Such scores are classified as positive, negative and neutral sentiments, and they are built on a dictionary of words offering the mechanisms to determine data polarity (Sun et al.,
2017; Ravi & Ravi,
2015).
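The lexicon-based scoring described above can be sketched minimally as follows, assuming a tiny hypothetical polarity dictionary; published studies typically rely on established lexicons (e.g. Bing, AFINN or NRC) rather than a hand-rolled one.

```python
# Minimal sketch of lexicon-based sentiment scoring. The mini-lexicon below is a
# hypothetical stand-in for an established dictionary such as AFINN or NRC.
LEXICON = {
    "good": 1, "great": 1, "helpful": 1, "safe": 1, "trust": 1,
    "bad": -1, "unsafe": -1, "invasive": -1, "worried": -1, "violation": -1,
}

def sentiment(text: str) -> str:
    """Sum word polarities and map the total score to a sentiment class."""
    words = text.lower().split()
    score = sum(LEXICON.get(w.strip(".,!?"), 0) for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The app is helpful and safe"))      # positive
print(sentiment("Worried about this invasive app"))  # negative
print(sentiment("The app was released today"))       # neutral
```

The polarity of a text thus reduces to the sign of its summed word scores, which is why the quality of the underlying dictionary largely determines the quality of the classification.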
Microblogging social networking platforms have attracted a considerable amount of attention for conducting such an analysis. In particular, Twitter has been identified as one of the most popular social networking applications often exploited by an array of diverse users, including consumers, government entities, policymakers, businesses, and decision-makers (Kwak & Grable,
2021). This microblogging platform offers endless opportunities to examine the attitudes and behaviours of many entities, specifically policymakers, and to help build strategies for better understanding and engagement (Kang et al.,
2017).
The abovementioned techniques and associated tools offer the opportunity to bring an applied viewpoint to a theoretically developed framework on responsible AI. As such, this is expected to offer much-needed insights regarding how governments, particularly official healthcare service providers, consider the ethical dimension in the design, implementation, and relevant engagement strategies for these AI-based applications during pandemics. For this study, many such resources, including official documents, official information platforms, and associated standard repositories, were utilised. These include various government websites in the UK and Qatar and publicly available resources, such as official and academic documents and reports. Furthermore, the Twitter Application Programming Interface (API) was utilised to apply sentiment analysis to the captured tweets in order to unfold the engagement tone for both applications and verify the emerging ethical themes captured through this engagement. Such an approach should provide a general perspective on how responsible AI is conceptualised, applied, and practically validated in these natural settings during the pandemic. Thus, this study’s exploitation of the utilised secondary data resources should build the necessary data triangulation (Kennedy,
2008).
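The engagement-tone analysis described above can be sketched end to end: classify each captured tweet with a lexicon score, then tally the labels per application. The mini-lexicon and sample tweets below are illustrative assumptions; in the study itself, public replies to the official accounts would first be retrieved via the Twitter API before scoring.

```python
from collections import Counter

# Hypothetical mini-lexicon and illustrative tweets; real analyses use an
# established dictionary and tweets retrieved via the Twitter API.
LEXICON = {"helpful": 1, "safe": 1, "easy": 1,
           "invasive": -1, "worried": -1, "broken": -1}

def classify(text):
    """Label one tweet as positive/negative/neutral from summed word polarities."""
    score = sum(LEXICON.get(w.strip(".,!?").lower(), 0) for w in text.split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

tweets = {
    "NHS COVID-19": ["Very helpful app", "Notifications seem broken",
                     "Easy to install and safe"],
    "EHTERAZ": ["Feels invasive", "Worried about my location data",
                "Simple to register"],
}

# Tally the per-application engagement tone from individual tweet labels.
tone = {app: Counter(classify(t) for t in ts) for app, ts in tweets.items()}
for app, counts in tone.items():
    print(app, dict(counts))
```

Aggregating the labels in this way turns individual opinions into a per-application tone profile, which is the unit of comparison used when contrasting the two apps.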
7 Discussion
It is evident how Artificial Intelligence technologies can offer a promising approach towards tackling societal challenges, specifically in healthcare. In the coronavirus pandemic context, countries have taken different measures and approaches to protect their citizens’ health. This includes utilising innovative technologies, such as AI-enabled digital proximity tracking and tracing applications, to support these measures. Such potential is hoped to help control the pandemic’s spread to withstand the social and economic challenges in many countries (Von Wyl et al.,
2020). While the two countries in this study have deployed such applications, their successes, including adoption rate, penetration and impact, have varied. One potential explanation relates to ethical, security and privacy considerations. As such, this study examined the considerations of responsible AI in the deployment of COVID-19 digital proximity tracking and tracing applications in the State of Qatar and the United Kingdom. The research findings highlighted some level of divergence between the two apps in terms of responsible AI compliance and considerations and their impact on controlling the virus’s spread. The findings verified that the UK NHS COVID-19 app has a relatively high compliance rate compared to Qatar’s EHTERAZ. Consequently, one would expect the UK app to have a higher penetration rate and impact than its Qatari counterpart. Counterintuitively, the NHS COVID-19 app has a relatively low penetration level, resulting in limited contributions towards controlling the virus’s spread. On the other hand, the EHTERAZ app achieved a very high level of user penetration, resulting in a higher level of success in significantly reducing the virus’s spread. It is clear that the success of these apps in curbing the spread of the virus is a function of their adoption rates, and that adoption, in the context of the COVID-19 pandemic, can be achieved more by a stricter mandate (EHTERAZ) than by stricter compliance with responsible AI considerations. Gostin et al. (
2020) identified the dilemma between voluntary and enforced compliance. On one side, there is the public health interest, which requires protection from the virus. On the other side, imposing restrictions and tracking and tracing individuals can raise questions about compromising society’s ethical and privacy values. In the UK, the government’s ability to mandate adoption appears to be much more stringently restricted by regulatory frameworks such as the General Data Protection Regulation, which went into effect in 2018. Qatar’s mandatory approach, although cognizant of the country’s Protection of Personal Data Law of 2016, appears to be founded on the Islamic principle that “preventing harm precedes procuring gain”; thus, mandating the use of the app to curb the pandemic (i.e., preventing the harm) takes precedence over procuring the gain of full compliance with the ethical and responsible requirements. Judging by the outcomes of the two approaches in stopping the spread of the virus, the approach taken by Qatar achieved greater success. However, one cannot conclude that the UK’s approach is inherently flawed because of its low penetration rate despite high compliance with the ethical AI framework. The low penetration could also be attributed to citizens’ incomplete awareness of such features, active pushback from those opposed to any government-led initiative, or perhaps fatigue from social distancing and other measures.
Nonetheless, these diagnostic findings demonstrate an unresolved challenge debating the needed balance between protecting public health in its fight against the pandemic while complying with the societal and ethical requirements. Floridi et al. (
2018) claimed that the public demands a clear realisation of the benefits of AI technologies, alongside a robust risk mitigation approach, in order to adopt them. Furthermore, Gasser et al. (
2020) argued that decision-makers have the responsibility to provide efficient and effective legal and ethical measures for utilising digital health systems. This validates the need to set up mechanisms for building societal and healthcare resilience in order to appreciate responsible AI’s role in these difficult times. To do so, accounting for cultural and geographical differences becomes vital to preserving fairness, equality, and inclusion. Therefore, cross-disciplinary collaboration involving AI experts, medical professionals, policymakers, NGOs and other stakeholders will be needed to raise awareness and promote the benefits of these technologies while, at the same time, demonstrating the depth and breadth of the associated ethical considerations and responsibilities.
8 Conclusions
For digital health technologies, AI algorithms’ ability to learn from existing healthcare data offers unparalleled opportunities for many healthcare providers, ranging from automating numerous healthcare services to augmenting and supporting medical professionals’ diagnoses and decisions (Chin-Yee & Upshur,
2019; Harerimana et al.,
2018; He et al.,
2019). Nonetheless, exploitation of these algorithms has raised concerns in many areas, including the ethical side. While AI appears to have enormous potential to revolutionise healthcare services, a considerable amount of ethical and legal work is yet to be addressed (Schönberger,
2019). Challen et al. (
2019) and He et al. (
2019) claimed that while establishing rules and regulatory measures can offer some governance and compliance mechanisms, observing end-user ethical rights and appreciating the societal value remain a challenge.
This study focused on examining responsible AI considerations in the deployment of COVID-19 digital proximity tracking and tracing applications in two countries, the State of Qatar and the United Kingdom. The AI4People framework’s principles were employed as the ethical lens for mapping these AI-based applications, and secondary data sources were utilised to conduct the examination. Sentiment analysis of official social media engagement, along with official documents, platforms, and associated standards repositories, provided the required diagnostic insights, understanding, and triangulation. The findings highlighted how the two countries examined had different outcomes and impacts, underlining the need for a practical and contextual view contributing to the discourse on responsible AI in healthcare. Striking a balance between responsible AI requirements and the pressures of fighting pandemics will therefore be required.
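To illustrate the kind of sentiment analysis described above, the following minimal Python sketch scores the polarity of tweet texts using a small hand-built lexicon. The word lists and example tweets are hypothetical placeholders, not the study’s actual data or tooling; a published analysis would typically rely on an established sentiment library rather than this simplified approach.

```python
# Illustrative lexicon-based polarity scoring (hypothetical word lists
# and tweets; not the study's actual data or tooling).

POSITIVE = {"safe", "protect", "helpful", "trust", "easy", "good"}
NEGATIVE = {"invasive", "worried", "risk", "unsafe", "distrust", "bad"}

def polarity(text: str) -> float:
    """Return a score in [-1, 1]: (pos - neg) / words matched in either lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return (pos - neg) / matched if matched else 0.0

tweets = [
    "The app keeps us safe and is easy to use, good work!",
    "Feels invasive, I am worried about the privacy risk.",
]
scores = [polarity(t) for t in tweets]
print(scores)  # first tweet scores positive, second negative
```

Aggregating such scores over an official account’s replies and mentions gives a rough view of engagement sentiment, which can then be triangulated against documentary sources as the study describes.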
From a research viewpoint, this study utilised the AI4People framework and sentiment analysis of official engagement activities on Twitter to examine responsible AI considerations in the context of COVID-19 digital proximity tracking and tracing applications. In particular, the study makes a unique contribution to advancing knowledge of the applicability and maturity of AI applications from an ethical perspective. By investigating these features, this research offers valuable insights into the challenge of balancing responsible AI requirements against the pressures of fighting pandemics. In addition, the study offers a broad understanding of responsible AI considerations in the context of healthcare. In particular, the taxonomy presented in section two provides the needed insight into the various ethical challenges facing the exploitation of this innovative technology, while also addressing mitigation strategies that could help overcome them. Furthermore, the findings underline the need for successful, positive engagement strategies to change users’ perceptions of the ethical issues associated with the adoption and diffusion of innovative technologies such as AI.
For practice, this study highlighted that the deployment of AI-based applications requires careful consideration of their practical and contextual dimensions. All involved stakeholders will therefore need to take a proactive role in implementing responsible AI applications. In the context of digital proximity tracking and tracing, public healthcare providers and policymakers will need to work closely with third-party entities to design, develop, and deploy these applications effectively. Doing so will help develop the required maturity while moving towards an integrated, harmonised treatment of responsible AI across all processes. As demonstrated in this study, utilising secondary data can provide the practical insights needed when access to direct primary empirical data is limited. At the same time, analysing sentiment and polarity in social media data can offer valuable and valid insights, showing that engagement and communication strategies require careful consideration.
Nonetheless, as with any work, this study has several inherent limitations. The examination of the AI-based digital proximity tracking and tracing applications was based on existing secondary data sources, including reports, information available through the applications, and Twitter data. As such, it can offer only limited insight into the responsible AI considerations involved, and it is accepted that the operationalisation of the measures in this study may not be ideal. While this can be justified by the limited availability of empirical data, it ultimately affects the study’s overall generalisability; the work nonetheless remains focused on presenting a diagnostic perspective. Moreover, the adopted AI4People framework offers only a focused lens on these applications, so further investigation through direct and active engagement with relevant stakeholders is required to capture more in-depth and practical insights. Furthermore, the study examined only two existing digital proximity tracking and tracing applications, limiting the generalisability of the findings; future research should target other countries’ applications to perform cross-comparisons and the required measurements. The unit of analysis was not cross-cultural, social, and political issues but rather responsible AI’s perspective. While this is an early attempt to consider responsible AI in a pandemic context, this paper has nevertheless addressed a relevant research gap by acknowledging the role of responsible AI in mitigating ethical risks in the deployment of COVID-19 digital proximity tracking and tracing applications.
The insights obtained from this study offer stakeholders an opportunity to realise the need to build communication bridges, both internally and externally. It is hoped that this can advance the debate towards developing and harmonising an aspirational mindset that helps utilise AI technologies successfully. Such conversations can ultimately help acknowledge responsible AI’s role in mitigating ethical risks and in promoting fairness, transparency, prosperity, accountability, and explainability. Furthermore, this study highlights that the debate over responsible AI remains unresolved and will most likely resurface with future health-related challenges, hopefully with less intensity as greater maturity is achieved in embedding ethical design in future AI applications, owing to advances in ethical AI scholarship and practice.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.