1.2.1 Applied Example No. 1: Electronic Health Records (EHRs)
One of the key prerequisites for improving the delivery of care services through data science is the efficient collection, storage, analysis, and exchange of health information across different service levels in a secure yet practical fashion. Data science tools, such as machine learning and deep learning, rely heavily on massive collections of labelled structured and unstructured data to train models and subsequently improve them to guide decisions. Hence, a data acquisition pipeline is paramount. For this purpose, EHRs have become indispensable tools for carrying patient health information and facilitating its use between different levels of care. This is reflected in the increasing number of national HIT strategies around the globe, featuring the implementation and development of EHR systems (World Health Organization 2016).
High-income countries have seen the largest investments in EHR systems. In the United States, the Health Information Technology for Economic and Clinical Health Act (HITECH Act of 2009) spurred the rapid digitization of the healthcare delivery system, eventually culminating in the Medicare and Medicaid EHR Incentive Programs (Washington et al. 2017). Here, EHRs have provided accurate and up-to-date information at the point of care, enabled quicker access to patient records for more coordinated care among healthcare providers, and reduced healthcare costs by decreasing the amount of paperwork and duplicate diagnostic exams, while streamlining coding and billing services through complete and accurate documentation of all transactions.
However, the adoption and implementation of EHRs have been a great source of both satisfaction and consternation, particularly in the last ten years. Physicians’ satisfaction with EHRs has become universally low (Shanafelt et al. 2016), likely owing to an increased workload and the documentation demands tied to incentive payments. Unintentionally, this has gradually become a burden for providers around the world, negatively affecting their relationships with patients and their clinical workflows (Goldberg 2018). In 2016, a study by Shanafelt et al. revealed that physicians who used EHRs and computerized physician order entry (CPOE) systems, e.g., electronic prescribing, reported lower work satisfaction due to the amount of time spent on clerical tasks and faced an increased risk of burnout (Shanafelt et al. 2016). Moreover, physicians have grown concerned that medical malpractice liability may increase with the implementation of CPOE systems due to the increased documentation of computer-related errors (Mangalmurti et al. 2010). In LMICs, this set of problems could prove even more troublesome, as working conditions for health professionals are likely to be even more challenging. Adding an EHR system without considering all its implications could have disastrous consequences for all aspects of care provision and staff satisfaction.
Further examples of this ambivalence are tools that act as subordinate functions of EHRs, including CPOE systems and Clinical Decision Support (CDS) systems. As mentioned above, electronic prescribing is one prominent example of a CPOE system. Compared with handwritten prescriptions, electronic prescribing promises better prevention of medication errors, owing to the increased completeness, standardization, and legibility of pharmaceutical prescriptions, as well as their frequent integration with CDS tools (Puaar and Franklin 2017). CDS systems are digital applications that “provide alerts, reminders, prescribing recommendations, therapeutic guidelines, image interpretation, and diagnostic assistance” (Khairat et al. 2018) and are often deployed to complement CPOE systems. This integration enhances patient safety through fewer prescription errors and improved interprofessional communication between healthcare providers (Mills et al. 2017).
However, despite the proven potential of CDS-driven electronic drug alerts to reduce the number of adverse drug events and lower healthcare costs (Ash et al. 2007; Weingart et al. 2009), they are among the leading causes of alert fatigue in healthcare providers. Alert fatigue describes a phenomenon in which the user, i.e. the clinician, actively ignores or dismisses pop-up windows that warn of possible errors or dangers in the clinical information they entered. Alerts for drug-drug interactions, pre-existing drug allergies, weight-adjusted dosing, etc., often appear very frequently, thereby ‘fatiguing’ the user’s attention to them (Backman et al. 2017). This has been shown to weaken the impact of alerts, especially those deemed obvious or irrelevant, leading clinicians to dismiss future pop-ups without reading potentially important alert messages (Ash et al. 2007). Alert fatigue can also leave users feeling supervised and distrusted in their own decision-making, breeding resentment over the continuous interruption of their work. To prevent this, it is imperative to ensure user-friendliness, along with the relevance and appropriateness of alerts in the given clinical context, when designing CDS systems (Ash et al. 2007).
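One common design response to alert fatigue is to tier alerts by severity, so that only the most critical warnings interrupt the clinician while minor ones are logged passively. The sketch below illustrates that idea; the severity scale, threshold, and example alerts are assumptions made for illustration, not any published standard.

```python
# Illustrative sketch of one mitigation for alert fatigue: tiering alerts
# by severity so that only high-severity warnings produce interruptive
# pop-ups, while minor ones are recorded quietly. The severity scale and
# threshold below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Alert:
    message: str
    severity: int  # 1 = informational ... 5 = contraindicated

def triage(alerts, interrupt_threshold=4):
    """Split alerts into interruptive pop-ups and passively logged notices."""
    interruptive = [a for a in alerts if a.severity >= interrupt_threshold]
    passive = [a for a in alerts if a.severity < interrupt_threshold]
    return interruptive, passive

alerts = [
    Alert("Duplicate therapy: paracetamol ordered twice", 2),
    Alert("Documented anaphylaxis to ordered drug", 5),
]
popups, log_only = triage(alerts)
```

Tuning such a threshold is itself a clinical governance decision: set too low, the pop-ups return in force; set too high, genuinely dangerous orders may pass silently.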
Weighing these benefits and drawbacks, EHRs, together with CPOE and CDS, all enable the further collection of data through the digitization and integration of workflows, such as the prescription of medication. Before the introduction of HIT, these processes were either analogue or non-existent, whereas now they can be streamlined and made interoperable with one another. This may ease the documentation burden of handwritten notes and enable the collection of even more clinical data, which can more readily find application in research or public health. However, as with all technological novelties in healthcare, poorly integrated systems can become cumbersome and, in several ways, harmful to the delivery of good care. Healthcare professionals may circumvent the correct use of these electronic systems, which may negatively impact overall efficiency, effectiveness of care, and patient safety (Blijleven et al. 2017).
The full impact of EHRs on data science in global health is challenged by problems of smaller and larger scale, ranging from human issues to technical difficulties. These begin with the above-mentioned issues, e.g., low EHR usability and staff resistance, and extend to the major, systems-based problems of healthcare, such as rising healthcare costs, the rate of medical errors, and workforce exhaustion and shortages, all of which limit the integration and adequate maintenance of EHR systems.
1.2.2 Applied Example No. 2: Health Information Exchange (HIE)
Health information exchange (HIE) is the mobilization and transfer of electronic health information within and across organizations in a community, region, or country, ideally through interoperable health information systems (Finn 2011). It allows healthcare providers and patients to securely access medical information electronically in order to appropriately and confidentially share patients’ health information regardless of where they receive care (HealthIT.gov 2017). The United States, Canada, Australia, and the UK are among the countries that have, to a certain extent, successfully implemented regional or state-wide HIEs.
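The interoperability such exchanges depend on rests on shared data formats. As a hedged illustration, the snippet below sketches the kind of structured record payload an HIE might pass between systems, loosely modeled on the HL7 FHIR ‘Patient’ resource; the field values are fabricated, and a real exchange would additionally involve authentication, consent checks, and a full FHIR server.

```python
# Illustrative sketch of an interoperable record payload, loosely modeled
# on the HL7 FHIR 'Patient' resource. Values are fabricated; real HIE
# traffic also requires security, consent, and patient-matching layers.

import json

patient = {
    "resourceType": "Patient",
    "id": "example-123",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "birthDate": "1980-04-12",
}

payload = json.dumps(patient)   # serialized for transmission
received = json.loads(payload)  # parsed at the receiving organization

print(received["resourceType"], received["name"][0]["family"])
```

The point of a shared schema is precisely that the receiving system can parse the payload without bilateral agreements, which is why inconsistent terminology is repeatedly identified as a barrier below.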
An effective implementation of HIE is critical to provide high-quality, tailored care to patients while reducing costs and increasing access (Sadoughi et al. 2018). To this end, HIE implementation must be aligned with intra-organizational as well as inter-organizational needs and priorities, with mutual cooperation and collaboration being crucial in fostering HIE. In 2016, Eden et al. showed that facilitators of and barriers to successful HIE implementation could be categorized according to completeness of information, organization and workflow, and technology and user needs (Eden et al. 2016). In particular, the lack of consistent terminology and classification of HIE was found to be a considerable barrier to understanding how an HIE ideally functions, as were constant changes in sociotechnical systems (Eden et al. 2016). These findings are consistent with a 2016 study by Akhlaq et al. on LMICs, which found that successful HIE implementations depend largely on effective policies, strong leadership, and governance to create an evidence-based decision-making culture within organizations (Akhlaq et al. 2016).
Revisiting the concept of EHRs and thinking a step ahead, a whole new extent of data availability becomes apparent when local clinical data can not only be accessed but also exchanged across departments, organizations, regions, or even nations through HIEs. Because such vast amounts of data are necessary to leverage its full potential, HIE represents a key requirement for data science to add real-world value and for AI to be integrated effectively on a broader scale. Since we are still far from achieving smooth and widespread HIE across regions and countries, we can, for the most part, only speculate on the impact that analysing all this data will have once that goal is reached. As these HIE networks emerge, we need to find the value of the accumulated data, not only by making medical practice more evidence-based but also in fields such as population-based informatics or genetic and genomic information. Generally, once data is available, analysing it and drawing conclusions for clinical practice is relatively easy compared with the far greater hurdle of translating these findings from ‘bench to bedside’.
Taking another step ahead, blockchain technology (for more details, please refer to subsequent chapters of this book) has been proposed as a tool to provide the necessary features for long-sought advancements in the industry, especially with regard to increased interoperability, high data security, and seamless HIE (Gordon and Catalini 2018). Nevertheless, a good amount of temperance may also be needed. Blockchain has been envisioned to change the status quo in clinical research, public health, patient identification systems, and self-generated health data (Gordon et al. 2017). It has also been explored as a means of improving global health (Metcalf 2019, p. 415). However, one has to keep in mind that health systems are multifaceted, highly fragmented, and very resistant to change. Thus, expectations should be kept realistic in the face of persistent doubts about the sector's need for blockchain, its appropriate use, and whether it can truly change existing health systems, particularly regarding data handling, given the lack of scalable real-world implementations (Gordon et al. 2017).
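The property that motivates these proposals, tamper evidence, can be illustrated compactly: each entry stores a hash of its predecessor, so altering any past record invalidates everything after it. The following is a toy, single-machine sketch for illustration only, not a distributed ledger or any production health-data system.

```python
# Toy sketch of the tamper-evidence idea behind blockchain proposals for
# HIE: each block commits to its predecessor's hash, so rewriting history
# breaks the chain. Illustration only; no consensus or distribution here.

import hashlib
import json

def make_block(record, prev_hash):
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify(chain):
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

chain = [make_block("visit: 2019-01-05", "0" * 64)]
chain.append(make_block("lab result: HbA1c 6.9%", chain[-1]["hash"]))
assert verify(chain)

chain[0]["record"] = "visit: 2019-02-05"  # tamper with history
assert not verify(chain)
```

What the sketch deliberately omits, distributed consensus, access control, and scalable storage of large clinical records, is exactly where the doubts about real-world implementations arise.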
Besides the necessary technological prerequisites, the keys to the successful implementation of an HIE are governance, consistency in technical nomenclature, and effective change management. These three factors have significant effects on the level of adoption and success that organizations experience when implementing an HIE. Because they are extremely difficult to achieve, given the need for many disparate parties to align, conferences can foster the conversations that enable broader and effective change management by bringing all the different stakeholders to the table. This underlines the need for collaboration: health data, even when collected in an orderly and accessible fashion, remains mostly siloed unless governance and other initiatives drive parties towards the unification and liberation of data.
1.2.3 Applied Example No. 3: Artificial Intelligence
Given that EHRs and HIE have done their work to provide data in an accessible and orderly fashion so that it can be further utilized, artificial intelligence (AI) can be applied to help improve everyday clinical decisions, predict disease outbreaks in LMICs, monitor drug adherence, etc. The application of these new technologies has spurred excitement and brought renewed hope of finding solutions to the intricate and sophisticated problems inherent to global health.
When we speak of AI applied to healthcare and global health, we mean the use of computer algorithms and statistical models to enhance the human understanding of complicated medical information and interrelationships by analyzing medical data. Specifically, AI usually refers to tasks performed by computers that would otherwise require intelligence if executed by humans (The Alan Turing Institute 2018).
The field of AI has evolved over the last 60 years. First described in medical publications in 1971 (Coiera 1996), it is only now that many AI applications are being deployed in healthcare settings, and there are signs that AI adoption is growing exponentially. Areas that would benefit from data-driven solutions can be classified as having either a ‘patient focus’ and/or a ‘healthcare provider/payer focus’ (Garbuio and Lin 2019). Within clinical medicine, as part of the latter focus, a myriad of specialties would benefit from the integration of AI engines, with possible tasks ranging from natural language processing through clinical decision support to predictive analytics (Dankwa-Mullan et al. 2018; Yu and Kohane 2018) (for more details, please refer to subsequent chapters of this book). Although a range of these AI applications have already proven to perform on par with experienced clinical specialists (Esteva et al. 2017), many experts see AI’s future role in complementing human knowledge and decisions by rapidly exploiting vast amounts of data, rather than in replacing doctors (Dankwa-Mullan et al. 2018). Hence, most AI applications in healthcare aim to work in synergy with staff rather than to substitute for the workforce.
One major application of AI-assisted medicine is the ability to make reliable and accurate predictions of clinical outcomes, thereby assisting clinicians in critical everyday decisions, for example by finding the optimal treatment strategy for patients with sepsis (Komorowski et al. 2018) or by employing warning algorithms and severity-of-illness scores in intensive care (AIMed 2018). Other examples include radiological or pathological image processing through deep neural networks (Esteva et al. 2017). Hence, machine learning and deep learning, methods of AI, will not only alleviate a great portion of physicians’ workload but will also provide more accurate clinical prognoses and enhance diagnostic accuracy (Obermeyer and Emanuel 2016). Together, these features ultimately contribute to improved patient outcomes as AI is adopted in healthcare.
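To make the idea of a severity-of-illness score concrete, the sketch below computes a deliberately simplified early-warning score from two vital signs. The threshold bands are illustrative assumptions invented for this sketch, not the published bands of NEWS or any validated clinical score, and must not be taken as clinical guidance.

```python
# A deliberately simplified illustration of an early-warning /
# severity-of-illness score. The vital-sign bands below are illustrative
# assumptions only, not clinically validated thresholds.

def band_score(value, bands):
    """Return the points of the first (low, high, points) band that matches."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # outside every band: treat as most abnormal

HEART_RATE_BANDS = [(51, 90, 0), (91, 110, 1), (111, 130, 2), (41, 50, 1)]
RESP_RATE_BANDS = [(12, 20, 0), (21, 24, 2), (9, 11, 1)]

def warning_score(heart_rate, resp_rate):
    """Sum the per-parameter points; higher totals suggest deterioration."""
    return (band_score(heart_rate, HEART_RATE_BANDS)
            + band_score(resp_rate, RESP_RATE_BANDS))

print(warning_score(heart_rate=85, resp_rate=16))   # stable patient: prints 0
print(warning_score(heart_rate=125, resp_rate=28))  # deteriorating: prints 5
```

Real scores of this kind combine many more parameters and are validated against outcome data; ML-based approaches go further still by learning the weighting from the data rather than fixing bands by hand.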
An industry example of an AI application currently in use for global health is IDx, a US-based startup. The company has succeeded in building the first and only FDA-authorized AI system for the autonomous detection of retinopathy in adults with diabetes, namely IDx-DR (FDA 2018). The shift towards AI-oriented efforts is also evident in academia, e.g., with the foundation of the Stanford Institute for Human-Centered Artificial Intelligence in early 2019. Other academic examples include free online courses, such as MIT’s course “Global Health Informatics to Improve Quality of Care” on edX (Celi 2019) or Andrew Ng’s Stanford course on Machine Learning on Coursera, enabling anyone to gain an understanding of health informatics and of how to leverage Big Data (Ng 2011).
However, despite all the excitement and the predicted opportunities for improving healthcare with AI, bringing ML algorithms from the laboratory to the bedside remains a major challenge. Regulatory and ethical issues surrounding the routine use of AI, such as confirmatory bias or reduced patient safety, have been widely discussed, with uncertainty remaining over when a program’s performance is sufficient and who is accountable in the event of a medical error (Dankwa-Mullan et al. 2018). The continued absence of such controlling mechanisms raises the question of whether the use of AI, even as it solves problems and enhances care delivery, may create new unintended problems, such as reduced efficiency (Yu and Kohane 2018) or questionable clinical success rates of new algorithms, raising concerns for patient safety.