Arguments for recognizing digital health data controllers as fiduciaries
In this section, I present three main arguments for recognizing the relationship between digital health data controllers and the users sharing their health data as fiduciary: (a) the relationship shares features with traditional fiduciary relationships; (b) the relationship involves circumstances similar to those that have led to the establishment of fiduciary relationships in the past; and (c) fiduciary law is better suited than contract law for protecting user privacy and enabling the trust required for sharing health data with data controllers.
Before I expand on the arguments for recognizing health data controllers as fiduciaries, however, it is important to discuss what is meant by ‘health data’. As discussed before, earlier legislation in the developed world has afforded a higher level of privacy protection to health data (Bywater and Armstrong 2015; Terry 2012). Yet such legislation, including the EU Data Protection Directive (DPD), which has now been superseded by the GDPR, does not define health data (Bywater and Armstrong 2015). Defining health data can be particularly hard in the present context, where ‘health’ apps collect a variety of data (such as location data) that may or may not reveal a person’s health status. While a full discussion of the definition of health data, and its precise formulation, is beyond the scope of this paper, the definition proposed by the Article 29 Working Party (2015) is useful. According to this proposal, personal data qualifies as health data when it meets at least one of the following criteria:
1. It is clearly/inherently medical data
2. It is raw sensor data which can be used, independently or in combination with other data, to draw conclusions about the health status or health risk of an individual
3. It allows for reasonable conclusions to be drawn about an individual’s health risk or health status, irrespective of the accuracy, legitimacy or adequacy of these conclusions
One problem with this definition, which the Article 29 Working Party itself notes, is that it may make the category of health data too broad. Given the argument of this paper, one might worry that such a broad definition would impose fiduciary duties on an overly wide range of data controllers (Article 29 Working Party 2015). One important merit of this definition, however, is that it includes data controllers who collect data outside traditional healthcare settings. This is crucial because, in the digital age, a great deal of health data worthy of protection is collected outside traditional health settings.
In order to strike a balance between not making the definition too broad, while also including data controllers who collect data through, say, mobile apps and wearable devices, I propose that fiduciary duties be imposed on health data controllers who (a) process data with the intention of using the data to determine the health status of a specific person, or (b) collect raw data in situations where it would be reasonable for a data subject to conclude that the data is being collected to determine their health status. The first criterion ensures that raw data which may not seem health related in any obvious way, but is then used in a way that reveals the health status of the data subject, is also protected. Raw data that may not look like health data can, when collected over long periods of time or combined with other data, reveal the health status of specific individuals and therefore needs protection. At the same time, under this criterion, data controllers who process such raw data but do not intend to use it to determine the health status of a specific person would not be charged with fiduciary duties. Yet there is a risk here that some data controllers may collect sensitive health data, which would be worthy of protection, but claim that they do not intend to use it to determine the health status of specific subjects. This could, for example, be the case with data collected through sensors on mobile or wearable devices, where data subjects may reasonably conclude that the data is collected for health-related purposes (because, for example, the marketing of the device suggests that data is being collected in the interest of individual or public health). The second criterion plugs this loophole.
With this working definition, I argue that digital health data controllers share features with traditional fiduciaries in that they offer socially desirable services and enjoy a significant advantage over the users from whom they collect health data. The relationship between users and digital health data controllers is asymmetrical: users typically lack expertise, information about the data controllers, and information about the actions those controllers might take with user data. This vulnerability of users relative to digital health data controllers can be seen as grounds for establishing a fiduciary relationship, as some scholars and courts have argued.
As discussed earlier, fiduciary relationships are also established on grounds of enabling trust. Digital health data controllers, in some cases, put themselves forward as trustworthy organizations that will not misuse user data and present themselves as acting in the interest of their users (for example, Fitbit Privacy Policy 2016). At the same time, digital health data controllers do not disclose full details about their handling of user data, sometimes for good reasons such as security (since disclosing detailed data security measures can itself create risk) and competitiveness. This incomplete disclosure, coupled with the high costs of transparency, can create a lack of trust among users, eventually leading to non-participation (by not sharing data, for example) in the promised digital health revolution. Fiduciary relationships between users and digital health data controllers, where the latter are required to act in the interests of the users, can therefore be valuable in making data controllers trustworthy and in facilitating collective participation.
The need to establish trust and to compensate for vulnerability, however, as argued earlier, may not be sufficient grounds for establishing fiduciary relationships, even if such relationships would have advantages. The third and most important reason for establishing fiduciary relationships between data subjects and the data controllers with whom health data is shared, I argue, is that fiduciary relationships are better suited than contractual or statutory obligations [such as the privacy agreements users click ‘agree’ to on their digital devices (contractual) or obligations defined through legislation (statutory)] for protecting user privacy and for balancing privacy protection against other goals related to societal interests.
To this end, I argue that stringent privacy protection is difficult to achieve through prescriptive legal measures, such as those possible through contracts or privacy agreements. Even if users were able to afford the costs of transparency and give informed consent for the use of their data, the changing nature of technology would still leave the door open for privacy harms and for opportunism by those who want to cause such harms. As discussed earlier, fiduciary law, as opposed to contract, affords the kind of deliberative and strategic interaction required to guard against opportunists. Privacy is contextual and depends on multiple factors, such as the nature of the information, the context in which it is shared, and the prospective users of that information (Nissenbaum 2011; Solove 2007). Fiduciary law allows for the flexibility required to cater to this contextual nature of privacy. Here, I will use security, anonymization and data minimization as examples of the contextualization and flexibility required to deal with privacy issues. These are just examples, however, and not an exhaustive list of cases where decisions and methods for privacy protection require contextualization.
Securing user data, an integral aspect of privacy protection, requires diligence and regular upgrading of security measures against cyberattacks and hacks. The recent cyberattack on the UK’s National Health Service computers with the WannaCry ransomware is a case in point (Martin et al. 2017). Systems were found vulnerable largely because of a failure to upgrade software, rendering them unable to cope with the ransomware (Martin et al. 2017). Health data, as discussed before, is particularly valuable to cyber attackers, and healthcare is therefore one of the most targeted sectors for cyberattacks (Athinaiou 2017; Martin et al. 2017). Securing health data thus requires diligent measures that can guard against an opportunist hacker who may exploit vulnerabilities in a digital system. Data security may also require some secrecy or incomplete disclosure of data security policies (to keep them secret from hackers, for example). It can be difficult to counter such opportunism through contracts which specify what steps health data controllers must take to secure user data, as it is hard to anticipate all future contingencies (such as new tools for hackers or changes in security technologies). Fiduciary law, on the other hand, because of its open-ended approach and deliberative requirements (through the duty of loyalty), can help ensure that health data controllers take appropriate measures to secure user health data. Fiduciary law can also help increase data sharing by not prescribing expansive security requirements for controllers who collect less sensitive or easily securable data.
Another crucial aspect of privacy protection for electronic data is anonymization (Ohm 2009). Anonymization aims to make re-identification of data subjects impossible, so that data can be shared for useful purposes, in aggregated form, without the risk of privacy harms. The importance of anonymization or de-identification has also been recognized in and embedded into legislation, such as the European Union’s GDPR and HIPAA in the United States (Hintze 2017; Yakowitz 2011). These laws often prescribe techniques for anonymization, such as the removal of personal identifiers (names, phone numbers, social security numbers, etc.) (Ohm 2009; Yakowitz 2011). However, recent studies have shown that such prescriptive techniques may not be adequate, as computer scientists have been able to re-identify individuals from anonymized data stripped of personal identifiers (Narayanan and Felten 2014; Ohm 2009). Stringent anonymization may therefore require contextualization, such that data is also stripped of indirect identifiers or is randomized, depending on the kind of data collected (Ohm 2009). Further, the risk of re-identification is not the same for all kinds of data, and for some data it may be enough to apply techniques that make re-identification complex enough to take away the incentives for attempting it (Yakowitz 2011).
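To make the re-identification worry concrete, consider the following minimal sketch (all names, values and field choices are hypothetical, and the linkage attack is a simplified stand-in for the techniques in the studies cited above). It shows how a dataset stripped of direct identifiers can still be linked to named individuals through quasi-identifiers such as ZIP code, birth year and sex:

```python
# Hypothetical records; a naive rule removes only direct identifiers.
records = [
    {"name": "Alice Gray", "phone": "555-0100", "zip": "02139",
     "birth_year": 1964, "sex": "F", "diagnosis": "diabetes"},
    {"name": "Bob Stone", "phone": "555-0101", "zip": "02139",
     "birth_year": 1982, "sex": "M", "diagnosis": "asthma"},
]
DIRECT_IDENTIFIERS = {"name", "phone"}

def strip_direct_identifiers(record):
    """Remove obvious identifiers, as prescriptive rules often require."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

anonymized = [strip_direct_identifiers(r) for r in records]

# An attacker holding an identified public dataset (say, a voter roll)
# can link rows back on the remaining quasi-identifiers alone.
voter_roll = [
    {"name": "Alice Gray", "zip": "02139", "birth_year": 1964, "sex": "F"},
]
QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(anon_rows, public_rows):
    """Match 'anonymized' rows to named rows on quasi-identifiers."""
    return [
        (known["name"], anon["diagnosis"])
        for anon in anon_rows
        for known in public_rows
        if all(anon[q] == known[q] for q in QUASI_IDENTIFIERS)
    ]

print(reidentify(anonymized, voter_roll))  # [('Alice Gray', 'diabetes')]
```

Which fields count as quasi-identifiers depends on what auxiliary data an attacker can obtain, which is exactly why a fixed, prescriptive list of identifiers to remove cannot guarantee anonymity and contextual judgment is required.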
Another problem with prescribing anonymization through legal measures is that anonymization may not even be desirable for some kinds of data. Evans (2011) points out that anonymization may make it impossible to link data longitudinally. Longitudinal health data, collected across different health environments, can be invaluable in generating insights for an individual as well as at a more general level, for example, by helping researchers determine correlations between different biological factors and by enabling more organized efforts to tackle health and social problems (Evans 2011; Holman et al. 2008). Requiring anonymization for all health data may take away the opportunity to assemble longitudinal data for research, as well as for other uses in which the data subject may benefit without serious threats to their privacy.
Thus, as in the case of securing user data, anonymization too requires contextual decision-making. Such contextual decisions can be hard to codify in contracts, which would have to anticipate all future contingencies in all possible contexts. As fiduciaries, digital health data controllers would be able to make contextual decisions about anonymization, deciding whether anonymization is needed and, if so, to what degree.
Finally, as a third example of the advantages of a contextual approach to privacy, consider data minimization. Data minimization, a principle also included in the GDPR, states that data must be “limited to what is necessary in relation to the purposes for which they are processed” (Art. 5 GDPR n.d.). In addition to the scope of data collected, the minimization principle within the GDPR also concerns the time for which data is retained and stored (Recital 39 n.d.; Zarsky 2016). The minimization principle can be important in protecting user privacy by limiting opportunities for collecting irrelevant data, and in reducing cybersecurity risks by requiring controllers to delete data for which no use is intended. However, in the age of big data analytics, an ex-ante analysis of the relevance of data and restrictions on its retention can severely limit the benefits of big data analytics. This has been noted by other commentators [see Zarsky (2016)], while some have predicted that a requirement such as data minimization is likely to be breached (Rubinstein 2012). Again, a contextual approach to privacy, as made possible through a fiduciary approach, can achieve a better balance between protecting privacy and realizing the benefits of big data analytics, by loosening data minimization requirements or by achieving the intended effects of minimization through other means wherever necessary. While contractual law and statutory law (such as the GDPR) can also have context-sensitive features, and in the case of the GDPR do, a fiduciary approach can enable more flexibility in fulfilling data controllers’ obligation to protect user privacy, particularly in allowing data controllers to choose the most appropriate method of doing so while setting aside prescriptions that may be counter-productive or disadvantageous in the given context.
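For illustration, a purpose-based minimization policy can be sketched in a few lines of code; the field names, purpose and retention period below are all hypothetical, and this is a simplified sketch of the principle rather than a compliance recipe:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: purpose -> (fields declared necessary, retention).
PURPOSES = {
    "heart_rate_monitoring": (
        {"user_id", "timestamp", "heart_rate"},
        timedelta(days=365),
    ),
}

def minimize(record, purpose):
    """Keep only the fields declared necessary for the stated purpose."""
    allowed, _ = PURPOSES[purpose]
    return {k: v for k, v in record.items() if k in allowed}

def purge_expired(rows, purpose, now=None):
    """Drop records retained longer than the declared retention period."""
    _, retention = PURPOSES[purpose]
    now = now or datetime.now(timezone.utc)
    return [r for r in rows if now - r["timestamp"] <= retention]

# Example: location is dropped at collection time because it is not
# declared necessary for heart-rate monitoring.
raw = {"user_id": 42, "timestamp": datetime.now(timezone.utc),
       "heart_rate": 71, "location": (51.5, -0.1)}
stored = [minimize(raw, "heart_rate_monitoring")]
stored = purge_expired(stored, "heart_rate_monitoring")
```

The fiduciary point is that the allowlist and the retention period in such a policy are contextual choices: what counts as ‘necessary’ depends on the purpose and the sensitivity of the data, and is hard to fix ex ante for all controllers alike.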
Further, as fiduciaries, digital health data controllers would not only be required to take a contextual approach to privacy protection, but also to refrain from deceiving or actively harming the data subject while pursuing their obligation to protect the data subject’s privacy. This is an advantage over contract or statutory law, which may leave room for opportunistic or deceptive behavior on the part of data controllers [see, for example, Wachter (2018) and Zarsky (2016) for examples of loopholes in the GDPR which data controllers might use for their own benefit and which may deny data subjects rights, exposing them to risks].
Here, I have outlined how fiduciary relationships between health data subjects and health data controllers can enable collective participation by ensuring that better decisions are made about users’ data on their behalf. Fiduciary relationships not only compensate for the high costs of transparency, but are also better suited than alternative approaches because they can flexibly contextualize privacy (and privacy protection).
Nature and scope of duties and obligations that health data controllers should have as fiduciaries
As argued in “The nature and scope of fiduciary duties” section, central to fiduciary law is the duty of loyalty, which primarily dictates that fiduciaries must keep the interests of the beneficiaries at the forefront. Yet, as argued earlier, scholars and courts do not share a consensus on the scope of such a duty, that is, on how far fiduciaries should go in pursuit of the beneficiaries’ interests. The duty of loyalty can range, for example, from avoiding conflicts of interest to an affirmative devotion towards the beneficiary.
The abstract and open-ended nature of the fiduciary duty of loyalty, however, as I argued earlier, does not render it an empty vessel. Rather, it opens up the possibility of pluralism within fiduciary law and of more precise formulations of specific fiduciary relationships. At the same time, if the duty within a specific fiduciary relationship is too expansive, it will be too difficult to carry out. The open-ended and abstract nature of fiduciary duty therefore needs to be balanced with specificity about the interests of the beneficiary that the fiduciary should pursue within a specific fiduciary relationship.
The proposal that we specify the scope of fiduciary duty, such that there are bounds to fiduciary loyalty, is not unique; it is also applied to traditional fiduciaries. Physicians, for example, are not expected to be loyal to their patients at all costs. A physician is only obligated to provide care to a patient at a reasonable time and place (for instance, a physician is not obligated to attend to night or house calls) (Mehlman 2015).
Courts also recognize similar limits to the degree of loyalty physicians owe their patients. That is, while the physician is expected to put the patient’s interest ahead of his own, courts recognize that there should be reasonable limits to the expectation of such loyalty. For example, while physicians cannot deny treatment pending assurance of payment in urgent situations, they may unilaterally terminate their relationship with a patient (even on financial grounds) as long as the patient is given notice and a reasonable opportunity to obtain treatment elsewhere (Mehlman 2015).
It is therefore important to specify the bounds of the fiduciary duty that health data controllers have towards their data subjects. First, as an essential part of the duty of loyalty, health data controllers should not use the information they collect to harm individuals, for example, by harassing, exploiting, embarrassing or manipulating them. Beyond this primary requirement, I argue that the duty of loyalty for health data controllers should be aimed specifically at protecting the privacy of the users sharing their health data. Catering to individual privacy concerns is an important step in enabling users to trust controllers with their data, and thus in opening the way for collective participation in the digital health revolution. As argued in the previous section, privacy protection requires contextualization, wherein the type of data as well as the technologies involved in its collection, storage and sharing are taken into consideration. A duty of loyalty aimed specifically at protecting the privacy of users thus still requires deliberation and diligence on the part of health data controllers.
At the same time, defining the duty of loyalty as specifically aimed at privacy protection avoids the danger of making the duty too expansive, and the corresponding difficulties of carrying out such a duty. An expansive duty of loyalty, such as one requiring a general affirmative devotion to the user, might take away the incentive for health data controllers to invest in digital health technologies, and thus, hamper the path towards a better healthcare system.
For example, an alternative to the scope of fiduciary loyalty proposed here would be to require that fiduciaries go beyond the protection of privacy and also ensure that a broader or more general set of user interests is kept at the forefront when sharing health data with third parties, even in anonymized and de-identified form. This would, for example, require that data be shared only for purposes that are beneficial to the users. Such an expansive requirement, however, would place too great a burden on health data controllers to evaluate the outcomes of the data they share with third parties. It would also significantly reduce the incentive for health data controllers to share data, even in anonymized form, for health research, as doing so might open up the possibility of breach claims by users who do not find the aims or outcomes of the research in their interests. This is not to argue that health data controllers should be allowed to share data with any third party. Rather, the lawful basis for sharing data with third parties should not be determined solely through fiduciary duties (which could lead to a more abstract and expansive definition of those duties), but also through legal instruments such as those already implemented (Long 2017).
Fiduciary breach vs medical malpractice
In the previous part of this section, I claimed that health data controllers themselves should not use information collected by them to harm individuals, for example, by harassing, embarrassing or manipulating them. Not causing harm to the beneficiary is an essential part of fiduciary duty and without such a requirement, users would not be able to trust the health data controllers, even if they are assured that their data would not be shared with third parties in an identifiable form. However, I claim here that a distinction should be made between harms caused by medical advice provided by health data controllers and other harms where health data controllers use the data provided by users against them (for example, to harass, manipulate or embarrass them). Harms caused by medical advice by health data controllers, I argue, should be classified as medical malpractice, similar to how the law treats harmful or bad medical advice by physicians. In the following paragraphs, I provide the arguments for why such a distinction should be made and in particular, why the distinction is important for the future of digital health.
With the use of big data and machine learning algorithms, digital health apps not only collect and monitor health data but also offer personalized advice to users (Higgins 2016). This growing reliance upon machine learning algorithms for health advice (as well as for diagnosis and treatment) is referred to as “black-box” medicine (Ford and Price 2016). A key feature of black-box medicine is its opacity: the amount of data involved and the complexity of the algorithms make it hard for humans to know exactly how the algorithms work (Ford and Price 2016).
The algorithms involved in black-box medicine rely on machine learning techniques to find underlying patterns in large quantities of data. The large datasets required for accurate algorithms, however, will take time to assemble, and in the early stages of black-box medicine, where we stand now, these algorithms may be prone to errors (Price 2017a). These errors demand a careful set of regulations and legal instruments to protect users, and they have attracted the attention of regulatory bodies in the developed world, such as the Food and Drug Administration (FDA) in the United States (Price 2017b).
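The opacity at issue can be illustrated with a toy sketch (entirely synthetic data; the random forest model and the use of scikit-learn are my illustrative assumptions, not a claim about how any particular product works):

```python
# Toy illustration of "black-box" opacity; all data is synthetic and the
# model choice is hypothetical. Real systems use far larger datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical wearable features: resting heart rate, hours of sleep,
# daily step count.
X = np.column_stack([
    rng.normal(70, 10, n),      # resting heart rate (bpm)
    rng.normal(7, 1.5, n),      # hours of sleep
    rng.normal(8000, 3000, n),  # daily steps
])
# Hidden ground-truth rule: elevated heart rate AND short sleep => at risk.
y = ((X[:, 0] > 75) & (X[:, 1] < 6.5)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The model emits an advice-like risk score for a new user, but the path
# through hundreds of trees is not something the vendor or a clinician
# can fully articulate; with little or skewed training data it can err.
new_user = np.array([[85, 5.5, 4000]])
print(model.predict_proba(new_user))  # e.g. [[0.05 0.95]]
```

Even in this toy case, the prediction emerges from hundreds of decision trees whose combined reasoning is not readily inspectable, which is the sense of opacity that makes verification of such algorithms difficult.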
Regulating black-box medicine, however, can be quite challenging, and there are risks involved in both under-regulation and over-regulation (Price 2017b). While under-regulation runs the risk of leaving users exposed and vulnerable to medical harms, the risks of over-regulation come in the form of costs to innovation (Price 2017b). Requiring strict criteria for the verification of black-box algorithms may significantly raise the hurdles to getting such products to market, and thus foreclose the potential of algorithmic medicine to improve the healthcare system. Further, verification of the algorithms used in black-box medicine is difficult in most cases, and even impossible in some (Ford and Price 2016).
While fiduciary law could be used to force health data controllers to take steps to design error-free algorithms, such a move may not only disincentivize investment in digital health technologies, it may also be impractical. The risk of being found guilty of a fiduciary breach may force companies to abandon algorithmic medicine, as guaranteeing an error-free algorithm may not be possible. Further, there is also a risk that users may claim a fiduciary breach (on account of a health data controller not being loyal) even when there is no harm or when the degree of harm is very small. The argument here is not that health data controllers should not be held accountable for the algorithms they develop and use, but rather that the harms caused by those algorithms, in the medical context, should be treated similarly to medical malpractice and resolved through other legal instruments. Price (2017a), for example, argues that laws such as medical liability litigation can and should be used to hold algorithmic medicine accountable.
Again, the proposal to distinguish between fiduciary harms and medical malpractice is not unique to algorithmic medicine. The same distinction applies, under current law, to physicians (Mehlman 2015). Courts distinguish between medical malpractice and fiduciary harms caused to the patient. For injuries caused by sub-standard care (including wrong or bad medical advice), as well as to deter unreasonable or unprofessional behavior by physicians, medical liability law is applied, with physicians facing claims of medical malpractice (Mehlman 2015; Price 2017a). In contrast, fiduciary law is usually reserved for the protection of patient confidentiality and for rare cases of physicians acting purely out of self-interest (Drozd and Dale 2006; Mehlman 2015).
As I have discussed throughout this paper, fiduciary law is abstract and open-ended. Applying fiduciary law to regulate algorithmic medicine would be detrimental to progress and innovation within the field, which, at least in theory, and with other instruments of regulation, can have significant positive effects on the state of healthcare. The scope of fiduciary duties for health data controllers defined here attempts to strike a balance between protecting individual interests, by addressing privacy concerns, and the collective interest in gaining valuable insights about human health, as well as in facilitating the research and innovation required for gathering such insights.
Finally, it should be noted that although in this section I have tried to specify the bounds of the fiduciary duty of loyalty, the courts would have an important role in contextually interpreting these bounds and in deciding whether a fiduciary breach has taken place. This is not a limitation, but rather an important aspect of fiduciary law, which can push the fiduciary to go beyond what can be defined by contractual law in protecting the interests of the beneficiary. As discussed in an earlier example, fiduciary duties are better suited than statutory or contractual obligations to ensure that health data controllers take appropriate data security measures to protect user data from hackers. At the same time, data breaches may happen due to vulnerabilities beyond the control of health data controllers, leaving it to the courts to decide whether, in a particular case, a security breach also amounts to a fiduciary breach.