Alongside establishing material goals, the AIDP outlines a specific desire for China to become a world leader in defining ethical norms and standards for AI. Following the release of the AIDP, the government, public bodies, and industry within China were relatively slow to develop AI ethics frameworks (Lee 2018; Hickert and Ding 2018). However, there has been a recent surge in attempts to define ethical principles. In March 2019, China’s Ministry of Science and Technology established the National New Generation Artificial Intelligence Governance Expert Committee. In June 2019, this body released eight principles for the governance of AI. The principles emphasised that, above all else, AI development should begin from enhancing the common well-being of humanity. Respect for human rights, privacy, and fairness was also underscored within the principles. Finally, they highlighted the importance of transparency, responsibility, collaboration, and agility in dealing with new and emerging risks (Laskai and Webster 2019).
In line with this publication, the Standardization Administration of the People’s Republic of China, the national-level body responsible for developing technical standards, released a white paper on AI standards. The paper contains a discussion of the safety and ethical issues related to the technology (Ding and Triolo 2018). Three key principles for setting the ethical requirements of AI technologies are outlined. First, the principle of human interest states that the ultimate goal of AI is to benefit human welfare. Second, the principle of liability emphasises the need to establish accountability as a requirement for both the development and the deployment of AI systems and solutions. Subsumed within this principle is transparency, which supports the requirement of understanding the operating principles of an AI system. Third, the principle of consistency of [sic] rights and responsibilities emphasises that, on the one hand, data should be properly recorded and oversight present but, on the other hand, that commercial entities should be able to protect their intellectual property (Ding and Triolo 2018).
Government-affiliated bodies and private companies have also developed their own AI ethics principles. For example, the Beijing Academy of Artificial Intelligence, a research and development body including China’s leading companies and Beijing universities, was established in November 2018 (Knight 2019). This body then released the ‘Beijing AI Principles’ to be followed for the research and development, use, and governance of AI (“Beijing AI Principles” 2019). Similar to the principles forwarded by the AIDP Expert Committee, the Beijing Principles focus on doing good for humanity, using AI ‘properly’, and having the foresight to predict and adapt to future threats. In the private sector, one of the most high-profile ethical frameworks has come from the CEO of Tencent, Pony Ma. This framework emphasises the importance of AI being available, reliable, comprehensible, and controllable (Si 2019). Finally, the Chinese Association for Artificial Intelligence (CAAI) has yet to establish ethical principles but did form an AI ethics committee in mid-2018 with this purpose in mind (“AI association to draft ethics guidelines” 2019).
The aforementioned principles bear some similarity to those supported in the Global North (Floridi and Cowls 2019), yet institutional and cultural differences mean that the outcome is likely to be significantly different. China’s AI ethics needs to be understood in terms of the country’s culture, ideology, and public opinion (Triolo and Webster 2017). Although a full comparative analysis is beyond the scope of this article, it might be anticipated, for example, that the principles which emerge from China place a greater emphasis on social responsibility and group and community relations, with relatively less focus on individualistic rights, thus echoing earlier discussions about Confucian ethics on social media (Wong 2013).
In the following sections, we shall focus on the debate about AI ethics as it is emerging in connection with privacy and medical ethics, because these are two of the most mature areas in which one may grasp a more general sense of the current ‘Chinese approach’ to digital ethics. The analysis of the two areas is not meant to provide an exhaustive map of all the debates about ethical concerns over AI in China. Instead, it may serve to highlight some of the contentious issues that are emerging, and to inform a wider understanding of the kinds of boundaries that may be drawn when China sets its normative agenda.
4.1 Privacy
All of the sets of principles for ethical AI outlined above mention the importance of protecting privacy. However, there is a contentious debate within China over exactly what types of data should be protected. China has historically had weak data protection regulations—which have allowed public and private actors to collect and share enormous amounts of personal information—and little protection for individual privacy. In 2018, Robin Li, co-founder of Baidu, stated that ‛the Chinese people are more open or less sensitive about the privacy issue. If they are able to trade privacy for convenience, safety and efficiency, in a lot of cases, they are willing to do that’ (Liang 2018). This viewpoint—which is compatible with the apparently not too negative responses to the Social Credit System—has led some Western commentators to misconstrue public perceptions of privacy in their evaluations of China’s AI strategy (Webb 2019). However, Li’s understanding of privacy is not one that is widely shared, and his remarks sparked a fierce backlash on Chinese social media (Liang 2018). This concern for privacy is reflected in survey data from the Internet Society of China, with 54% of respondents stating that they considered the problem of personal data breaches to be ‘severe’ (Sun 2018). Given documented cases of data misuse, this number is unsurprising. For example, a China Consumers Association survey revealed that 85% of people had experienced a data leak of some kind (Yang 2018). Thus, contrary to what may be inferred from some high-profile statements, there is a general sentiment of concern within the Chinese public over the misuse of personal information.
As a response to these serious concerns, China has been implementing privacy protection measures, leading one commentator to refer to the country as ‛Asia’s surprise leader on data protection’ (Lucas 2018). At the heart of this effort has been the Personal Information Security Specification (the Specification), a privacy standard released in May 2018. This standard was meant to elaborate on the broader privacy rules established in the 2017 Cyber Security Law. In particular, it focused on both protecting personal data and ensuring that people are empowered to control their own information (Hong 2018). A number of the provisions within the standard were particularly all-encompassing, including a broad definition of sensitive personal information, which extends to data whose misuse could cause harms such as reputational damage. The language used in the standard led one commentator to argue that some of the provisions were more onerous than those of the European Union’s (EU) General Data Protection Regulation (GDPR) (Sacks 2018).
Despite this evidence, the nature of the standard means that it is not really comparable to the GDPR. On the one hand, rather than being a piece of formally enforceable legislation, the Specification is merely a ‘voluntary’ national standard created by the China National Information Security Standardization Technical Committee (TC260). It is on this basis that one of the drafters stated that the standard was not comparable to the GDPR, as it is only meant as a guiding accompaniment to previous data protection legislation, such as the 2017 Cyber Security Law (Hong 2018). On the other hand, there remains a tension that is difficult to resolve because, although standards are only voluntary, standards in China hold substantive clout for enforcing government policy aims, often through certification schemes (Sacks and Li 2018). In June 2018, a certification standard for privacy measures was established, with companies such as Alipay and Tencent Cloud receiving certification (Zhang and Yin 2019). Further, the Specification spells out the specifics of the enforceable Cybersecurity Law, with Baidu and Alipay both forced to overhaul their data policies due to not ‘complying with the spirit of the Personal Information Security Standard’ (Yang 2019).
In reality, the weakness of China’s privacy legislation is due less to its ‘non-legally binding’ status and more to its many loopholes, the weakness of China’s judicial system, and the influential power of the government, which is often the final authority and is not held accountable through democratic mechanisms. In particular, significant and problematic exemptions are present for the collection and use of data, including when related to security, health, or the vague and flexibly interpretable ‘significant public interests’. It is these large loopholes that are most revealing of China’s data policy. Some broad consumer protections are indeed present, but they do not extend to constrain the government (Sacks and Laskai 2019). Thus, the strength of privacy protection is likely to be determined by the government’s decisions surrounding data collection and usage, rather than by legal and practical constraints. This is alarming.
It is important to recognise that the GDPR contains a similar ‘public interest’ basis for lawfully processing personal data where consent or anonymisation are impractical, and that these conditions are poorly defined in legislation and often neglected in practice (Stevens 2017). But the crucial and stark difference between the Chinese and EU examples concerns the legal systems underpinning the two approaches. The EU’s judicial branch has substantive influence, including the capacity to interpret legislation and to use judicial review mechanisms to determine the permissibility of legislation more broadly. In contrast, in the Chinese legal system the judiciary is subject to supervision and interference from the legislature, which has de jure legislative supremacy (Ji 2014); this gives de facto control to the Party (Horsley 2019). Thus, the strength of privacy protections in China may be, and often is, determined by the government’s decisions surrounding data collection and usage rather than by legal and practical constraints. As has been remarked, ‘The function of law in governing society has been acknowledged since 2002, but it has not been regarded as essential for the CCP. Rather, morality and public opinion concurrently serve as two alternatives to the law for the purpose of governance. As a result, administrative agencies may ignore the law on the basis of party policy, morality, public opinion, or other political considerations’ (Wang and Liu 2019, p. 6).
Relating this back to AI policy, China has benefited from the abundance of data that historically lax privacy protections have facilitated (Ding 2018). On the surface, China’s privacy legislation seems to contradict other development commitments, such as the Social Credit System, which requires extensive personal data. This situation creates a dual ecosystem whereby the government is increasingly willing to collect masses of data, with little regard for privacy, while simultaneously admonishing tech companies for the measures they employ (Sacks and Laskai 2019). Recall that private companies, such as those in the AI National Team, are relied upon for governance at both a national and local level, and may therefore receive tacit endorsement rather than admonishment in cases where the government’s interests are directly served. As a result, the ‘privacy strategy’ within China appears to aim to protect the privacy of a specific type of consumer, rather than that of citizens as a whole, allowing the government to collect personal data wherever and whenever it may be merely useful (not even strictly necessary) for its policies. From an internal perspective, one may remark that, when viewed against a backdrop of high levels of trust in the government and frequent private-sector leaks and misuses, this trade-off seems more intelligible to the Chinese population. The Specification has substantial scope for revising this duality, with a number of loopholes having been closed since the initial release (Zhang and Yin 2019), but it seems unlikely that privacy protections from government intrusion will be codified in the near future. The ethical problem remains unresolved.
4.2 Medical ethics
Medical ethics is another significant area impacted by the Chinese approach to AI ethics. China’s National Health Guiding Principles have been central to the strategic development and governance of its national healthcare system for the past 60 years (Zhang and Liang 2018). They have been re-written several times, as the healthcare system has transitioned from being a single-tier system, prior to 1978, to a two-tier system that was reinforced by healthcare reform in 2009 (Wu and Mao 2017). The last re-write of the Guiding Principles was in 1996, and the following principles still stand (Zhang and Liang 2018):
(a) People in rural areas are the top priority;
(b) Disease prevention must be placed first;
(c) Chinese traditional medicine and Western medicine must work together;
(d) Health affairs must depend on science and education;
(e) Society as a whole should be mobilised to participate in health affairs, thus contributing to the people’s health and the country’s overall development.
All five principles are relevant for understanding China’s healthcare system as a whole but, from the perspective of analysing the ethics of China’s use of AI in the medical domain, principles (a), (b), and (e) are the most important. They highlight that—in contrast to the West, where electronic healthcare data are predominantly focused on individual health, and thus AI techniques are considered crucial to unlock ‘personalised medicine’ (Nittas et al. 2018)—in China, healthcare is predominantly focused on the health of the population. In this context, the ultimate ambition of AI is to liberate data for public health purposes (Li et al. 2019a, b). This is evident from the AIDP, which outlines the ambition to use AI to ‘strengthen epidemic intelligence monitoring, prevention and control,’ and to ‘achieve breakthroughs in big data analysis, Internet of Things, and other key technologies’ for the purpose of strengthening community intelligent health management. The same aspect is even clearer in the State Council’s 2016 official notice on the development and use of big data in the healthcare sector, which explicitly states that health and medical big data sets are a national resource and that their development should be seen as a national priority to improve the nation’s health (Zhang et al. 2018).
From an ethical analysis perspective, the promotion of healthcare data as a public good throughout public policy—including documents such as the Measures on Population Health Information and the Guiding Opinions on Promoting and Regulating the Application of Big Medical and Health Data (Chen and Song 2018)—is crucial. This approach, combined with lax rules about data sharing within China (Liao 2019; Simonite 2019) and the encouragement of the open sharing of public data between government bodies (“Outline for the Promotion of Big Data Development” 2015), promotes the collection and aggregation of health data without the need for individual consent, by positioning group beneficence above individual autonomy. This is best illustrated with an example. As part of China’s ‘Made in China 2025’ plan, 130 companies, including ‘WeDoctor’ (backed by Tencent, one of China’s AI national champions), signed co-operation agreements with local governments to provide medical check-ups, comprising blood pressure, electrocardiogram (ECG), urine, and blood tests, free of charge to rural citizens (Hawkins 2019). The data generated by these tests were automatically (that is, with no consent from the individual) linked to a personal identification number and then uploaded to the WeDoctor cloud, where they were used to train WeDoctor’s AI products. These products include the ‘auxiliary treatment system for general practice’, which is used by village doctors to provide suggested diagnoses and treatments from a database of over 5000 symptoms and 2000 diseases. Arguably, the sensitive nature of the data can make ‛companies—and regulators—wary of overseas listings, which would entail greater disclosure and scrutiny’ (Lucas 2019). Although this and other similar practices do involve anonymisation, they stand in stark contrast to the European and US approaches to the use of medical data, which prioritise individual autonomy and privacy rather than social welfare. A fair balance between individual and societal needs is essential for an ethical approach to personal data, but there is an asymmetry: an excessive emphasis on an individualistic approach may easily be rectified with the consent of the individuals concerned, whereas a purely societal approach remains unethical insofar as it too easily overrides individual rights and cannot easily be rectified.
Societal welfare may end up justifying the sacrifice of individual rights as a means. This remains unethical. However, how this is perceived within China remains a more open question. One needs to recall that China has very poor primary care provision (Wu and Mao 2017), that it achieved 95% health coverage (via a national insurance scheme) only in 2015 (Zhang et al. 2018), that it has approximately 1.8 doctors per 1000 citizens compared to the OECD average of 3.4 (Liao 2019), and that it is founded on Confucian values that promote group-level equality. It is within this context that the ethical principle of the ‘duty of easy rescue’ may be interpreted more insightfully. This principle prescribes that, if an action can benefit others and poses little threat to the individual, then the ethical option is to complete the action (Mann et al. 2016). In this case, from a Chinese perspective, one may argue that sharing healthcare data poses little immediate threat to the individual, especially as Article 6 of the Regulations on the Management of Medical Records of Medical Institutions, Article 8 of the Management Regulations on Application of Electronic Medical Records, Article 6 of the Measures for the Management of Health Information, the Cybersecurity Law of the People’s Republic of China, and the new Personal Information Security Specification all provide specific and detailed instructions to ensure data security and confidentiality (Wang 2019). At the same time, it could potentially deliver significant benefit to the wider population.
The previous ‘interpretation from within’ does not imply that China’s approach to the use of AI in healthcare is acceptable or raises no ethical concerns. The opposite is actually true. In particular, the Chinese approach is undermined by at least three main risks.
First, there is a risk of creating a market for human care. China’s two-tiered medical system provides state-insured care for all, and the option for individuals to pay privately for quicker or higher-quality treatment. This is in keeping with Confucian thought, which encourages the use of private resources to benefit oneself and one’s family (Wu and Mao 2017). With the introduction of Ping An Good Doctor’s unmanned ‘1-min clinics’ across China (of which there are now as many as 1,000 in place), patients can walk in, provide symptoms and medical history, and receive an automated diagnosis and treatment plan (followed up by human clinical advice only for new customers). It is thus entirely possible to foresee a scenario in which only those who are able to pay will be able to access human clinicians. In a field where emotional care and involvement in decision-making are often as important as the logical deduction of a ‘diagnosis’, this could have a significantly negative impact on the level and quality of care accessed across the population and on the integrity of the self (Andorno 2004; Pasquale 2015), at least for those who are unable to afford human care.
Second, in the context of a population that is still rapidly expanding yet also ageing, China is investing significantly in the social informatisation of healthcare and has, since at least 2015, been linking emotional and behavioural data extrapolated from social media and daily healthcare data (generated from ingestibles, implantables, wearables, carebots, and Internet of Things devices) to Electronic Health Records (Li et al. 2019a, b), with the goal of enabling community care of the elderly. This further adds to China’s culture of state-run mass surveillance and, in the age of the Social Credit System, suggests that the same technologies designed to enable people to remain independent in the community as they age may one day be used as a means of social control (The Medical Futurist 2019), to reduce the incidence of ‘social diseases’—such as obesity and type II diabetes (Hawkins 2019)—under the guise of ‘improving people’s lives’ through the use of AI to improve the governance of social services (as stated in the AIDP).
The third ethical risk is associated with CRISPR gene modification and AI. CRISPR is a controversial gene modification technique that can be used to alter the presentation of genes in living organisms, for example for the purpose of curing or preventing genetic diseases. It is closely related to AI, as machine learning techniques can be used to identify which gene or genes need to be altered with the CRISPR method. The controversies, and potentially significant ethical issues, associated with research in this area stem from the fact that it is not always possible to tell where the line lies between unmet clinical need and human enhancement or genetic control (Cohen 2019). This became clear when, in November 2018, biophysics researcher He Jiankui revealed that he had successfully genetically modified babies using the CRISPR method to limit their chances of ever contracting HIV (Cohen 2019). The announcement was met with international outcry, and He’s experiment was condemned by the Chinese government at the time (Belluz 2019). However, the drive to be seen as a world leader in medical care (Cheng 2018), combined with the promise gene editing offers for the treatment of diseases, suggests that a different response may be possible in the future (Cyranoski 2019; “China opens a Pandora’s Box” 2018). Such a change in government policy is especially likely as global competition in this field heats up. The US has announced that it is enrolling patients in a trial to cure an inherited form of blindness (Marchione 2019); and the UK has launched the Accelerating Detection of Disease challenge to create a five-million-patient cohort whose data will be used to develop new AI approaches to early diagnosis and biomarker discovery (UK Research and Innovation 2019). These announcements create strong incentives for researchers in China to push regulatory boundaries to achieve quick successes (Tatlow 2015; Lei et al. 2019). China has filed the largest number of patents for gene-editing on animals in the world (Martin-Laffon et al. 2019). Close monitoring will be essential if further ethical misdemeanours are to be avoided.