Should Artificial Intelligence Be More Regulated?

Panel Discussion

Leon Strous

Open Access | Chapter | 2019
DOI: https://doi.org/10.1007/978-3-030-15651-0_4

Abstract

Artificial Intelligence (AI) can and does bring immense benefits in all sorts of areas, but it also introduces (new) risks. Is more regulation needed? To answer this question, arguments pro and con were presented by four panel members and then discussed and challenged by the audience. Many issues were raised, among them ethical principles and the obstacles that make it hard to draft good legislation. We do not want to stifle innovation or deny society the benefits of these technologies through excessive regulation. A distinction is made between science (research) and the application of AI technologies, and comparisons with other sectors and technologies are made to see whether parallels can be drawn.

1 Introduction

Many discussions are taking place at the moment about Artificial Intelligence (AI), about the ways AI may benefit mankind and about its risks. Autonomous cars, automated trust assignment to individuals, and autonomous weapons are only a few examples of how AI can change our lives. Some people warn us that AI can be even more dangerous than nuclear power. On the other hand, it seems impossible, and undesirable, to stop the development of AI and its applications. Thus the question arises: what should be the role of governments? Should AI be more regulated, with respect to research and/or its usage? This question was addressed at WCC 2018 in a panel discussion with four panel members:
  • Ulrich Furbach, University of Koblenz-Landau, Germany,
  • Eunika Mercier-Laurent, Lyon III University, France,
  • Chris Rees, British Computer Society, UK and
  • Jerzy Stefanowski, Poznan University of Technology, Poland.
An audience of 40 participants actively engaged in the discussion. The session was recorded on video [1].

2 Arguments Pro and Con

To start the debate, two panel members presented arguments in favour of more regulation and two presented arguments against.
Arguments in Favour of (more) Regulation
AI can and does bring immense benefits in all sorts of areas: cancer diagnosis, mental illness, care for the elderly, and many more will follow, such as autonomous vehicles. We do not want to stifle innovation or deny society the benefits of these technologies through excessive regulation. Any regulation should be risk-based: if the risk is low, regulation should be avoided; if it is high, the application should be regulated. That is how it is in the non-AI world, and the AI world should be no different. Furthermore, we have to recognize that drafting regulations for new and fast-developing technologies such as AI is difficult. There is a risk of building assumptions and language into the regulation that do not stand the test of time.
Our starting point is ethics. The implementation of AI systems, including AI-driven robotics, poses a number of ethical challenges. A non-exhaustive list includes:
  • reliability and safety of complex systems,
  • bias in systems and bias in the data,
  • black box systems that cannot explain or justify their decisions,
  • the allocation of responsibility for failure,
  • malicious use of AI and lethal autonomous weapon systems,
  • the destruction of jobs by AI,
  • the protection of privacy.
The question is: where can we rely on the ethical actions of developers and users of AI, and where is this clearly not adequate, making regulation necessary? Some applications of AI are in domains that are already regulated and have a long history of regulation; medicine and finance are two obvious examples. But the existing regulations may not cover the application of AI and may need to be enhanced to prevent harm to patients or unfair financial practices. Autonomous cars should not be allowed on the road until there is an agreed allocation of responsibility, and therefore of liability for harm. You need third-party insurance that covers the driver, but in an autonomous car there is no driver. So who is responsible and liable: the manufacturer of the car, the manufacturer of failing components, the car salesperson, the owner of the car? And what if the car is hacked, or if software updates have not been installed? There is no doubt that existing regulations do not cover autonomous cars and that new regulation is needed.
Many people have already asked for a ban on lethal autonomous weapons, comparable to the bans on nuclear, chemical and biological weapons. While verifying adherence to such a ban may be difficult, and such weapons are sometimes used nevertheless, laws and regulations have a powerful effect on public opinion.
Economists predict a growth in jobs due to AI, but only in the long term; in the short term, jobs will be lost. Regulation may be needed to provide funding for retraining employees for new jobs.
AI can be used to de-anonymize personal data that has been anonymized. The GDPR may already be a good step in the right direction, restoring control over personal data to the owner instead of the company. However, the protection the GDPR offers against AI-based use of personal data is weak and needs strengthening.
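To make the de-anonymization risk concrete, the following minimal Python sketch illustrates the classical linkage attack that underlies many de-anonymization techniques; AI-based methods generalize the same idea with statistical matching. All records, names and field names below are invented for illustration and do not come from the panel discussion.

# Hypothetical illustration of a linkage attack: an "anonymized"
# dataset (names removed, quasi-identifiers kept) is re-identified
# by joining it with a public dataset that still contains names.
anonymized = [
    {"zip": "02138", "born": "1945-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "born": "1962-01-15", "sex": "M", "diagnosis": "diabetes"},
]
public = [  # e.g. a voter roll
    {"name": "J. Doe", "zip": "02138", "born": "1945-07-31", "sex": "F"},
]
QUASI_IDENTIFIERS = ("zip", "born", "sex")

def reidentify(anonymized, public):
    """Match records that agree on all quasi-identifiers."""
    return [
        {"name": p["name"], "diagnosis": a["diagnosis"]}
        for a in anonymized
        for p in public
        if all(a[q] == p[q] for q in QUASI_IDENTIFIERS)
    ]

print(reidentify(anonymized, public))
# -> [{'name': 'J. Doe', 'diagnosis': 'asthma'}]

Machine learning strengthens such attacks by matching records probabilistically, for instance on writing style or movement patterns, even when no exact quasi-identifiers are shared.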
Machine learning and AI systems are complex systems, and in a number of application areas we should think about regulation. If we consider machine learning and AI systems as products, regulation focuses on the application of AI, on the product. In the medical domain we see systems that can make prognoses and thereby impact people's health. It is important that such a system does not make errors, and it is the task of the producer or vendor to take care of that. Compare it with the process of getting a new drug (medicine) approved: strict procedures and tests take place before the new drug is allowed onto the market. Producers of AI should provide assurance that their product works correctly, and this should be enforced through regulation. Regulation does not always have to mean laws; it can also mean community-agreed rules and processes, or evaluation and certification.
Another element concerns the question whether an AI system should be able to explain the decisions it takes. For some domains and applications this may not be necessary; for others it is, think of legal decisions (e.g. AI-supported court cases). For such systems the ability to explain should be mandatory. That may not be easy: when is an explanation clear enough, and what is the context?
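As a trivial sketch of what "being able to explain" could mean, consider a linear scoring model, where a decision can be explained by listing each feature's contribution to the score. The model, weights and applicant data below are invented for illustration; for black-box models such explanations are much harder to produce, which is exactly where the debate lies.

# Hypothetical linear scoring model: the "explanation" of a decision
# is the per-feature contribution to the final score.
weights = {"income": 0.4, "debt": -0.7, "years_employed": 0.2}   # invented
applicant = {"income": 3.0, "debt": 2.0, "years_employed": 5.0}  # invented

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score = {score:.2f}")                        # 0.80
for feature, c in sorted(contributions.items(), key=lambda x: -abs(x[1])):
    print(f"  {feature}: {c:+.2f}")                  # largest effect first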
Another issue is intellectual property rights. Advanced systems can write poems and stories or compose music. Who owns these works and who benefits from the profits they might generate? Regulation may be needed to clarify such rights.
Arguments Against (more) Regulation
Regulation of AI may be undesirable, and it may be extremely difficult if not impossible. A number of questions support that position:
  • Regulation may work in a normal, ethical society, but how can we regulate a society that is composed of both robots and humans?
  • If regulation is used to prevent machines from doing "something foolish", who decides what is foolish?
  • We live in a business-driven world. What will happen if we try to regulate the market giants? They will move to countries without regulation.
  • Military use of AI is big business with powerful people behind it, and extremely difficult to regulate.
  • How will regulation be effective in data protection if people are willing to provide their data voluntarily to companies?
  • What about regulation and the creativity of the researcher? Efforts to regulate this without a proper understanding of how researchers work may lead to disasters.
  • How do we regulate a robot's learning? How do we tell a robot what it can and what it should not learn?
If regulation is needed, one could consider not only legal regulation but also initiatives that formulate principles. An example is the Asilomar AI Principles. Industry leaders and experts such as Elon Musk and Stephen Hawking have advocated humane and "safe" robotics. Along with hundreds of researchers and experts in the field, they have proposed 23 "guiding principles" intended to ensure the development of AI for the benefit of mankind [2, 3].
Although one might in principle be in favour of regulating certain aspects of AI systems, it simply seems to be impossible. The problem is that we do not know what an AI system is. AI is not a monolithic system; it is embedded in other systems: in our cars, in our search engines, in our shopping carts. AI is a functionality of existing systems. It is completely impossible to control the development of these techniques, and it is impossible to control their use. Other areas that have been regulated show this as well. Consider two examples from the weapons industry. Nuclear weapons: we all know how difficult it is in certain parts of the world to control their development; it is a highly political issue. Chemical weapons: they are banned and nevertheless used. It is impossible to control the use of technology, and it is impossible to control its development. Nor should we want this, because technology is a driving force of our society and we want to learn more. We should not stop science or regulate science, with perhaps some exceptions concerning ethical issues. We do not know exactly what to regulate and how to regulate it. The United Nations has not succeeded in getting all countries to sign a declaration aiming at a ban on lethal autonomous weapons; some countries have major interests in such an industry or other arguments for not signing.
Another example concerns autonomous cars. While liability and insurance issues might be regulated, there are also ethical issues. The German government drafted a report saying that an autonomous car should never be able to face an ethical dilemma. That is impossible; it would be similar to saying that human drivers should never face an ethical dilemma. Furthermore, the report argues that algorithms should be checked and that self-adapting systems should not be applied in autonomous cars. That is also strange: an autonomous car should learn by driving and adapt. It is equally unimaginable that a human driver would be forbidden to learn from mistakes.
In the distant past, when cars had just been introduced, there was a rule in the UK that someone had to walk in front of a car with a red flag to warn people. Perhaps we should use a similar "red flag" to warn people (or better: make them aware) that they are dealing with an AI system. A Blade Runner situation, in which it is difficult to distinguish humans from machines, should be avoided, and regulation might be helpful for that.

3 Summary of the Debate

During the debate the arguments pro and con were both challenged and supported, and some new issues were raised. This section provides a selection of the main topics discussed, sometimes in a Q&A format, sometimes just as additional remarks.
AI tools, systems and technology could or should be regulated, just as human beings are regulated. But what about AI as a scientific discipline: should that be regulated as well? In a sense it already is, in the same way as other scientific disciplines such as medicine and genetics: when applying for research funding, for instance, the request is judged on many aspects, including ethical ones. What should not be regulated are the goals to be pursued with research.
A comparison can be made with the regulation of the Internet. We now realize that we were too late in thinking about regulation when the Internet was created, and that makes it difficult to repair now. Maybe this is also due to the way scientists think: the benefits prevail, especially in areas where AI can take over part of the routine work of professionals who are already overburdened, such as medical doctors. And when focusing on the benefits, the risk of abuse of technology created with the best of intentions might be overlooked.
Regulations should be in place in certain areas, but an additional question is who will be responsible for those regulations. If it is the lawmakers, do they know enough about the topic to draft good regulations? A lot of poor law is written because of insufficient knowledge of the subject matter. Society and lawmakers lag behind technological developments. As professionals at the forefront of these developments, we are better placed to judge where they might lead and what might be an appropriate societal safeguard, long before lawmakers can make those decisions. This means that we as an IT community have an obligation to engage with legislators and support them in drafting decent legislation. We should at least make an effort to be involved.
The issue was raised that requests for funding of scientific research usually have to pass ethical committees, because the funds are taxpayers' money. That is not the case for research and product development done by industry. AI as a technology may not face ethical questions, but its applications do. Is the current ethical oversight (for academic funding) sufficient? For many years of AI research, ethical questions never popped up; we researched nice technology, because there were no real-life applications. Now this is changing, for instance with autonomous cars, and that also introduces the question of the impact of an application. That impact is rarely, if ever, assessed before the application is sold or used.
An interesting perspective was offered from the point of view of small and medium-sized enterprises. When you develop a new product, regulation is at first an obstacle and a difficulty. However, it can also be a benefit if you can advertise that your product meets certain regulations, and the competition has to keep up with that. It is also good for consumers, who can see that a product meets the requirements laid down in regulations.
Regulation in a globalized world is difficult; it is not enough to have regulations in one country or region. So far, however, regulation on a global scale, for instance via the UN, has not been successful. If we want to regulate a borderless development such as AI, we need to do so on a global scale; otherwise it is meaningless. This statement, that regulation will only work on a global scale, was challenged. Take the argument that companies will move to an unregulated country. That will not help them: they may be able to produce the product in such a country, but if they want to sell it in a country that has regulation, they cannot do so unless they comply with it. The GDPR is a good example: US companies have to comply with the GDPR if they want to do business in the EU. Regulation can work well even if it is jurisdiction-based.
Another talk at WCC, about shifting identities, raised an issue of importance to AI. Identity is a multidisciplinary field, and regulating this part of science also means that we need to be very clear about where we want to go and what we want to be in the future. It was mentioned that a link can perhaps be made to the work on consciousness: psychologists and philosophers are trying to find out what it means for humans to have consciousness, while as developers of AI we are working on systems that have a kind of consciousness. The German philosopher Thomas Metzinger argues that we should never try to bring consciousness into an artificial system, because then we could do harm to it, and we are not allowed to do harm to other human beings.
It is our duty as IT professionals to explain AI to people. We should be able to understand how algorithms work, and to explain the choices that were made in designing an algorithm and the effects it may have.

4 Conclusions and Follow-up

While there was not complete agreement on everything and the outcome of the discussion was not fully conclusive, a broad consensus could be noted on a number of issues.
Artificial Intelligence is a broad term that covers science, technology, applications and products. AI can bring benefits, but it can also introduce (new) risks. The answer to the question "should it be (more) regulated?" depends on a variety of aspects; it cannot be a simple yes or no, and things should be considered case by case. The term regulation is not precise either: it can mean a law, but it can also mean mutually agreed rules and procedures.
We do not have clear, easy answers, but we should make efforts and increase awareness. We should debate and work on documents that indicate the critical points. We cannot control everything. You should know who (or what) you are talking to. Difficult and challenging, but not a reason for not trying.
It is important for professionals to engage in discussions like this. We as an IT community have an obligation to engage with legislators and support them in drafting decent legislation.
We should develop methodologies to certify AI products. There is a role for IFIP and other professional societies in thinking about how to define workflows for approving AI-based products.
On 25 April 2018 the European Commission issued the Communication COM(2018) 237 on Artificial Intelligence for Europe [4]. This Communication sets out a European initiative on AI which aims to ensure an appropriate ethical and legal framework. The Commission will (selective quotes):
  • set a framework for stakeholders and experts to develop draft AI ethics guidelines, with due regard to fundamental rights;
  • issue a guidance document on the interpretation of the Product Liability Directive in light of technological developments. This will seek to ensure legal clarity for consumers and producers in case of defective products;
  • publish a report on the broader implications for, potential gaps in and orientations for, the liability and safety frameworks for AI, Internet of Things and robotics;
  • support research in the development of explainable AI and implement a pilot project proposed by the European Parliament on Algorithmic Awareness Building, to gather a solid evidence-base and support the design of policy responses to the challenges brought by automated decision-making, including biases and discrimination.
This Communication addresses precisely the issues raised in the discussion, and it invites stakeholders to participate in the efforts. Let us contribute to these and other efforts around the world. There is a need, and an opportunity, for us as IT professionals to pick up the challenge and continue the discussion. There is momentum now; let us not waste the opportunity. We do not want to observe in ten years' time that we have again missed the boat (as with the Internet). We also have to research some fundamental questions: where do we want to go with regulation, where do we want to go with applications, and who do we want to be? This is an appeal to all participants and readers who are interested in continuing this debate in a search for guidance. If you want to get involved, let me know: send an e-mail to the address at the start of the paper.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.