
2020 | OriginalPaper | Chapter

Ethical Issues, Cybersecurity and Automated Vehicles

Author: Sara Landini

Published in: InsurTech: A Legal and Regulatory View

Publisher: Springer International Publishing


Abstract

The digitalization of mobility and the associated growth of data create new requirements for vehicle safety and infrastructure, which must also satisfy the protection of the personal rights and freedoms of data subjects. Connected and automated vehicles (CAVs) and their driving systems require clear cybersecurity and data protection rules. CAVs must perform their functions safely and reliably across national borders. As the automation and interconnectivity of driving functions increase, data encryption and cybersecurity will become more important, and the emerging rights to individual mobility data will accordingly need to be clearly regulated.
This chapter aims to show the benefits and threats associated with the use of automated cars, starting from the definitions of automation and self-learning machines. It reports legislative interventions and guidelines in Community law, discusses the ethical issues raised by the use of autonomous vehicles (AVs) and the cybersecurity aspects affecting their use, and concludes by highlighting the importance of giving relevance, in regulation, to the decision-making autonomy of machines.


Footnotes
1
Nof (2009), pp. 13–52: The meaning of the term automation is reviewed through its definition and related definitions, historical evolution, technological progress, benefits and risks, and domains and levels of application. A survey of 331 people around the world adds insights into what automation currently means to people, asking: What is your definition of automation? Where did you first encounter automation in your life? What is the most important contribution of automation to society? Respondents mentioned 12 main aspects of the definition, 62 main types of first automation encounter, and 37 types of impacts, mostly benefits but also two benefit–risk combinations: replacing humans, and humans' inability to complete tasks by themselves. The most exciting contribution of automation found in the survey was to encourage and inspire creative work and newer solutions. Minor variations were found across regions of the world. Responses about the first automation encounter relate somewhat to the respondent's age (e.g., pneumatic versus digital control) and to an urban versus farming childhood environment. The chapter concludes with several emerging trends in bioinspired automation, collaborative control and automation, and risks to anticipate and eliminate.
 
2
See Simon (1979). Simon was one of the pioneers of modern scientific domains such as artificial intelligence, information processing, decision-making, problem-solving, organization theory, and complex systems. He was among the earliest to analyze the architecture of complexity and to propose a preferential attachment mechanism to explain power-law distributions. With Allen Newell, he created the Logic Theory Machine (1956) and the General Problem Solver (GPS) (1957) programs; GPS was the first method developed for separating problem-solving strategy from information about particular problems.
 
3
Id.
 
4
The table on automated vehicles is in Pierini (2018) and Pillath (2016).
 
5
Smith (2013). Standards from SAE International are used to advance mobility engineering throughout the world. The SAE Technical Standards Development Program is, and has been for nearly a century, among the organization's primary provisions to the mobility industries it serves: aerospace, automotive, and commercial vehicle. Today's SAE standards product line includes almost 10,000 documents created through consensus standards development by more than 240 SAE Technical Committees with 450+ subcommittees and task groups. These works are authorized, revised, and maintained by the volunteer efforts of more than 9000 engineers and other qualified professionals from around the world. Additionally, SAE has 60 US Technical Advisory Groups (USTAGs) to ISO committees. For additional information on the SAE Technical Standards Development Program, go to http://www.sae.org/standardsdev/.
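For orientation, the SAE taxonomy referenced above distinguishes six levels of driving automation. The sketch below encodes them as an enumeration; the one-line descriptions are abbreviated paraphrases from general knowledge of the J3016 standard, not quotations, and the class name is an arbitrary choice.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Abbreviated paraphrase of the six SAE J3016 driving-automation levels."""
    NO_AUTOMATION = 0           # human driver performs the entire driving task
    DRIVER_ASSISTANCE = 1       # one assist feature (steering or speed); driver does the rest
    PARTIAL_AUTOMATION = 2      # combined steering and speed control; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # system drives, but driver must take over on request
    HIGH_AUTOMATION = 4         # no takeover needed within the system's design domain
    FULL_AUTOMATION = 5         # system can drive wherever a human driver could

# The "pass-off" risk discussed later in this chapter is concentrated at
# level 3, where control can revert from machine to human mid-journey.
print(SAELevel(3).name)  # CONDITIONAL_AUTOMATION
```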
 
6
Bauman (2006), p. 55 ff.
The book deals with how the passage from ‘solid’ to ‘liquid’ modernity has created a new and unprecedented setting for individual life pursuits, confronting individuals with a series of challenges never before encountered. Social forms and institutions no longer have enough time to solidify and cannot serve as frames of reference for human actions and long-term life plans, so individuals have to find other ways to organize their lives. They have to splice together an unending series of short-term projects and episodes that do not add up to the kind of sequence to which concepts like ‘career’ and ‘progress’ could meaningfully be applied. Such fragmented lives require individuals to be flexible and adaptable: constantly ready and willing to change tactics at short notice, to abandon commitments and loyalties without regret, and to pursue opportunities according to their current availability. In liquid modernity, the individual must act, plan actions, and calculate the likely gains and losses of acting (or failing to act) under conditions of endemic uncertainty.
 
7
Parasuraman et al. (2000).
The model can be used as a starting point for considering what types and levels of automation should be implemented in a particular system. The model also provides a framework within which important issues relevant to automation design may be profitably explored. Ultimately, successful automation design will depend upon the satisfactory resolution of these and other issues.
 
8
Billings (1997), Calefato et al. (2008) and Endsley (1999).
 
9
On the problem of “multi-agents” in automation, see Teubner (2018), p. 155 ff; Teubner (2019). He stresses the importance of identifying a financial entity able to compensate victims.
A related problem is the possibility of recognizing legal subjectivity in automated machines. See European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)):
General principles
T. whereas Asimov’s Laws(3) must be regarded as being directed at the designers, producers and operators of robots, including robots assigned with built-in autonomy and self-learning, since those laws cannot be converted into machine code;
U. whereas a series of rules, governing in particular liability, transparency and accountability, are useful, reflecting the intrinsically European and universal humanistic values that characterise Europe’s contribution to society, are necessary; whereas those rules must not affect the process of research, innovation and development in robotics;
V. whereas the Union could play an essential role in establishing basic ethical principles to be respected in the development, programming and use of robots and AI and in the incorporation of such principles into Union regulations and codes of conduct, with the aim of shaping the technological revolution so that it serves humanity and so that the benefits of advanced robotics and AI are broadly shared, while as far as possible avoiding potential pitfalls; (...)
Z. whereas, thanks to the impressive technological advances of the last decade, not only are today’s robots able to perform activities which used to be typically and exclusively human, but the development of certain autonomous and cognitive features – e.g. the ability to learn from experience and take quasi-independent decisions – has made them more and more similar to agents that interact with their environment and are able to alter it significantly; whereas, in such a context, the legal responsibility arising through a robot’s harmful action becomes a crucial issue;
AA. whereas a robot’s autonomy can be defined as the ability to take decisions and implement them in the outside world, independently of external control or influence; whereas this autonomy is of a purely technological nature and its degree depends on how sophisticated a robot’s interaction with its environment has been designed to be;…”
See Borges (2018), p. 977 ff.
 
10
Weinrib (1987), p. 407 ff.
The presence of an automated choice affects both the process of determining the event and the effect of the choice. As we have seen, the interaction between algorithms and human action presents different levels.
According to probability theory, the human agent can be held responsible for the action if it is proved that the action was, with high probability, caused by the human agent.
The problem is that such a view does not consider the interaction between human and machine in causing the event.
Suppose a subject acts using a semi-automated mechanism in which the computer selects the action and informs the human operator, who can cancel it; suppose further that the computer chooses an incorrect option and fails to warn in time, so that the person cannot intervene and avoid damage to third parties. It will not be enough to consider the probability that the computer error caused the damage; it will also be necessary to verify that the user, had the computer warned correctly, would have acted differently.
Thus, we have a double counterfactual judgement: one concerning the human choice and another concerning the automated choice.
If it is proven that the cause of the accident is the automated choice, it will still be necessary to consider whether the computer error is a production error, or whether the option chosen by the computer derives from the combination of algorithms and from an evolution of that combination that is autonomous from its own manufacturer.
If the action or omission of the machine cannot be referred to a human action or omission, we must say that, in the causation proceeding, we are in the presence of an irresistible force that is imputable neither to the user nor to the manufacturer.
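The double counterfactual judgement described above can be made concrete with a toy sketch. Everything in it (the function name, the probability threshold, the return strings) is an illustrative assumption, not a legal standard or anyone's actual method.

```python
def liability_assessment(p_error_caused_damage: float,
                         user_would_have_intervened: bool,
                         threshold: float = 0.8) -> str:
    """Toy two-step counterfactual test for a semi-automated system."""
    # First counterfactual, on the automated choice: was the damage caused,
    # with high probability, by the computer's erroneous option?
    if p_error_caused_damage < threshold:
        return "causation by automated choice not established"
    # Second counterfactual, on the human choice: had the computer warned
    # correctly and in time, would the user have intervened and avoided harm?
    if not user_would_have_intervened:
        return "damage would have occurred even with a correct warning"
    return "both counterfactuals satisfied"

# A 95% probability of computer causation, plus a user who would have
# intervened given a correct warning, satisfies both prongs of the test.
print(liability_assessment(0.95, True))  # both counterfactuals satisfied
```

The point of the second prong is exactly the one made in the text: a high probability that the computer erred is necessary but not sufficient, because the human's hypothetical conduct under a correct warning must also be examined.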
The term “force majeure” is frequently used to indicate causes that are outside the control of the parties, such as natural disasters, that could not be evaded through the exercise of due care. Force majeure is a circumstance that no human foresight could anticipate or which, if anticipated, is too strong to be controlled. Depending on the legal system, such an event may relieve the parties from the obligation to compensate damage.
The term “force majeure” comes from French, but with regard to the present meaning it is important to recall the German concept of höhere Gewalt. According to German jurisprudence, there is höhere Gewalt if the event causing the damage has an external origin and the harm caused cannot be averted or rendered harmless even by the exercise of the utmost reasonable care. However, it must be noted that the French force majeure is not identical with the German höhere Gewalt. See Blaschczok (1998) and Jansen (2003). See also BGH, Urteil vom 21.8.2012 – X ZR 146/11.
 
11
On these two points see Naylor (2017), pp. 175–185.
 
13
See Casualty Actuarial Society, Automated Vehicles and the Insurance Industry — A Pathway to Safety: The Case for Collaboration, Spring 2018, p. 53, https://www.casact.org/pubs/forum/18spforum/01_AVTF_2018_Report.pdf. The paper indicates the following risks:
C1 - Driver Skill Deterioration: The more the technology is in control, the more out of practice individuals might become. Therefore, certain scenarios that individuals are able to handle today may result in an accident in the future. If the technology’s ability increases at a faster rate than the driver’s deteriorates, this may not pose much of a problem. However, manufacturers need to recognize the risk is dynamic. The situation needs constant monitoring as the risk minimization actions may change over time.
C2 - Pass-Off Risk: This is the risk that is created when the vehicle goes from technological control back to human control. This scenario could be triggered by the human choosing to take control or by the vehicle passing responsibility to the individual when it encounters a scenario it is unable to handle.
C3 - Other Driver Interaction: How other drivers, pedestrians, and bikers on the road react is also unknown. Drivers’ reactions can change based on their age, driving experience, familiarity with the technology, their mood, or almost any other factor.
C4 - Animal Hits: While accidents involving animals are included in the NMVCCS, the dataset appears insufficient for extrapolation. State Farm estimates that there are over 1.2 million deer-vehicle collisions annually; however, the NMVCCS's extrapolated number of accidents involving animals is only 22,366, or approximately 1.0 percent of all accidents. This could be due to NHTSA's requirement that a police report be filed for inclusion in the data, and claimants may be less inclined to call the police after a single-vehicle animal hit. The risk animals pose to vehicles varies dramatically by location and time of year. It is also uncertain how the technology interacts with animals. While it may be able to avoid some accidents, animals may be even more unpredictable than people. Residents in areas with significant animal populations will undoubtedly know someone who has had a deer run into the side of their car while driving. There's nothing that can be done in times like these.
C5 - Hacking: The introduction of more technology in the vehicle may increase the risk that vehicles will be hacked. In the future, the risk of hacking may increase regardless of the vehicle's automation. At this point, we do not know what hacking's causes or risk factors may be. Operating in a city may increase the risk by exposing other drivers to the hacked vehicle. It may also decrease part of the risk by reducing the average speed and enabling emergency response teams to respond more quickly. More research will be required to properly evaluate the risk.
C6 - Random Errors: As stated in our assumptions, technological errors will still occur. However, their appearance will be random. Therefore, it is important that when an incident occurs, its severity is minimized.
C7 - Unknown: It’s important to include a placeholder for unknown events. It’s impossible to predict everything that will happen. Therefore, we must accept the fact that there are things that we don’t know and cannot predict.
C8 - Incident Severity Risks: There are a number of factors that determine how severe an incident will be. By breaking the drivers into their respective risk components, we can create a risk management structure that minimizes severity of unpreventable incidents.
• Speed: The number one determinant of accident severity is the vehicle’s speed.
 
14
See Samuel (1959); Koza et al. (1996), pp. 151–170; Mitchell (1997), p. 2; Bishop (2006).
 
15
Bishop (2006).
 
16
Acosta (2018).
 
17
Business Insider, The 3 biggest ways self-driving cars will improve our lives (June 2016), http://www.businessinsider.com/advantages-of-driverless-cars-2016-6/#traffic-and-fuel-efficiency-will-greatly-improve-2.
 
18
Digital Transformation Monitor, Autonomous cars: a big opportunity for European industry (2017), https://ec.europa.eu/growth/tools-databases/dem/monitor/sites/default/files/DTM_Autonomous%20cars%20v1.pdf, p. 5.
 
20
COM(2018) 283 final, Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, “On the road to automated mobility: An EU strategy for mobility of the future”.
 
21
As said in the introduction to the Guidelines: “This working document constitutes a draft of the AI Ethics Guidelines produced by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG), of which a final version is due in March 2019.
Artificial Intelligence (AI) is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.
Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure to follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as a means in itself, but as having the goal to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.
Trustworthy AI has two components: (1) it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose” and (2) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.
These Guidelines therefore set out a framework for Trustworthy AI:
  • Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values that it should comply with.
  • From those principles, Chapter II derives guidance on the realisation of Trustworthy AI, tackling both ethical purpose and technical robustness. This is done by listing the requirements for Trustworthy AI and offering an overview of technical and non-technical methods that can be used for its implementation.
  • Chapter III subsequently operationalises the requirements by providing a concrete but non-exhaustive assessment list for Trustworthy AI. This list is then adapted to specific use cases.”
 
24
Beauchamp (2001); Floridi et al. (2018), pp. 689–707.
 
28
Murphy (2012).
 
29
§ 1a Motor vehicles with highly or fully automated driving function
(1)
The operation of a motor vehicle by means of highly or fully automated driving function is permitted if the function is used as intended.
 
(2)
Motor vehicles with highly or fully automated driving function within the meaning of this Act are those which have technical equipment,
 
1.
which, after activation, can control the respective motor vehicle to perform the driving task, including longitudinal and lateral guidance (vehicle control),
 
2.
which is able to comply with traffic regulations directed at vehicle guidance during highly or fully automated vehicle control,
 
3.
which can be manually overridden or deactivated by the driver at any time,
 
4.
which can recognize the necessity of manual vehicle control by the driver,
 
5.
which can indicate to the driver, visually, acoustically, tactilely, or otherwise perceptibly and with a sufficient reserve of time, that manual vehicle control is required before vehicle control is handed back to the driver, and
 
6.
which indicates any use contrary to the system description.
 
The manufacturer of such a motor vehicle must declare in the system description that the vehicle complies with the requirements of sentence 1.
(3)
The preceding paragraphs shall only be applied to vehicles which are approved in accordance with § 1 (1), which comply with the requirements of paragraph 2 sentence 1 and whose highly or fully automated driving functions
 
1.
are described in, and comply with, international regulations applicable in the scope of this Act; or
 
2.
have been granted type-approval pursuant to Article 20 of Directive 2007/46/EC of the European Parliament and of the Council of 5 September 2007 establishing a framework for the approval of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles (Framework Directive) (OJ L 263, 9.10.2007).
 
(4)
The driver is also whoever activates a highly or fully automated driving function referred to in paragraph 2 and uses it for vehicle control, even if he does not control the vehicle himself within the intended use of this function.
 
 
30
See Greger (2018), p. 1.
 
31
Channon (2016), p. 33. Regarding EU law, he underlines that: “It is submitted that an overall EU wide approach is needed for autonomous vehicles and this should be considered as soon as possible. The Motor Insurance Directives have sought to remove any barriers to trade by harmonizing key aspects of the law of Motor Insurance to protect free movement. Differing laws on autonomous insurance and liability will almost certainly constitute a significant barrier to movement as Member States will almost certainly introduce differing laws and regulations and will almost certainly answer the above questions in relation to liability in different ways”.
See also Merkin et al. (2017).
 
32
See par. 3. In this regard, the last chapter of the AI HLEG's guidelines (https://ec.europa.eu/digital-single-market/en/news/draft-ethics-guidelines-trustworthy-ai) is useful; it is designed to operationalise the implementation and assessment of the requirements of Trustworthy AI set out above throughout the different stages of AI development and use. The assessment should be circular, “where the assessment is continuous and no step is conclusive (cfr. Figure 3 above). It will include specific metrics, and for each metric key questions and actions to assure Trustworthy AI will be identified. These metrics are subsequently used to conduct an evaluation in every step of the AI process: from the data gathering, the initial design phase, throughout its development and the training or implementation of the AI system, to its deployment and usage in practice. This is however not a strict, delineated and execute-once-only process: continuous testing, validation, evaluation and justification is needed to improve and (re-)build the AI system according to the assessment”.
With regard to the “method of building the algorithmic system:
  • In case of a rule-based AI system, the method of programming the AI system should be clarified (i.e. how they build their model)
  • In case of a learning-based AI system, the method of training the algorithm should be clarified. This requires information on the data used for this purpose, including: how the data used was gathered; how the data used was selected (for example if any inclusion or exclusion criteria applied); and was personal data used as an input to train the algorithm? Please specify what types of personal data were used.
Method of testing the algorithmic system:
  • In case of a rule-based AI system, the scenario-selection or test cases used in order to test and validate their system should be provided
  • In case of a learning based model, information about the data used to test the system should be provided, including: how the data used was gathered; how the data used was selected; and was personal data used as an input to train the algorithm? Please specify what types of personal data were used.
Outcomes of the algorithmic system
  • The outcome(s) of or decision(s) taken by the algorithm should be provided”.
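The quoted checklist is prose; one hypothetical way to encode it for tooling is a nested mapping keyed by phase and system type. The keys and question wording below are paraphrases and assumptions, not the official AI HLEG schema.

```python
# Hypothetical encoding of the AI HLEG transparency questions quoted above.
# Keys and wording are illustrative paraphrases, not the official schema.
assessment = {
    "building": {
        "rule_based": ["How was the system's model programmed?"],
        "learning_based": [
            "How was the training data gathered?",
            "How was the training data selected (inclusion/exclusion criteria)?",
            "Was personal data used to train the algorithm, and of what types?",
        ],
    },
    "testing": {
        "rule_based": ["Which scenarios or test cases validated the system?"],
        "learning_based": [
            "How was the test data gathered and selected?",
            "Was personal data used, and of what types?",
        ],
    },
    "outcomes": ["Which outcomes or decisions did the algorithm produce?"],
}

# Example: collect every question relevant to a learning-based system.
questions = (assessment["building"]["learning_based"]
             + assessment["testing"]["learning_based"]
             + assessment["outcomes"])
print(len(questions))  # 6
```

A structure like this would fit the guidelines' point that the assessment is continuous: the same questions can be re-evaluated at every stage of development and deployment.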
 
Literature
Bauman Z (2006) Liquid times: living in an age of uncertainty. Polity, Cambridge
Beauchamp TL, Childress JF (2001) Principles of biomedical ethics, 5th edn. Oxford University Press, Oxford
Billings CE (1997) Aviation automation: the search for a human-centered approach. Lawrence Erlbaum Associates Publishers, Mahwah
Bishop CM (2006) Pattern recognition and machine learning. Springer, Berlin
Blaschczok A (1998) Gefährdungshaftung und Risikozuweisung. Heymanns, Cologne
Borges G (2018) Rechtliche Rahmenbedingungen für autonome Systeme. NJW 71(14):977 ff
Channon M (2016) Autonomous vehicles and legal effects: some considerations on liability issues. DIMAF 1:33
Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People — an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds Mach 28(4):689–707
Greger R (2018) Haftungsfragen beim automatisierten Fahren. Zum Arbeitskreis II des Verkehrsgerichtstags. NZV 33:1
Jansen N (2003) Die Struktur des Haftungsrechts. Mohr Siebeck, Heidelberg
Koza JR, Bennett FH, Andre D, Keane MA (1996) Automated design of both the topology and sizing of analog electrical circuits using genetic programming. In: Artificial Intelligence in Design '96. Springer, Berlin, pp 151–170
Mitchell T (1997) Machine learning. McGraw Hill, New York, p 2
Murphy KP (2012) Machine learning: a probabilistic perspective. The MIT Press, Cambridge
Naylor M (2017) Insurance transformed: technological disruption. Palgrave, Basingstoke, pp 175–185
Nof SY (2009) Automation: what it means to us around the world. In: Nof SY (ed) Springer handbook of automation. Springer, Berlin, pp 13–52
Samuel AL (1959) Some studies in machine learning using the game of checkers. IBM J Res Dev 44:1.2
Simon HA (1979) Models of thought. Yale University Press, New Haven
Smith BW (2013) SAE levels of driving automation
Teubner G (2018) Digitale Rechtssubjekte? Zum privatrechtlichen Status autonomer Softwareagenten. AcP 218:155–205
Teubner G (2019) In: Femia P (ed) Soggetti giuridici digitali? Sullo status privatistico degli agenti software autonomi. Edizioni Scientifiche Italiane, Naples
Weinrib EJ (1987) Causation and wrongdoing. Chicago-Kent Law Rev 63:407 ff
DOI: https://doi.org/10.1007/978-3-030-27386-6_14