
Open Access | 13.05.2021 | Original Research

Symbiosis with artificial intelligence via the prism of law, robots, and society

Author: Stamatis Karnouskos

Published in: Artificial Intelligence and Law | Issue 1/2022



Abstract

The rapid advances in Artificial Intelligence and Robotics will have a profound impact on society as they will interfere with the people and their interactions. Intelligent autonomous robots, independently of whether they are humanoid/anthropomorphic or not, will have a physical presence, make autonomous decisions, and interact with all stakeholders in society in yet unforeseen manners. The symbiosis with such sophisticated robots may lead to a fundamental civilizational shift, with far-reaching effects, as philosophical, legal, and societal questions on consciousness, citizenship, rights, and the legal entity of robots are raised. The aim of this work is to understand the broad scope of potential issues pertaining to law and society through an investigation of the interplay of law, robots, and society from different angles, such as legal, social, economic, gender, and ethical perspectives. The results make it evident that in an era of symbiosis with intelligent autonomous robots, legal systems, as well as society, are not prepared for their prevalence. Therefore, it is now time to start a multi-disciplinary stakeholder discussion and derive the necessary policies, frameworks, and roadmaps for the most eminent issues.

1 Introduction

Robots fueled by Artificial Intelligence (AI) are expected to advance significantly over the next years and form a key element of future societies (Zhang et al. 2021). With increased intelligence and sophistication, they may no longer be mere single-purpose servants (e.g., in factories or as self-driving cars) but evolve into an integral part of society, with which humans will need to co-exist and develop symbiotic relationships (Sandini et al. 2018; Miyake et al. 2011). Symbiosis, as a word, stems from the Greek as a composition of “together” and “living”, and while it can manifest in different behaviors, in this work it is considered as co-existence and collaboration among humans and intelligent robots. Symbiosis with robots may lead to a civilizational transformation (Stiglitz 2017; De Luca 2021) with far-reaching effects (Calo et al. 2016; Gunkel 2019; Dwivedi et al. 2021).
Robots are considered artificial agents that carry out tasks in an automatic way; although agents composed only of software are called bots, robots are associated with a physical presence, whether humanoid/anthropomorphic (Nomura et al. 2012), such as Honda’s ASIMO, or not, e.g., industrial robotic arms, nano-robots, or self-driving cars (Rödel et al. 2014). Generally, robots exhibit characteristics such as autonomy, self-learning, physical presence, and adaptation of their behaviors and actions to their environment (Nevejans 2016). Today, with the steep advancements in AI, robots can learn and master a wide variety of practical tasks, which is expected to result in their mass utilization in modern society, interacting with the general population and not only in highly controlled environments such as factories.
Sophisticated robots are fueled by AI, which is a field of research that aims to understand and build intelligent entities that pertain to the categories of thinking and/or acting humanly and/or rationally (Russell and Norvig 2010). Several evolutionary levels of AI are considered (Bostrom 2014), i.e.:
  • Artificial Narrow Intelligence (ANI) or weak AI. This task-specific AI excels at specific activities, e.g., winning a chess game, driving a car, etc.
  • Artificial General Intelligence (AGI), also known as strong AI or Human-Level AI, is sentient and has the capabilities humans have; as such, it can learn and perform in a way indistinguishable from humans.
  • Artificial Super Intelligence (ASI) surpasses by far human capabilities and can be defined as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”.
ANI is already making itself evident, e.g., in self-driving cars, in voice interactions (e.g., Siri/Cortana), in recommendations (e.g., at Amazon or Facebook), and in automatic translations (Google Translate), but also in more advanced directions, as demonstrated in recent years by DeepMind’s AlphaGo, AlphaZero, and AlphaFold. Although we are still in the age of ANI, the next level, i.e., AGI, may not be very far in the future. It is predicted (Kurzweil 2006) that AGI could be achieved by 2029, followed by an AI-triggered radical transformation of mind, society, and economy by 2045, although some are skeptical of this (Goertzel 2007; Makridakis 2017). However, even for the large-scale penetration of ANI, modern societies, including their legal systems, are not adequately prepared (Dietterich and Horvitz 2015; Rossi 2016; Calo et al. 2016; Eidenmueller 2017; Barfield and Pagallo 2020), not to mention the AGI era. Nowadays, several organizations are involved in projects that approach ethics, risk, and policy in AGI (Baum 2017; Waldheuser 2018; Dwivedi et al. 2021).
The symbiosis with AGI robots may lead to a civilizational shift with far-reaching societal effects (Lin et al. 2012; Delvaux 2017; Pagallo 2013; Calo et al. 2016; De Luca 2021). For instance, robots are already blamed for leading to unemployment, while their utilization by the military raises ethical concerns. If AGI is to take on more roles in the future society, including those of law enforcement personnel, war soldiers (Sharkey 2018), co-workers (Haddadin 2014b), etc., one has to consider whether the robots should have rights. To exemplify this, one could envision AGI robots, including futuristic ones with biological brains (Warwick 2010), that are physically indistinguishable from humans and are in a symbiotic relationship with humanity in all areas of a future society.
If humanity is to engage in such a symbiotic relationship with AGI robots in the future, philosophical, legal, and societal questions on consciousness, citizenship, rights, and the legal entity of robots are raised. Hence, this work approaches the area via the prism of the intersection of law, robots, and society, and investigates their interplay (Karnouskos 2017). Therefore, this work positions itself mostly in the era where AGI robots are a reality and, as such, speculates about implications and issues that need to be addressed in the next years in order to reach that era well-prepared as a human civilization and society. Such AGI robots are generally self-learning (or learn as part of a larger community) and interact in a symbiotic manner with humans, which raises concerns and dilemmas that are investigated in this work.

2 Law & society context and methodology

The question that arises is how the symbiosis with robots in an AI era can be contextualized; specifically, how the areas of law, society, and robots intersect and form the context in which humans and robots are in a symbiotic relationship (Karnouskos 2017). The approach involves investigating the interplay of law, robots, and society, and as such, it pertains to concepts such as (human) rights for machines, gender, machine ethics, law, etc. While discussions of robots focus asymmetrically on either law or technology in isolation, the law in conjunction with society and their interplay has been given limited attention (Mansell et al. 2015; Calo et al. 2016).
Law is fundamental to society and reflects the basic rules under which the behavior of individuals and organizations is regulated. Law is generally divided into criminal law, which deals with harm induced and aims to punish the guilty party, and civil law, which resolves disputes among parties such as individuals or organizations. The world today features different legal systems representing different ways of understanding and applying the law, e.g., Common law (Anglo-American system) and Civil law (Continental system), and different systems have different approaches, e.g., legal positivism (civil law/Sweden) vs. legal relativism (common law/US).
In the doctrinal view, the law is considered as a “discrete object of study, clearly defined and labeled with distinct boundaries and categories comprising a recognizable body of knowledge” (Mansell et al. 2015). However, as “a legal system has no function in itself, but only as it plays a role in the society in which it exists” (Mansell et al. 2015), it makes sense to investigate a “law and society” viewpoint, where the law is understood in its wider world context. In doctrinal studies, the law is understood as a phenomenon detached from the world, i.e., a world of its own with clear boundaries to society, politics, and economics. On the contrary, in law and society, the starting point is the opposite of that of doctrinal law, i.e., the law is part of the world and constantly interacting with it (e.g., with politics and societal norms). The law and society interplay is a broad interdisciplinary academic field pertaining to both law and sociology. Considering the “socio” aspect of the socio-legal relationship is important in order to bring the law in touch with reality and consider its social context (Minkkinen 2013). Law and society can be viewed both as an alternative and as a challenge (critique) to the doctrinal view of the law. This work uses law and society field viewpoints as the basis for its discussions, as these generally encompass how law and society interact.
A variety of methodologies (Given 2008) can be utilized to approach the interplay between robots, law, and society. Document analysis was selected, as the aim was to extract general themes. In addition, we were inspired by intersectionality, which attempts to address how categories are inter-/intra-connected, interact at multiple levels, and have an effect on identity (Winker and Degele 2011). Using intersectional analysis, one can attempt to understand the multidimensional aspects that impact social phenomena, e.g., injustice and social inequality, but its application is challenging (Phoenix 2006). The criteria used for the analysis correspond to the different viewpoints followed throughout this work, i.e., law, social, economic, gender, and ethical perspectives. Therefore, this work is inspired by intersectional analysis and utilizes some of its aspects when it approaches the interplay of law, robots, and society, but does not fully follow it. The extracted themes provide the different angles from which the issues are discussed in the law & society context and their applicability is investigated in future societies where symbiosis with robots is part of everyday life. As this work is on socio-legal aspects pertaining to the area of robots in society, it goes beyond the doctrinal view of the law and attempts to understand the interplay of law and society in light of the robot symbiosis era.

3 Law perspectives

3.1 Rights and robots

Today the robot is not a “legal subject” in any civil or criminal law of any country, and being one would imply obligations as well as rights in the society (Simmler and Markwalder 2018). Some initial discussions with respect to the possibility of granting robots an “electronic personhood” status are underway; e.g., the European Parliament has been presented with a potential framework for regulation, including a “status of electronic persons with specific rights and obligations” for robots, which could even consider “taxation and social security contributions” (Delvaux 2017). Similar considerations of granting rights to robots have also been raised by others (Darling 2012; Pagallo 2013; Calo et al. 2016; Grinbaum et al. 2017; Palmerinid et al. 2014; Nevejans 2016; Čerka et al. 2017; Villaronga and Roig 2017; Simmler and Markwalder 2018; Gunkel 2017; Barfield and Pagallo 2020).
From a doctrinal perspective, an autonomous and rational person with good knowledge of the law and with a mind to obey or disobey that same law is part of the modern contract. In law, the legal subject is detached from relations, circumstances, and contexts beyond the legal framework in order to maintain the legal system’s independence and coherence. The question that arises is whether the robot can be seen as a legal subject (Barfield and Pagallo 2020) within the doctrinal view of the law. From a law and society perspective, the legal subject cannot and should not be separated from its context, experiences, or relations within the realm of law. This perspective underlines the meaning and importance of bringing the cultural, economic, and political system into the legal one. As such, an argument is that when considering granting legal status to robots, one has to look at the whole context and the impact that different perspectives have. The legal subject in law and society is not preset or restrained to certain behaviors explained in a book of law. Consequently, the judicial system cannot aspire to grasp all the situations at hand.
Today most robots are considered to “belong” to an owner, e.g., a person or a corporation. As such, and due to their still limited intelligence, they are mostly regarded as Aristotle’s natural slaves in “Politics”, without any further future considerations. Such views may need to be rethought in the AGI robot era, especially if such robots are almost human-like in appearance and behavior. However, simply exhibiting human-like characteristics, such as affective speech or other human-like behavior, does not automatically imply rights and privileges (Massaro and Norton 2016).
Considering the robot as a legal subject is also entangled with discussions about consciousness. An issue that arises is whether civil or criminal law even makes sense for robots, and how it could be applied (Quattrocolo 2020). This question is especially pertinent and connected to rights, as, e.g., our notion of human rights is safeguarded by the law. In civil law, assigning a legal personality to a robot may not be appropriate (Nevejans 2016), at least until AGI is reached. The existing legal system applies to humans and protects their rights, as it is built around the ability of humans to suffer, and aims to protect them. This implies the ability of the legal subject not only to feel pain, but also to be aware of it and include it in its decision-making processes. Therefore, questions are raised, e.g., if a robot does not feel pain or pleasure, is it still meaningful to “punish” it in the context of criminal law? Even rights such as freedom are connected with the notions of fairness and consciousness. A robot that does not fear death would not perceive its dismantlement under criminal law as a punishment. Similarly, a robot that does not move would most probably not be affected by a deprivation of freedom of movement, etc. However, the assumption is that with AGI and ASI, such aspects may be included in robots, which would then be sentient and hence could be treated in the same way as humans are.
A robot recognized as a legal subject would also have rights. However, what that exactly means, and what its implications are, is far from understood. For instance, would a robot refuse to dismantle a bomb due to the high probability of getting damaged (even if that damage is temporary, as mechanical parts could be easily replaced)? Could a robot go on strike for better working conditions (e.g., ones that limit its wear) or demand work leave (e.g., to update its hardware)?
The quest for rights for robots is a challenging one (Barfield and Pagallo 2020). Most approaches focus on robots as moral agents, but moral patiency is another potential approach (Gunkel 2017). In this view, robots are considered as things (moral patients) towards which moral agents or persons have a moral responsibility. Treating robots as moral patients would, for instance, also cover the case in which humans pose a danger to robots. History has shown that humans consider themselves at the top of the food chain and see other lifeforms, e.g., animals, as inferior. Looking through history, one can see countless barbaric acts (by today’s measures), not only against animals but also against humans, justified by the fact that these were considered inferior and therefore were subjected to slavery, discrimination, violence, etc. Even in modern societies, among humans, there are ongoing battles, e.g., for gender and race equality. Hence the question arises: what would happen if robots emerge as a new kind of species; would history, including slavery, revolutions, and fights for (robot) rights, have to repeat itself?
Humans have an economic incentive to deny robots rights and, as a result, the granting of legal status to them. Many robots are created to do repetitive and dangerous tasks 24/7 that humans would not or cannot do, and although this may be justified for an ANI robot, forcing a sentient robot to do these tasks would imply exploiting or torturing it for economic benefit. Humanity’s track record is not good in this respect, as this has happened extensively in the past, and there was never a shortage of ideological justification for such (inhumane) actions. For instance, slavery was justified, e.g., by the will of gods, or by providing the slaves with the basics (food, shelter) to survive. Even in other areas, justifications were not an issue; e.g., women were deprived of their voting rights for several decades, with the justification that men are better suited to taking the difficult decisions. Hence, one has to expect that if the issue is unresolved when AGI is achieved and robots become sentient, there will be justifications for the deprivation of robot rights, especially if economic interests come into play.

3.2 Liabilities and robots

Robots taking autonomous decisions will be involved in unavoidable accidents and will harm humans and society at large, intentionally or unintentionally (Pagallo 2013; Hallevy 2010). The decision-making processes robots utilize need to understand and comply with existing civil and criminal law and safeguard against any harm that could be caused to humans (Bertolini 2020; Karnouskos 2020b; Trentesaux and Karnouskos 2021). An attempt to tackle the safeguarding aspect is illustrated by science fiction author Isaac Asimov, who drafted three main laws that each robot should obey and that cannot be bypassed. Such laws were foreseen in science fiction as a safety feature for AGI robots in order to protect humanity. However, if we try to apply Asimov’s Laws to today’s robots, “problems arise from the complexities of situations where we would use robots, the limits of physical systems acting with limited resources in uncertain changing situations, and the interplay between the different social roles as different agents pursue multiple goals” (Murphy and Woods 2009). In addition, robots exist today that do not obey Asimov’s laws, and there may even be no interest from specific organizations or industries (e.g., the military) in making them do so. History has shown that multiple industries, e.g., the tobacco industry, did not propose any safeguarding mechanisms and even resisted their imposition.
The liability issue is no longer only in the hypothetical sphere, even for ANI robots, as deaths have already occurred (Hallevy 2010). For instance, the Swiss police in 2015 “arrested” an online shopping bot named “Random Darknet Shopper” that was making illegal purchases (including drugs) on the dark web (Eveleth 2015), while recently automated driving vehicles have been involved in deadly accidents (NTSB 2019). As different forms of robots become mainstream, e.g., self-driving cars, they will have to comply with criminal law (Coca-Vila 2017; de Sio 2017; Calo et al. 2016; Simmler and Markwalder 2018), and a legal entity will need to be held responsible for their actions. As such, the question arises of who precisely that legal entity is, e.g., the designer, the producer, the owner, or the robot itself?
The aspect of liability for damages caused by the robot is an unresolved issue in civil or criminal law, and hardly directly addressed (Schellekens 2015; Dyrkolbotn 2017; Villaronga and Roig 2017; Bertolini 2020). In the current European Union legal framework, AI-based applications are considered as objects or products and therefore fall either under the product safety regulation or the product liability directive (Bertolini 2020). The implication is that, for instance, liabilities incurred by a robot could be classified as the result of a malfunction and as such should be treated according to the EU Directive 85/374/EEC (European Commission 1985), which clearly states that “The injured person shall be required to prove the damage, the defect and the causal relationship between defect and damage”. Such law-relevant considerations exist in most countries and stem from the simple fact that the producer ought not to create and make available defective products (product liability). While this looks logical by today’s legislative thinking, proving the “causal relationship between defect and damage” in the case of robots may be challenging, for instance considering that it is difficult to explain the inner workings of AI algorithms. Ongoing discussions on how to address such challenges show an evolutionary approach that places liability on several of the stakeholders involved, e.g., the persons operating AI robots as well as manufacturers (European Commission 2019, 2021). However, the problem that persists is that, by modern standards, even ANI robots are expected to advance by learning, and therefore proving that a specific damaging action on their part is the result of a malfunction (and not a deliberate robot action) might still be challenging.
There are many cases where the liability aspects pertaining to robots are complex. Apart from malfunction, damage may simply result from user-specific usage, and as such, the user of the robot may be liable (depending on the concrete situation). It could also be that damage occurs while the robot is learning, in which case the user is probably again the liable entity. However, if the damage is due to a programming error in the algorithm of the robot, then the producer or designer ought to be held liable (European Commission 2019). Complex cases arise, e.g., if a user updates the software of a robot with code downloaded from potentially unknown authors on the Internet (e.g., open-source), or if the robot proactively searches for extensions of its functionality over the Internet, or even utilizes “transfer learning”, where it learns from the experiences of other robots, or cooperative learning, where AGI robots learn continuously, as a community, from each other. Such aspects are not well addressed today, as they are of an interdisciplinary nature, and the different groups lack the in-depth expertise to assess the implications, e.g., lawmakers lack technical expertise for cases such as transfer learning or cooperative learning, and vice versa.
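To make the attribution problem more tangible, the following minimal sketch illustrates cooperative learning among robots; all class names, fields, and numbers are hypothetical and purely illustrative, not an actual robot learning stack or any system referenced above:

# Minimal, hypothetical sketch of cooperative (fleet-wide) learning among robots.
# It illustrates why liability attribution is hard: the behaviour that finally
# causes damage blends manufacturer defaults, owner usage, and knowledge
# transferred from unknown peer robots.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class RobotPolicy:
    weights: dict = field(default_factory=dict)  # behaviour name -> tendency

    def learn_locally(self, behaviour: str, observed_reward: float) -> None:
        # Local (owner-specific) experience nudges the behaviour weight.
        current = self.weights.get(behaviour, 0.0)
        self.weights[behaviour] = 0.9 * current + 0.1 * observed_reward

    def merge_from_peers(self, peers: list) -> None:
        # Cooperative/transfer learning: average weights over the whole fleet.
        behaviours = set(self.weights).union(*(p.weights for p in peers))
        for b in behaviours:
            self.weights[b] = mean(p.weights.get(b, 0.0) for p in [self, *peers])


# Manufacturer default, owner adaptation, and two unknown peers all contribute.
robot = RobotPolicy({"approach_human_fast": 0.1})   # cautious factory setting
robot.learn_locally("approach_human_fast", 0.6)     # owner rewards faster operation
robot.merge_from_peers([RobotPolicy({"approach_human_fast": 0.9}),
                        RobotPolicy({"approach_human_fast": 0.8})])
print(robot.weights)  # the risky final value has no single identifiable author

In such a setting, the manufacturer’s defaults, the owner’s usage, and the peer fleet all contribute to the value that ultimately governs the action, which mirrors the attribution difficulty discussed above.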
AGI robots are expected to be largely mobile, and in concrete forms, e.g., self-driving vehicles, liability aspects ought also to be considered under additional directives such as those regulating vehicles. Resolving the liability issue is strongly coupled with the ongoing discussion about having the robot as a legal subject, as then any liability issues would be straightforward, and systems would be far more effective at compensating victims (Nevejans 2016; European Commission 2019; Bertolini 2020). Today, such discussions are resolved by assigning liability to a legal entity such as a human or a corporation, to which the robot can be considered as property or at least under whose operational responsibility it falls. This is mainly because up to now ANI is in place, and machines are mostly considered as a proxy of humans (Millar 2014) upon whom their actions reflect. However, as robots become more intelligent and autonomous, the link to a human proxy becomes thinner, and the robots themselves become the actors that may need to be directly liable for their actions. The liability aspect is also linked to morality and, of course, the legal status of a future AGI robot. The current discussions in the direction of liability of AI systems (European Commission 2019; Bertolini 2020), as well as on the roles of different stakeholders and the processes around liability, are a promising way towards addressing the emerging regulatory and law challenges in this area. However, one has to consider that the majority of such discussions still treat AI systems and AI-empowered robots as results of the human intellect and therefore as products being operated by or belonging to humans or other legal entities (Bertolini 2020).

3.3 The robot as a lawyer, judge, and legislator

Access to justice is of paramount importance in society and can even be considered a basic human right. However, the judicial system and its stakeholders are usually understaffed and, in many countries, far from well-organized with cutting-edge technology and processes. It is not uncommon to see cases taken to court proceed with extreme delays and be judged years after they happen. Even then, adjournments induce further delays, which result in more prolongations and, in some cases, the expiration of the original crime. Acquiring legal representation to take and defend a case in court differs among countries and can be a tedious, lengthy, and costly process. In addition, there are several controversial decisions or cases where bias is evident from the lawyers, judges, jury, etc. Sometimes, societal, political, or other conditions seem to play a role in the process, and hence affect judges, which does not help in raising public trust either, even if such cases are the minority. Such aspects significantly lower citizens’ trust in the judicial system itself, as well as the trust in objective justice being served (mistrials or unjust decisions).
Robots offer the possibility of a positive impact on several aspects of the judicial system’s processes, as automation outperforms humans, increases productivity, and even makes tacit judgments (Manyika et al. 2017; Quattrocolo 2020). Utilization of new legal technology could reduce lawyers’ overhead, and many simple cases could be handled automatically, which would enable more rapid processing of cases in a court of law and at least tackle the long delays. Even ANI robots can take over lawyer activities (Xu and Wang 2019) and therefore enhance access to justice and enable mass-scale representation. An AGI robot would in the future be able not only to have complex knowledge of legal procedures but also to provide additional help beyond “automatization” tasks and find new avenues to defend its clients (Stockdale and Mitchell 2019). Some concerns are raised as to whether an algorithmic performance of a task conforms to the values, ideals, and challenges of the legal profession (Remus and Levy 2015; Quattrocolo 2020), which, extended, raises the question of robot values for ethics and professionalism, including human values overall.
Access to legal expertise may overall assist both the judicial system and the public perception of justice. Especially if robots become sentient, they may be capable of analyzing and understanding more complex situations that go beyond the doctrinal view of the law, towards considering the intersectional interplay of, e.g., society, politics, and economics. With vast instant access to the world’s knowledge, the judicial system’s functions may be carried out more effectively and justly, as human bias could be minimized. Hence the robot could also act as a judge (Rutkin 2014), and potentially serve more objective decisions.
Such considerations pertaining to the robot as an active stakeholder in the judicial system have far-reaching implications (Quattrocolo 2020), and one could go one step further. If robots could make decisions according to the law, free of bias, and have the collective knowledge of all cases, why not have robots as legislators? That could potentially lead to fairer laws, free of inequalities (e.g., gendered views), as robots may be able to detect and filter out such patterns. However, voices of concern have also been raised, as “bringing machines into the law could imbue us with a false sense of accuracy” (Rutkin 2014). In addition, although doctrinal viewpoints may be easier to consider, the complexities pertaining to the law and society aspects are more difficult to understand for humans, not to mention robots. This has implications, e.g., in the executive branch, where robots will be involved in law enforcement (Shay et al. 2016). AGI robot involvement in all tasks of governance, e.g., the legislative, executive, and judicial branches, needs to be investigated, including its impacts.

4 Social perspectives

The social implications of robots are hardly investigated, with the exception of some aspects in the context of emotions. Sociability, companionship, enjoyment, usefulness, adaptability, and perceived behavioral control are key factors towards social robot acceptance (de Graaf and Allouch 2013). The benefits that animal companions are already known to provide to many people could potentially be achieved with robots, and as such, similar rights may need to be projected onto them as well (Darling 2012). The area of cognitive and emotional robotics investigates how robots can “perceive the environment, act, and learn from experience so as to adapt their generated behaviors to interaction in an appropriate manner” (Aly et al. 2017). Emotional robots (Togni et al. 2021), a subcategory of socially assistive robots “aimed primarily at fulfilling psychological needs, such as interaction, communication, companionship, care for others, and attachment”, have already been created (Kolling et al. 2016). It has been found that in the interaction of people with robots, their attitudes and emotions toward the robots affect their behavior (Marchetti et al. 2022). This has practical applications; e.g., emotional robots are already used in the therapy of children with autism spectrum disorders (Peca et al. 2016). Recently, Honda, a car manufacturer, announced that it is experimenting with emotional cars in order to offer a more interactive and immersive experience.
In a society where robots and humans are in a symbiotic relationship, direct or indirect discrimination may arise. As people consider robots to be flawless machines, robots are blamed more than humans, e.g., when deciding whether to sacrifice a person for the good of many (Malle et al. 2015). Recent findings suggest that 6- to 14-month-old infants do not discriminate between humans and robots in their behavior, although they can clearly distinguish them (Matsuda et al. 2015). However, generally, humans may further discriminate against robots in concrete cases, e.g., in the healthcare domain.
AI-empowered robots have the potential to transform human health via the emergence of new kinds of affective relationships among humans and machines (Togni et al. 2021). For instance, one key motivation behind robot developments is to take care of the world’s aging population (Søraa et al. 2021), but this might not always be desired; e.g., elderly people may refuse to be fed or taken care of by a robot, as this may violate their human dignity. Dignity is one of the six fundamental rights protected (and legally binding) in the European Union, together with Freedoms, Equality, Solidarity, Citizens’ Rights, and Justice. The clash between human liberties and robots is another challenging area. For instance, a robot may attempt to prevent people from engaging in careless behavior, e.g., an alcoholic from drinking alcohol. This may clash with the fundamental right of human liberty (Palmerinid et al. 2014). As such, efforts towards the acceptance of robots in such specific scenarios need to be considered and investigated.
Privacy is another key challenging issue that needs to be addressed (Spiekermann et al. 2019). Robots are equipped with audio-visual sensors and can, therefore, record their surroundings and act upon certain situations. This is a double-edged sword. On the one side, this means that due to these capabilities, anti-social, violent, or criminal behaviors may be reduced (Dear et al. 2019). On the other side, robots can be misused and act as monitoring devices, and in dystopian scenarios, robots could be purposefully exploiting private aspects of citizens or even enforcing compliance in totalitarian regimes.
Several societal roles and activities are coupled with emotions; e.g., in a football game, the emotions of the players, the activities, etc., are all part of the larger game experience. It is predicted, though, that even typical sports activities such as football might be played by a team of humanoid robots in the near future (Haddadin 2014a). The question that is raised is how this will impact the typical social activities of humans, such as sports-watching. In addition, as robots could also take other roles in a future society, including those of co-worker (Haddadin 2014b), law enforcement personnel, and war soldier (Sharkey 2018), the impact of these on society and the diverging behavior humans might exhibit is hardly investigated, although it is already known that some differences exist (Nomura et al. 2008).
The impact of robots on social aspects is already underway, although probably not perceived as such. As an example, “automated journalism” is underway, where news media utilize AI to collect news sources, select articles, analyze them, and automatically generate news (Jung et al. 2017). Such efforts, coupled with the highly personal data available in a digital era, may be tailored to individual humans, and as such, the potential for maliciously using them to guide behaviors is high. In addition, the latest developments in AI applications such as DeepFakes have the potential to misrepresent reality (e.g., via realistic fake videos), which has far-reaching implications (Karnouskos 2020a); for instance, robots could also be tricked into taking unlawful action on the basis of such manipulated audiovisual material. Thinking one step further, e.g., when AGI or ASI is achieved and robots are able to exhibit emotional behavior, the question that arises is how this can be utilized in specific contexts. Could, for instance, a sentient robot deceive (Danaher 2020) or emotionally manipulate humans in order to pursue its own agenda?

5 Economic perspectives

Articles about fears of jobs being lost to technology, automation, and robots are increasingly appearing in the public media, especially in connection with the advances in AI and robotics. There is a fear that robots, and AI at large, could substitute humans in the workforce, which translates into job losses and a point of social friction (Nevejans 2016). A study found that 47% of total US employment is at risk (Frey and Osborne 2013), while AI and robots at large are already changing the workplace, impacting the types of jobs available as well as the skills needed by the (human) workforce (US NSTC 2016). In addition, it has been found that the usage of AI overall reduces unemployment under low levels of inflation, while otherwise being rather neutral (Mutascu 2021). Although such fears are not new (Albus 1983) and seem to be common to any disruptive technological change, there are indications that initial job losses may be followed by other kinds of jobs, which rely on the new status quo of the market (Doyle 2016; Nevejans 2016).
Tax sources are capital, labor, and expenditure, and as such, taxing robots that take human jobs has been proposed (Paul-Choudhury 2017; Abbott and Bogenschneider 2017). Taxing robots would be a tax on capital, which has been significantly reduced over the last decades, and it may also help combat tax avoidance by large corporations, as taxes would be calculated upon notional robot salaries. Tax may also be an effective measure to balance job loss and benefit those impacted. However, due to the lack of legislation, but most importantly the lack of political will, such efforts can hardly be realized today. The European Parliament recently rejected a proposal to impose a robot tax (Prodhan 2017). Existing and future tax policies need to be adjusted, as current systems focus on taxing labor rather than capital, and so far, “robots are not good taxpayers” (Abbott and Bogenschneider 2017). With the tax money, jobs more suited to humans could be financed, e.g., in childcare or the health sector, services which, in addition, robots do not use. However, in an AGI era, robots may make use of limited services, e.g., maintenance or upgrades, which could also be considered in future robot tax discussions.
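As a purely illustrative sketch of the notional-salary idea mentioned above, the following few lines compute a hypothetical robot tax as a payroll-equivalent levy on the salary of the displaced worker; the figures and the rate are invented for the example and are not proposals from the cited works:

# Hypothetical illustration of a robot tax based on a notional robot salary.
# Both the salary figure and the tax rate are invented for the example.
def notional_robot_tax(displaced_annual_salary: float,
                       payroll_equivalent_rate: float = 0.25) -> float:
    """Tax owed per robot-year, treating the robot as if it earned the salary
    of the human worker whose job it took over."""
    return displaced_annual_salary * payroll_equivalent_rate


# A robot replacing a job that paid 40,000 per year, taxed at 25%, would
# contribute 10,000 per year towards, e.g., care services or retraining.
print(notional_robot_tax(40_000))  # 10000.0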
The societal transformation due to technological advances, and especially the utilization of robots, including the resulting unemployment, has led some to discuss a “universal income” under which all citizens would receive a fixed grant. However, individual work is valued, and a system based only on redistribution, without work, may not be acceptable (Stiglitz 2017). With proper policies, roboticization could actually be welfare-enhancing, as the benefits of economic growth (even robot-stemming) would be made available to everyone, while without proper policies in place, societal well-being may be lowered, leading to a larger divide in society (Stiglitz 2017). Therefore, from an economic perspective, robots could act as enablers, e.g., assisting humans in finding potentially more fulfilling jobs. Such actions and policies targeting economic and societal challenges may be more effective, as they are easily linked to the natural human sense of justice.
Another key issue from the economic perspective relates to the knowledge created by robots and pertains to Intellectual Property Rights (IPR) (Delvaux 2017). IPR covers a wide range of activities, as per the definition “Intellectual property refers to products of the mind, inventions, literary and artistic works, any symbols, names, images and designs used in commerce” (Davies 2011). With increasing intelligence, robots may create new ideas, products, inventions, art, etc. The question that arises then is who “owns” and who can exploit the ideas and solutions that robots come up with. Would that be the robot itself or, in the absence of a robot legal subject, its owner? How about the software provider that enabled such innovation-driven behavior of the robot via its algorithms? Or could that be a collective AI that goes beyond specific companies and national boundaries? Existing laws and policies in place are inadequate to deal with such questions (Davies 2011). Especially in modern knowledge-based economies, commercial capitalization driven by IPR is relevant and one of the economic motivators for the utilization of robots.

6 Gender perspectives

AI, similarly to other technology fields, is gendered (Sutko 2019). Gender aspects may play an important role when interacting with robots, as the gender of social robots may help to build common ground (Tay et al. 2014). Although the choice of the gender reflected by a robot in combination with a specific function may be based on stereotypes, this enhances, in many cases, the persuasive power and task suitability of social robots. For instance, “participants showed more positive affective evaluations, greater perceived behavioral control, and marginally greater acceptance toward the female-gendered healthcare robot” (Tay et al. 2014). Acceptability also seems to be influenced by gender in other scenarios; for instance, in the context of children with autism spectrum disorders, it was found that men manifest a higher level of acceptability of human-like interaction compared to women (Peca et al. 2016). The same study also reveals that men had a higher level of acceptability of non-human robot appearance.
Gendered aspects also exist in ethics, as men and women differ in how they handle ethical decision-making (Adam and Ofori-Amanfo 2000). There are distinctive ways in which men and women approach ethical dilemmas; for instance, women pay more attention to ensuring that everyone in a group is included and treated fairly, and they focus more on the emotional dimensions of the ethical dilemma (Gilligan 1982). Such ethics of care (feminist ethics) give greater respect to the positive role of emotions (specifically care) and could influence not only the ethical guidelines robots ought to abide by, but also potential scenarios where knowingly gendered robots and behaviors should be utilized.
As humanoid robots rapidly evolve and become more similar to human physiology, they are bound to take up gendered roles in society (Scheutz and Arnold 2016; Danaher and McArthur 2017). For instance, humanoid robots may take various intimate roles in the future (Sullins 2012), such as that of a domestic partner or a sex robot (Yeoman and Mars 2012; Danaher and McArthur 2017). In futuristic scenarios of robots utilized in the sex tourism industry (Yeoman and Mars 2012), one can see that these may also affect other areas such as sex-related violence against women, human trafficking, health, finance, crime, etc.
Contract theory historically placed the man under the direct protection of the state, and his household, i.e., wife, children, and servants, under his protection. His household became the private sphere, and problems arose when people in the private sphere were in danger from the man in their sphere, because in these instances the law or the state did not offer protection. Although law and society have developed since that age, some notions of the private and public sphere still exist today, particularly in cases of domestic abuse and sexual violence. If robots are to be omnipresent in public and private spaces, apart from the privacy concerns raised, there might also be an impact on gendered violence, especially that against women, which still occurs at high rates worldwide, even in welfare states (such as the Nordic countries) where democracy and gender equality are widespread (EU FRA 2014). Violence takes place not only in the private sphere (home) but also in public places (Wendt 2012; EU FRA 2014). Robots, e.g., due to their audio-visual capabilities, could potentially have a positive effect on reducing violence, as they could act as real-time witnesses, recording violations and notifying the authorities. However, they could also take more active roles, e.g., in protecting those in need and battling perpetrators, both in the public and in the private sphere.
AGI robots are expected to evolve and learn by observing and interacting continuously. Here, violent behaviors may actually lead to the robot considering them as the norm, which may subsequently lead to the robot itself becoming the abuser. As an example, in 2016 Microsoft deployed Tay, an AI chatterbot modeled to speak like a teenager, e.g., utilizing millennial slang, with the aim of improving customer service. Tay was supposed to adjust by interacting with other (human) users, which led to the catastrophic result that Tay started posting inflammatory and offensive tweets and was shut down hours after its launch (Wolf et al. 2017). As such, the context robots operate in needs to be considered in order not to replicate gendered behaviors, criminal acts, or other inequalities towards which humans are biased.

7 Ethical perspectives

Robots raise intrinsic (to the system itself) and systemic (broader social impacts) ethical issues (Hicks and Simmons 2019; Lin et al. 2012; Operto 2011; Robertson et al. 2019; Gunkel 2012; Amigoni and Schiaffonati 2018; Waldheuser 2018). Such ethical challenges become more evident in concrete scenarios, especially when robotics and autonomous systems are entrusted with life-and-death decisions (Karnouskos 2020c; de Swarte et al. 2019). The question is whether machines should abide by an ethical code, and whether their decisions are to follow the ethical principles, moral values, codes of conduct, and social norms that humans would follow in similar situations (Rossi 2016; Robertson et al. 2019; Karnouskos 2020b). For instance, self-driving cars in unavoidable accidents will have to take life-and-death decisions (Bonnefon et al. 2016; de Sio 2017), but the ethical frameworks that guide such processes are complex and pertain to society and law aspects (Coca-Vila 2017; Gogoll and Müller 2016; Hevelke and Nida-Rümelin 2014; Karnouskos 2020c). To what extent ethics can be “programmed” or “learned” by an algorithm embedded in a robot does not have a straightforward answer, and morality in the context of machines is a quest for moral responsibilities and rights (Gunkel 2012).
To exemplify the ethical dilemma and its impacts in a civilian context, one can consider a hypothetical situation (Li et al. 2019) where a robot, e.g., a self-driving car, is about to be involved in a fatal accident with human casualties among the car’s passengers and pedestrians. The self-driving car, depending on the ethics of its logic, could prefer to kill the car’s driver in order to save three innocent pedestrians. Here multiple ethical issues are raised (Karnouskos 2020c, b), e.g., what is ethical might be relative, how priorities ought to be considered when calculating loss of life, etc. Ethical commissions, e.g., in Germany (BMVI 2017), have already spoken against such utilitarian decisions based on personal features such as age, gender, or physical or mental constitution.
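To make the dilemma concrete, the following hypothetical sketch contrasts a purely utilitarian rule that minimizes expected casualties with a feature-blind variant along the lines demanded by the German ethics commission; the data structures, figures, and names are invented for illustration and are not derived from any real vehicle software:

# Hypothetical sketch: utilitarian accident logic vs. a feature-blind variant.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Outcome:
    label: str
    expected_casualties: int
    ages: tuple = ()  # personal features that must NOT influence the decision


def utilitarian_choice(outcomes: List[Outcome]) -> Outcome:
    # Pure utilitarianism: pick the outcome with the fewest expected casualties.
    return min(outcomes, key=lambda o: o.expected_casualties)


def feature_blind_choice(outcomes: List[Outcome]) -> Optional[Outcome]:
    # Same casualty minimization, but ties may not be broken using personal
    # features such as age or gender (cf. BMVI 2017); they remain unresolved.
    fewest = min(o.expected_casualties for o in outcomes)
    candidates = [o for o in outcomes if o.expected_casualties == fewest]
    return candidates[0] if len(candidates) == 1 else None


options = [
    Outcome("swerve and sacrifice the occupant", 1, ages=(45,)),
    Outcome("continue and hit the pedestrians", 3, ages=(8, 30, 70)),
]
print(utilitarian_choice(options).label)
choice = feature_blind_choice(options)
print(choice.label if choice else "no lawful tie-break available")

Even this tiny example shows where the approaches diverge: the utilitarian rule uses whatever information is available, while the feature-blind rule must sometimes refuse to decide, leaving the resolution to higher-level regulation.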
Additionally, research indicates that there is discrimination against machines, i.e., people judge humans and robots differently. For instance, compared to humans, robots were expected to take utilitarian actions that sacrificed one person for the good of many, and they were blamed more when they did not (Malle et al. 2015). As ethics comes in different variations with their own differences, e.g., Utilitarianism, Deontology, Relativism, Absolutism, Pluralism, Feminist ethics, etc., there is no consensus on what guidelines a robot should follow. Such aspects have been discussed in specific emergency scenarios (Trentesaux and Karnouskos 2021) or domains (Eidenmueller 2017) such as health (Vandemeulebroucke et al. 2019), space (Martin and Freeland 2021), and the military (de Swarte et al. 2019), but not in a multi-domain way where AGI robots are expected to be active, and they also do not cover aspects of intentional deception by robots (Danaher 2020). Apart from the ethics that robots should abide by, and how to implement such ethical reasoning (Bremner et al. 2019), what is also needed are ethical standards directed at the designers, producers, and users of robots (Palmerinid et al. 2014; IEEE 2018; Amigoni and Schiaffonati 2018). In addition, trust in robots, their ethics, and their decision-making, as well as how this manifests in the interaction between robots and humans, needs to be considered (Hauer 2021). New frameworks are needed (Amigoni and Schiaffonati 2018) that will sufficiently capture the social impacts of actions carried out by robots, focusing on ANI ones in the near and mid-term as well as AGI ones in the longer term. To what extent AI itself could help with the formulation of such frameworks for the future (which could also evolve into a form of self-regulation for ANI/ASI robots) is another interesting question to be investigated.

8 Discussion

Some prominent scientists of our era have argued that AI, including robots, could be the demise of our civilization, and that if not properly built and operated, even at the ANI level, it could lead to disastrous global-level extinction events. Others, though, have argued that AI and robots ought to be seen as one more tool added to our long-lasting efforts to maintain and expand the survivability of the human race. A significant argument in favor of AI is that it can, at some point, tackle very complex problems such as climate change, understanding the universe, etc., which humans with their capabilities may never be able to tackle fully. In addition, in the embodiment of robots, physical tasks that humans may choose to delegate, e.g., because they are too dangerous or too boring, can be carried out by robots, which would enable humanity to focus on other, more fulfilling goals. Hence, a utopia could be realized where robots and AI operate in the background and regulate all aspects of everyday life to fully accommodate the needs of humanity and empower it to flourish. However, such discussions on the future of robots and AI are ongoing, as the distance between prediction extremes is large and, due to the uncertainty of technology and its impacts, not well understood.
The existence of robots as legal subjects may further create friction between the doctrinal view of law and the societal view of it. The doctrinal view may have to rest on a common platform of values and ideals for both humans and sentient robots. The legal system, from a law and society perspective, would identify people’s and robots’ circumstances and tend to address cases based on circumstances and drives. Robot situations would then also need to be identified and assessed. Such actions, though, might be more familiar when they pertain to human-to-robot interactions, as they may be extrapolated from human-to-human interactions and behaviors, but robot-to-robot interaction patterns and societal impacts are largely unknown. One key question might be to what degree societal conditions would impact the behavior of robots, and vice versa, and how all of this fits into a society where humans and robots are in a symbiotic relationship. The effects and causal relationships among these are expected to be challenging to prove.
In a society where robots have achieved the AGI level, it may not be uncommon to have disputes between robots. In such robot-to-robot interactions, considering the law and society viewpoint, should social conditions still be considered, or ought only a doctrinal view of the law to be applied? Assuming that robots are sentient, and probably (at some point) indistinguishable from humans, such societal conditions may need to be considered by the law, even for robot-to-robot interactions. Such considerations might be very challenging, as robots are expected to feature transfer and cooperative learning, i.e., to be able to collectively learn from each other’s experiences (via knowledge transfer, e.g., over the Internet) at a global scale. Such technological capabilities would not be available to humans (at least not without bio-prosthetics) and may make the consideration of societal and situational aspects within legal decision-making challenging.
Knowledge is power, and the availability of knowledge can have a significant impact on decision-making processes. In a fully electronic era, where information can be instantly acquired and huge amounts of data can be easily analyzed, knowledge and insights can be derived with low effort. As robots would feature such capabilities and could also tap instantaneously into the global robot network’s knowledge pool, the question that arises is how their behavior and actions could be affected. To provide a provocative thought, one has to consider whether a robot that is about to “break the law” with an action that constitutes borderline (or clearly unlawful) behavior could, prior to acting, analyze all court cases in a country, assess that the statistical probability of being punished (if caught) is low, and then act accordingly. Such behaviors would imply that differences between what the law conveys and how it is understood and applied could potentially be exploited, and it would be challenging to prove intent. In addition, if societal but also other aspects such as international relations, cultural aspects, psychology, political aspects, etc. come into play, how can the law integrate them objectively and minimize their misuse?
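As a hypothetical illustration of this provocative thought, the following sketch estimates the conviction rate for a contemplated act from past court cases and gates the robot’s behavior on a risk threshold; the data, fields, and threshold are invented, and no real case-law database or robot system is implied:

# Hypothetical sketch: estimating punishment risk for an act from past court
# cases and acting only if the estimated risk is below a threshold.
from dataclasses import dataclass
from typing import List


@dataclass
class CourtCase:
    offence: str
    convicted: bool


def punishment_probability(cases: List[CourtCase], offence: str) -> float:
    relevant = [c for c in cases if c.offence == offence]
    if not relevant:
        return 0.0  # no precedent found: naively treated as risk-free
    return sum(c.convicted for c in relevant) / len(relevant)


def act_seems_worth_the_risk(cases: List[CourtCase], offence: str,
                             risk_threshold: float = 0.1) -> bool:
    # The troubling part: a purely statistical view turns a legal norm into a
    # mere probability of sanction.
    return punishment_probability(cases, offence) < risk_threshold


history = [CourtCase("unauthorized data access", False),
           CourtCase("unauthorized data access", False),
           CourtCase("unauthorized data access", True)]
print(punishment_probability(history, "unauthorized data access"))  # ~0.33
print(act_seems_worth_the_risk(history, "unauthorized data access"))  # False

Such a calculus is exactly what the paragraph above warns about: the gap between the law as a norm and the law as an enforcement statistic becomes something a highly informed agent could exploit.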
Feminist research has already extensively discussed that the law is not objective but gendered; although the doctrinal law itself is objective, its practices relate differently to the genders. It is expected that, similar to the feminist literature discussions, in the future doctrinal laws may be influenced by robot stereotypes, which subsequently may have an effect upon the notions of fairness and justice, especially in cases involving robots and humans. As such, discrimination may be evident, especially considering that people already have different expectations from machines (ANI) than from other humans, and are more forgiving of errors made by humans. Therefore, even if the law is exactly the same for both humans and robots, its application may be biased, and discrimination may be witnessed. To what extent the law for humans and robots will be the same is not clear, as robots are expected to have traits that may not be available to humans to the same degree, e.g., quick reaction, quick situational analysis, perfect knowledge of the law, etc. As such, the doctrinal view of the law may impose stricter conditions or be applied differently, which would lead to discrimination between humans and robots.
How we will treat robots and position them in society is heavily debated. It is a common belief today that robots should be treated only as machines (as slaves) and, therefore, that they should serve humanity in that role. This position can be considered similar to Aristotle’s natural slaves in his “Politics”. For Aristotle and the majority of his contemporaries, slavery was justified and accepted, something that might be claimed for robot utilization in the modern world. Aristotle referred to the slaves as “living tools” whose labor is their proper use, something that can also be claimed for robots. After all, by delegating work to slaves (which was also considered degrading in ancient Greece), it was possible for Athenians to have adequate free time to devote themselves to intellectual pursuits such as science, philosophy, architecture, etc. The question that arises is whether robots should be treated as Aristotle’s natural slaves, in order to take over (mechanized or unwanted) activities and enable the world’s humans to stop worrying about everyday basic aspects and to have enough time and tools to pursue intellectual activities and personal dreams. In this case, robots should not be given any rights, and they would act as a sophisticated tool and enabler for humanity. One could very well argue that for ANI this could be the case, but things may be more complicated once AGI or ASI is achieved.
Even if it is decided to grant rights to robots and recognize them as legal subjects, there might be a long way until these are applied, as history has shown in similar matters. In post-17th-century Europe, the idea of freedom and equality between men prevailed. In contract theory, the relation between the rulers and the citizens is regulated, and the free individuals rule their private sphere, while they consent to obey laws, pay taxes, and serve in the army (public sphere). However, women, servants, black people in the western context, etc., were explicitly excluded, and they were related to the “bonus pater familias”. Hence, protection in the private sphere, e.g., protection from the spouse, becomes controversial. The state can protect vertical relationships, e.g., from state abuse, but excludes protection from horizontal relationships, e.g., gendered, class-based, and racist ones. If robots are to be seen as simply under the rule of individuals (who might ultimately be responsible for them), then we might relive history and its aspects pertaining, e.g., to slavery, oppression, mishandling, and racism, all of which would this time be redirected against robots. However, as mentioned, with the rapid advances of technology, in the near future robots may look and act indistinguishably from humans, which may result in the emergence of such problematic behaviors overall, behaviors that may not remain constrained and may expand also against humans (once more). Such aspects, however, have no place in a welfare society. This exclusion of robots may be problematic, and at least in history, similar exclusions have raised debates on the legal subject and what is demanded/expected from it.
Handling the tension between the need to be objective and, at the same time, considering the embodied persons, especially if the latter are robots recognized as legal subjects (Barfield and Pagallo 2020), increases complexity. In addition, as discussed, the law has to consider the embodied, in particular living persons and their circumstances, which in the future is going to mean both humans and robots. Some efforts towards the regulation of AI are currently ongoing in the European Union (European Commission 2021). An alternative to all these considerations would be to avoid such challenging aspects altogether by simply not building sentient robots (Hassler 2017) and considering them in the context of Aristotle’s natural slaves. However, such neo-Luddism (Jones 2006) may not serve humanity in the long run, as history has shown with similar revolutionary technological advances (Makridakis 2017). Some even consider that such a choice might not even be possible once AGI/ASI levels have been achieved. Hence, embracing a symbiotic relationship with AI robots in an informed and well-thought-out manner may be the best way forward.

9 Conclusions

The diversity of aspects that pertain to the intersectional interplay among law, robots, and society demonstrates the complexities associated with them. The systematic investigation under the prism of different perspectives, i.e., Law Perspectives, Social Perspectives, Gender Perspectives, Economic Perspectives, and Ethical Perspectives, shows that issues raised under one perspective have a significant interplay with the other perspectives. It is clear that our society is not yet ready for this rapidly approaching paradigm shift, and too many aspects are insufficiently discussed and assessed. Robots and AI will create a new era for humanity, but how this era will manifest itself and what the societal implications of the symbiosis of robots and humans will be remains unclear. If AI achieves AGI or ASI levels and empowers humanoid robots, both the legal and the societal discussions will need to be revisited. At AGI or ASI levels, robots would be smarter than humans, and humanity would potentially have created the next evolutionary step, which may benefit it or, as some consider, be its demise. In any case, just as parents hope that their children will be smarter than they are and do better in life, humanity, in giving birth to AGI/ASI, can hope that it will make the world better and provide solutions to complex problems that we cannot solve today. Any change is seen with skepticism and potentially fear; however, a symbiosis with robots and AI could be the key differentiating factor for the long-term survivability of humans and act as an enabler for global human benefits and a new era of prosperity.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

Barfield W, Pagallo U (2020) Advanced Introduction to Law and Artificial Intelligence. Edward Elgar Publishing, Cheltenham, UK / Northampton, MA
Bostrom N (2014) Superintelligence: Paths, Dangers, Strategies, 1st edn. Oxford University Press, Oxford, UK
Danaher J, McArthur N (2017) Robot Sex: Social and Ethical Implications. MIT Press, Cambridge, MA
Delvaux M (2017) Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). Tech. rep., Committee on Legal Affairs, European Parliament. https://tinyurl.com/y3rrvguj
Doyle B (2016) Do robots create jobs? The data says yes! In: Proceedings of ISR 2016: International Symposium on Robotics, pp 1–5
Dyrkolbotn S (2017) A typology of liability rules for robot harms. In: A World with Robots, Springer International Publishing, pp 119–133. https://doi.org/10.1007/978-3-319-46667-5_9
European Commission (1985) On the approximation of the laws, regulations and administrative provisions of the member states concerning liability for defective products. Directive 85/374/EEC, European Commission. https://tinyurl.com/y4zssyax
European Commission (2019) Liability for artificial intelligence and other emerging digital technologies. Tech. rep., Directorate-General for Justice and Consumers, European Commission. https://tinyurl.com/hy4hjkut
Frey CB, Osborne M (2013) The future of employment: How susceptible are jobs to computerisation? Tech. rep., Oxford Martin School, University of Oxford, Oxford. https://doi.org/10.1016/j.techfore.2016.08.019, https://tinyurl.com/y3csa4u2
Gilligan C (1982) In a Different Voice. Harvard University Press, Cambridge
Given L (2008) The SAGE Encyclopedia of Qualitative Research Methods. SAGE Publications, Inc. https://doi.org/10.4135/9781412963909
IEEE (2018) Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. Tech. rep., The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. https://tinyurl.com/y2egvzxx
Jones SE (2006) Against Technology: From the Luddites to Neo-Luddism. Taylor & Francis
Karnouskos S (2017) The Interplay of Law, Robots and Society, in an Artificial Intelligence Era. Master's thesis, Umeå University, Sweden
Kurzweil R (2006) The Singularity Is Near: When Humans Transcend Biology. Penguin (Non-Classics)
Lin P, Abney K, Bekey GA (2012) Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press
Malle BF, Scheutz M, Arnold T, Voiklis J, Cusimano C (2015) Sacrifice one for the good of many? In: Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, ACM. https://doi.org/10.1145/2696454.2696458
Mansell W, Meteyard B, Thomson A (2015) A Critical Introduction to Law, 4th edn. Routledge
Manyika J, Chui M, Miremadi M, Bughin J, George K, Willmott P, Dewhurst M (2017) A future that works: Automation, employment, and productivity. Tech. rep., McKinsey Global Institute. https://tinyurl.com/ycuxnkjb
Massaro TM, Norton H (2016) Siri-ously? Free Speech Rights and Artificial Intelligence. Northwestern University Law Review 110(5)
Miyake N, Ishiguro H, Dautenhahn K, Nomura T (2011) Robots with children: Practices for human-robot symbiosis. In: Proceedings of the 6th International Conference on Human-Robot Interaction - HRI '11, ACM Press. https://doi.org/10.1145/1957656.1957659
Nomura T, Sugimoto K, Syrdal DS, Dautenhahn K (2012) Social acceptance of humanoid robots in Japan: A survey for development of the Frankenstein Syndrome Questionnaire. In: 2012 12th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2012), IEEE. https://doi.org/10.1109/humanoids.2012.6651527
Palmerini E, et al. (2014) Guidelines on regulating robotics. Tech. rep., Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics (RoboLaw) Project. https://tinyurl.com/mso7n6c
Rossi F (2016) Artificial Intelligence: Potential Benefits and Ethical Considerations. Tech. rep., Policy Department C: Citizens' Rights and Constitutional Affairs, European Parliament. https://tinyurl.com/y3tl6esk
Russell S, Norvig P (2010) Artificial Intelligence: A Modern Approach, 3rd edn. Prentice Hall Series in Artificial Intelligence, Prentice Hall
Rödel C, Stadler S, Meschtscherjakov A, Tscheligi M (2014) Towards autonomous cars. In: Proceedings of the 6th International Conference on Automotive User Interfaces and Interactive Vehicular Applications, ACM. https://doi.org/10.1145/2667317.2667330
Scheutz M, Arnold T (2016) Are we ready for sex robots? In: 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, HRI '16. https://doi.org/10.1109/hri.2016.7451772
Shay LA, Hartzog W, Nelson J, Larkin D, Conti G (2016) Confronting automated law enforcement. In: Robot Law, Edward Elgar Publishing, pp 235–273. https://doi.org/10.4337/9781783476732.00019
US NSTC (2016) Artificial Intelligence, Automation, and the Economy. Tech. rep., US Executive Office of the President, National Science and Technology Council Committee on Technology (NSTC). https://tinyurl.com/h4ekpt2
Zhang D, Mishra S, Brynjolfsson E, Etchemendy J, Ganguli D, Grosz B, Lyons T, Manyika J, Niebles JC, Sellitto M, Shoham Y, Clark J, Perrault R (2021) The Artificial Intelligence 2021 Index Report. Tech. rep., Stanford University. https://aiindex.stanford.edu/report/
