
2020 | Book

Artificial intelligence - When do machines take over?

Author: Prof. Dr. Klaus Mainzer

Publisher: Springer Berlin Heidelberg

Book series: Technik im Fokus


About this book

Everybody knows them: smartphones that talk to us, wristwatches that record our health data, workflows that organize themselves automatically, cars, airplanes and drones that control themselves, traffic and energy systems with autonomous logistics, and robots that explore distant planets are technical examples of a networked world of intelligent systems. Machine learning is dramatically changing our civilization. We rely more and more on efficient algorithms, because otherwise we would not be able to cope with the complexity of our civilization's infrastructure. But how secure are AI algorithms? This challenge is taken up in the 2nd edition: Complex neural networks are fed and trained with huge amounts of data (big data). The number of necessary parameters explodes exponentially. Nobody knows exactly what is going on in these "black boxes". In machine learning we need more explainability and accountability of causes and effects in order to be able to decide ethical and legal questions of responsibility (e.g., in autonomous driving or medicine). Besides causal learning, we also analyze procedures of testing and verification to obtain certified AI programs. Since its inception, AI research has been associated with great visions of the future of mankind. AI is already a key technology that will decide the global competition between social systems. "Artificial Intelligence and Responsibility" is another central addition to the 2nd edition: How should we secure our individual liberty rights in the AI world? This book is a plea for technology design: AI must prove itself as a service in society.

Table of Contents

Frontmatter
Chapter 1. Introduction: What Is AI?
Abstract
That was not a science fiction scenario: these were AI technologies that are technically feasible today and are being developed as part of computer science and engineering. Traditionally, AI (Artificial Intelligence) was understood as the simulation of intelligent human thought and action. This definition suffers from the fact that "intelligent human thinking" and "acting" are not themselves defined. Furthermore, man is made the yardstick of intelligence, although evolution has produced many organisms with varying degrees of "intelligence". In addition, we have long been surrounded in technology by "intelligent" systems that control our civilization independently and efficiently, but often differently from humans.
Klaus Mainzer
Chapter 2. A Short History of AI
Abstract
In ancient language, an automaton is an apparatus that can act independently (autonomously). According to the ancient understanding, self-activity characterizes living organisms. Reports on hydraulic and mechanical automata are already found in ancient literature, against the background of the technology of the time. In the Jewish tradition at the end of the Middle Ages, the Golem is described as a human-like machine. The Golem can be programmed with combinations of letters from the "Book of Creation" (Hebrew: Sefer Jezira) to protect the Jewish people in times of persecution.
At the beginning of the modern era, the construction of automata was approached from a technical and scientific point of view. Leonardo da Vinci's construction plans for automata are known from the Renaissance. In the Baroque era, automata were built on the basis of watchmaking technology. P. Jaquet-Droz designed complicated clockworks that were built into human-like dolls. His "androids" play the piano, draw pictures and write sentences. The French physician and philosopher J. O. de La Mettrie sums up the concept of life and automata in the age of mechanics: "The human body is a machine that winds its (drive) springs itself".
Klaus Mainzer
Chapter 3. Logical Thinking Becomes Automatic
Abstract
In the first phase of AI research, the search for general problem-solving methods was successful at least in formal logic. A mechanical procedure was given for proving the logical truth of formulas. The procedure could also be carried out by a computer program and introduced automatic theorem proving into computer science.
The basic idea is easy to understand. In algebra, letters x, y, z… are combined by arithmetic operations such as addition (+) or subtraction (−). The letters serve as placeholders (variables) into which numbers can be inserted. In formal logic, propositions are represented by variables A, B, C…, which are connected by logical connectives such as "and" (∧), "or" (∨), "if-then" (→) and "not" (¬). The propositional variables serve as placeholders for statements that are either true or false. For example, the logical formula A ∧ B, by inserting the true statements 1 + 3 = 4 for A and 4 = 2 + 2 for B, is transformed into the true statement 1 + 3 = 4 ∧ 4 = 2 + 2. In arithmetic, this leads to the true conclusion 1 + 3 = 4 ∧ 4 = 2 + 2 → 1 + 3 = 2 + 2. But, in general, the conclusion A ∧ B → C is not valid. On the other hand, the conclusion A ∧ B → A is logically valid in general, since inserting any true or false statements for A and B always yields a true overall statement.
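Such validity can be checked mechanically by running through all truth assignments. The following minimal sketch (not from the book; the helper names implies and is_valid are illustrative) tests the two formulas mentioned above:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

def is_valid(formula, num_vars):
    # A formula is logically valid if it is true under every truth assignment.
    return all(formula(*values) for values in product([True, False], repeat=num_vars))

# A ∧ B → A is valid: every assignment of A and B makes it true.
print(is_valid(lambda a, b: implies(a and b, a), 2))      # True

# A ∧ B → C is not generally valid: A = B = True, C = False falsifies it.
print(is_valid(lambda a, b, c: implies(a and b, c), 3))   # False
```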
Klaus Mainzer
Chapter 4. Systems Become Experts
Abstract
Knowledge-based expert systems are computer programs that store and accumulate knowledge about a specific domain and automatically draw conclusions from that knowledge in order to offer solutions to concrete problems in that domain. In contrast to a human expert, however, the knowledge of an expert system is limited to a specialized information base without general and structural knowledge about the world.
In order to build an expert system, the knowledge of the expert must be laid down in rules, translated into a programming language and processed with a problem-solving strategy. The architecture of an expert system therefore consists of the following components: knowledge base, problem-solving component (derivation system), explanatory component, knowledge acquisition component and dialogue component. The coordination of these components is shown in Fig. 4.1.
Knowledge is the key factor in the representation of an expert system. There are two kinds of knowledge. The first kind concerns the facts of the field of application, which are recorded in textbooks and journals. Equally important is the second kind: the practice of the respective field of application. It is the heuristic knowledge on which judgement and successful problem-solving practice in the field are based, knowledge of experience and the art of successful presumption, which a human expert acquires only through many years of professional work.
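How a knowledge base of rules and a simple problem-solving component interact can be sketched in a few lines. The facts and rules below are invented for illustration (they are not from the book); the problem-solving component is a plain forward-chaining loop:

```python
# Hypothetical miniature expert system: a knowledge base of facts and if-then
# rules, plus a forward-chaining problem-solving component that derives new
# facts until nothing more can be concluded.
facts = {"fever", "cough"}

rules = [
    ({"fever", "cough"}, "flu_suspected"),   # if fever and cough, then suspect flu
    ({"flu_suspected"}, "recommend_rest"),   # if flu is suspected, then recommend rest
]

def forward_chain(facts, rules):
    """Apply every rule whose conditions are satisfied; repeat until a fixed point."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# {'fever', 'cough', 'flu_suspected', 'recommend_rest'}
```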
Klaus Mainzer
Chapter 5. Computers Learn to Speak
Abstract
Against the background of knowledge-based systems, Turing's famous question, which moved early AI researchers, can be taken up again: Can these systems "think"? Are they "intelligent"? The analysis shows that knowledge-based expert systems, like conventional computer programs, are based on algorithms. Even the separation of knowledge base and problem-solving strategy does not change this, because both components of an expert system must be represented in algorithmic data structures in order to finally become programmable on a computer.
This also applies to the realization of natural language by computers. One example is J. Weizenbaum's language program ELIZA. Like a human expert, ELIZA simulates a psychiatrist talking to a patient. The program consists of rules on how to react to certain sentence patterns of the patient with certain sentence patterns of the "psychiatrist". In general, it is a matter of recognizing or classifying rules with regard to their applicability in situations. In the simplest case, the equality of two symbol structures must be determined, as done by the EQUAL function for symbol lists in the LISP programming language. An extension arises when terms and variables are included in the symbolic expressions, e.g.
(x B C)
(A B y)
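Matching such expressions means finding a binding of the variables that makes the two symbol lists equal; here x must be bound to A and y to C. A minimal sketch of this kind of pattern matching (not Weizenbaum's original code; lowercase symbols are treated as variables purely for illustration) could look like this:

```python
def is_variable(symbol):
    """Treat lowercase symbols as variables, uppercase symbols as constants."""
    return symbol.islower()

def match(expr1, expr2):
    """Match two flat symbol lists; return variable bindings or None on failure."""
    if len(expr1) != len(expr2):
        return None
    bindings = {}
    for s1, s2 in zip(expr1, expr2):
        for var, other in ((s1, s2), (s2, s1)):
            if is_variable(var):
                if var in bindings and bindings[var] != other:
                    return None          # conflicting binding for the same variable
                bindings[var] = other
                break
        else:
            if s1 != s2:                 # two different constants cannot match
                return None
    return bindings

print(match(["x", "B", "C"], ["A", "B", "y"]))   # {'x': 'A', 'y': 'C'}
print(match(["x", "B", "C"], ["A", "D", "y"]))   # None
```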
Klaus Mainzer
Chapter 6. Algorithms Simulate Evolution
Abstract
Information processing, in computers as in humans, is carried out with artificial or natural languages. These are only special cases of symbolic representation systems, which can also be specified for genetic information systems. Genetic languages, with their grammatical rules, represent molecular processes that generate molecular sequences with genetic meanings. The key to understanding these molecular languages is not us humans, but the molecular systems that make use of them. We humans, with our kind of information processing, are only beginning to decipher and understand these languages and their rules. Formal language and grammar theory, together with algorithmic complexity theory, provide first approaches.
For genetic information, the nucleic acid language with the alphabet of the four nucleotides and the amino acid language with the alphabet of the twenty amino acids are used. In the nucleic acid language, a hierarchy of different language layers can be distinguished, starting at the lowest level of the nucleotides with the basic symbols A, C, G, T or U and reaching up to the highest level of the genes, in which the complete genetic information of a cell is stored. Each intermediate language level consists of units of the previous level and gives instructions for various functions such as transcription or replication of sequences.
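As an illustration only (a strongly simplified sketch, not from the book): transcription can be read as a symbol-for-symbol rewriting over the nucleotide alphabet, in which a DNA template strand is mapped to a messenger RNA sequence, with U taking the place of T:

```python
# Simplified model of transcription: each DNA base of the template strand is
# replaced by its RNA complement (real transcription involves much more).
TRANSCRIPTION = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_strand: str) -> str:
    """Map a DNA template sequence to the corresponding mRNA sequence."""
    return "".join(TRANSCRIPTION[base] for base in template_strand)

print(transcribe("TACGGT"))   # AUGCCA
```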
Klaus Mainzer
Chapter 7. Neural Networks Simulate Brains
Abstract
Brains are examples of complex information systems based on neuronal information processing. What distinguishes them from other information systems is their capacity for cognition, emotion and consciousness. The term cognition (from Latin cognoscere, "to recognize", "to perceive", "to know") is used to describe abilities such as perception, learning, thinking, memory and language. Which processes of synaptic signal processing underlie these abilities? Which neuronal subsystems are involved?
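The simplest formal model of such neuronal signal processing is a threshold unit in the style of McCulloch and Pitts, sketched below purely as an illustration (the weights and threshold are invented; real synaptic processing is far richer):

```python
# A single artificial neuron: incoming signals are weighted, summed and
# compared against a threshold; the weights very roughly play the role of
# synaptic strengths (positive = excitatory, negative = inhibitory).
def neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([1, 1, 0], [0.6, 0.6, -1.0], threshold=1.0))   # 1 (fires)
print(neuron([1, 1, 1], [0.6, 0.6, -1.0], threshold=1.0))   # 0 (inhibited)
```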
Klaus Mainzer
Chapter 8. Robots Become Social
Abstract
With the increasing complexity and automation of technology, robots are becoming service providers for industrial society. The evolution of living organisms today inspires the construction of robotic systems for different purposes [1]. As the complexity and difficulty of the service tasks increase, the use of AI technology becomes unavoidable. And robots do not have to look like humans: just as airplanes do not look like birds, robots can take other shapes adapted to their function. So the question arises for what purpose humanoid robots should possess which properties and abilities.
Humanoid robots should be able to act directly in the human environment, which is adapted to human proportions. The design ranges from the width of corridors and the height of a stair step to the positions of door handles. For non-humanoid robots (e.g., on wheels and with grippers instead of hands), large investments in environmental changes would have to be made. In addition, all tools that humans and robots should use together are adapted to human needs. Not to be underestimated is the experience that humanoid forms psychologically facilitate the emotional handling of robots.
Klaus Mainzer
Chapter 9. Infrastructures Become Intelligent
Abstract
The nervous system of human civilization is now the Internet. Up to now, the Internet has only been a ("stupid") database of signs and images whose meaning emerges in the user's mind. In order to cope with the complexity of the data, the network must learn to recognize and understand meanings independently. This is already achieved by semantic networks that are equipped with expandable background information (ontologies, concepts, relations, facts) and logical reasoning rules in order to independently supplement incomplete knowledge and draw conclusions. For example, people can be identified even though the directly entered data only partially describe the person. Here again it becomes apparent that semantics and the understanding of meanings do not depend on human consciousness.
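The idea can be sketched with a tiny knowledge graph of facts (subject, relation, object) and a single inference rule; all entities and relations below are invented for illustration and are not from the book:

```python
# Hypothetical semantic-network fragment: facts as triples plus one reasoning
# rule that derives a connection not stated explicitly in the data.
facts = {
    ("person_42", "has_birthdate", "1955-10-28"),
    ("person_42", "founded", "ExampleCorp"),
    ("ExampleCorp", "located_in", "Springfield"),
}

def query(facts, subject=None, relation=None, obj=None):
    """Return all triples matching a (possibly partial) description."""
    return [
        (s, r, o) for (s, r, o) in facts
        if (subject is None or s == subject)
        and (relation is None or r == relation)
        and (obj is None or o == obj)
    ]

# Rule: whoever founded a company located in a city is linked to that city.
derived = set(facts)
for s, _, company in query(facts, relation="founded"):
    for _, _, city in query(facts, subject=company, relation="located_in"):
        derived.add((s, "linked_to", city))

print(("person_42", "linked_to", "Springfield") in derived)   # True
```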
Klaus Mainzer
Chapter 10. From Natural and Artificial Intelligence to Superintelligence?
Abstract
Classical AI research is based on the capabilities of a program-controlled computer which, according to Church's thesis, is in principle equivalent to a Turing machine. Following Moore's law, gigantic computing and storage capacities have been achieved, which alone made possible the AI services of the WATSON supercomputer (cf. Sect. 5.2). But the power of supercomputers comes at a price: an energy consumption comparable to that of a small town. All the more impressive is the human brain, which realizes capabilities comparable to WATSON's (e.g., speaking and understanding a natural language) with the energy consumption of an incandescent light bulb. At this point, at the latest, one is impressed by the efficiency of the neuromorphic systems that evolution has produced. Is there a common principle underlying these evolutionary systems that we can use in AI?
Biomolecules, cells, organs, organisms, and populations are highly complex dynamic systems in which many elements interact. Complexity research in physics, chemistry, biology, and ecology deals with the question of how the interactions of many elements of a complex dynamic system (e.g., atoms in materials, biomolecules in cells, cells in organisms, organisms in populations) can lead to orders and structures, but also to chaos and decay.
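How the interaction of simple elements can produce both order and chaos is already visible in a one-dimensional toy system. The following sketch (an illustration, not an example from the book) iterates the logistic map x_{n+1} = r·x_n·(1 − x_n); a small change of the control parameter r separates convergence to a stable order from chaotic behaviour:

```python
# Logistic map: a minimal nonlinear dynamical system whose behaviour switches
# from order to chaos as the control parameter r is increased.
def iterate(r, x0=0.2, steps=1000, keep=5):
    """Iterate the logistic map and return the last few values."""
    x = x0
    trajectory = []
    for _ in range(steps):
        x = r * x * (1 - x)
        trajectory.append(x)
    return trajectory[-keep:]

print(iterate(2.8))   # settles onto a single fixed point (order)
print(iterate(3.9))   # keeps wandering irregularly (chaos)
```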
Klaus Mainzer
Chapter 11. How Safe Is Artificial Intelligence?
Abstract
Machine learning is dramatically changing our civilization. We rely more and more on efficient algorithms, because otherwise the complexity of our civilization's infrastructure would not be manageable: our brains are too slow and hopelessly overwhelmed by the amounts of data we have to deal with. But how secure are AI algorithms? In practical applications, learning algorithms rely on models of neural networks, which are themselves extremely complex. They are fed and trained with huge amounts of data. The number of necessary parameters explodes exponentially. Nobody knows exactly what happens inside these "black boxes" in detail. What often remains is a statistical trial-and-error procedure. But how should questions of responsibility be decided, e.g., in autonomous driving or in medicine, if the methodological foundations remain obscure?
In machine learning with neural networks, we need more explainability and accountability of causes and effects in order to be able to decide ethical and legal questions of responsibility!
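How quickly the number of parameters grows can be made plausible with a back-of-the-envelope calculation for a fully connected network; the layer widths below are invented purely for illustration:

```python
# Parameter count of a fully connected feed-forward network: each pair of
# adjacent layers contributes (inputs x outputs) weights plus one bias per
# output, so the total grows with the product of layer widths.
def count_parameters(layer_widths):
    """Weights plus biases of a fully connected feed-forward network."""
    return sum(
        n_in * n_out + n_out
        for n_in, n_out in zip(layer_widths, layer_widths[1:])
    )

print(count_parameters([784, 1000, 1000, 10]))   # 1796010 parameters
```

Even this small example already has almost 1.8 million parameters, none of which can be interpreted individually, which is what makes the "black box" problem described above so pressing.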
Klaus Mainzer
Chapter 12. Artificial Intelligence and Responsibility
Abstract
Artificial intelligence (AI) is an international future topic in research and technology, economy and society. But research and technical innovation in AI are not enough. AI technology will dramatically change the way we live and work. The global competition of social systems (e.g., Chinese state monopolism, US-American IT giants, the European market economy with individual freedom rights) will depend decisively on how we position our European value system in the AI world.
Klaus Mainzer
Metadata
Title
Artificial intelligence - When do machines take over?
Author
Prof. Dr. Klaus Mainzer
Copyright year
2020
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-59717-0
Print ISBN
978-3-662-59716-3
DOI
https://doi.org/10.1007/978-3-662-59717-0
