
2020 | Book

Artificial General Intelligence

13th International Conference, AGI 2020, St. Petersburg, Russia, September 16–19, 2020, Proceedings

Edited by: Ben Goertzel, Aleksandr I. Panov, Alexey Potapov, Prof. Dr. Roman Yampolskiy

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 13th International Conference on Artificial General Intelligence, AGI 2020, held in St. Petersburg, Russia, in September 2020.

The 30 full papers and 8 short papers presented in this book were carefully reviewed and selected from 60 submissions. The papers cover topics such as AGI architectures, artificial creativity and AI safety, transfer learning, AI unification and benchmarks for AGI.

Table of Contents

Frontmatter
AGI and the Knight-Darwin Law: Why Idealized AGI Reproduction Requires Collaboration

Can an AGI create a more intelligent AGI? Under idealized assumptions, for a certain theoretical type of intelligence, our answer is: “Not without outside help”. This is a paper on the mathematical structure of AGI populations when parent AGIs create child AGIs. We argue that such populations satisfy a certain biological law. Motivated by observations of sexual reproduction in seemingly-asexual species, the Knight-Darwin Law states that it is impossible for one organism to asexually produce another, which asexually produces another, and so on forever: that any sequence of organisms (each one a child of the previous) must contain occasional multi-parent organisms, or must terminate. By proving that a certain measure (arguably an intelligence measure) decreases when an idealized parent AGI single-handedly creates a child AGI, we argue that a similar Law holds for AGIs.

Samuel Allen Alexander
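A schematic restatement of the claim above, in illustrative notation of our own (the AGIs A_i and the measure m are stand-ins, not the paper's formalism): if an idealized AGI single-handedly creates a child AGI, the measure strictly decreases, so no infinite single-parent chain can exist.

    \[
      A_i \longrightarrow A_{i+1} \ \text{(single parent)}
      \quad\Longrightarrow\quad m(A_{i+1}) < m(A_i),
    \]
    \[
      \text{hence any chain } A_1 \longrightarrow A_2 \longrightarrow A_3 \longrightarrow \cdots
      \text{ of single-parent creations must be finite, provided } m \text{ is well-founded.}
    \]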
Error-Correction for AI Safety

The complex socio-technological debate underlying safety-critical and ethically relevant issues pertaining to AI development and deployment extends across heterogeneous research subfields and involves in part conflicting positions. In this context, it seems expedient to generate a minimalistic joint transdisciplinary basis disambiguating the references to specific subtypes of AI properties and risks for an error-correction in the transmission of ideas. In this paper, we introduce a high-level transdisciplinary system clustering of ethical distinction between antithetical clusters of Type I and Type II systems which extends a cybersecurity-oriented AI safety taxonomy with considerations from psychology. Moreover, we review relevant Type I AI risks, reflect upon possible epistemological origins of hypothetical Type II AI from a cognitive sciences perspective and discuss the related human moral perception. Strikingly, our nuanced transdisciplinary analysis yields the figurative formulation of the so-called AI safety paradox identifying AI control and value alignment as conjugate requirements in AI safety. Against this backdrop, we craft versatile multidisciplinary recommendations with ethical dimensions tailored to Type II AI safety. Overall, we suggest proactive and importantly corrective instead of prohibitive methods as common basis for both Type I and Type II AI safety.

Nadisha-Marie Aliman, Pieter Elands, Wolfgang Hürst, Leon Kester, Kristinn R. Thórisson, Peter Werkhoven, Roman Yampolskiy, Soenke Ziesche
Artificial Creativity Augmentation

Creativity has been associated with multifarious descriptions whereby one exemplary common definition depicts creativity as the generation of ideas that are perceived as both novel and useful within a certain social context. In the face of adversarial conditions taking the form of global societal challenges, ranging from climate change and AI risks to technological unemployment, this paper motivates future research on artificial creativity augmentation (ACA) to indirectly support the generation of requisite defense strategies and solutions. This novel term is of ambiguous nature since it subsumes two research directions: (1) artificially augmenting human creativity, but also (2) augmenting artificial creativity. In this paper, we examine and extend recent creativity research findings from psychology and cognitive neuroscience to identify potential indications on how to work towards (1). Moreover, we briefly analyze how research on (1) could possibly inform progress towards (2). Overall, while human enhancement but also the implementation of powerful AI are often perceived as ethically controversial, future ACA research could even appear socially desirable.

Nadisha-Marie Aliman, Leon Kester
The Hierarchical Memory Based on Compartmental Spiking Neuron Model

The paper proposes an architecture for a dynamically changing hierarchical memory based on a compartmental spiking neuron model. The aim of the study is to create biologically inspired memory models suitable for implementing the memorization of features and high-level concepts. The presented architecture allows us to describe the bidirectional hierarchical structure of associative concepts related both in terms of generality and in terms of part-whole, with the ability to restore information both in the direction of generalization and in the direction of decomposition of the general concept into its component parts. A feature of the implementation is the use of a compartmental neuron model, which allows a neuron to memorize objects by adding new sections to its dendritic tree. This opens the possibility of creating neural structures that are adaptive to significant changes in the environment.

Aleksandr Bakhshiev, Anton Korsakov, Lev Stankevich
The Dynamics of Growing Symbols: A Ludics Approach to Language Design by Autonomous Agents

Even with the relative ascendancy of sub-symbolic approaches to AI, the symbol grounding problem presents an ongoing challenge to work in artificial general intelligence. Prevailing ontology design practices typically presuppose the transparency of the relation between semantics and syntax by transcendently stipulating it extrinsic to the system, rather than providing a platform for the internal development of this relation through agents’ interactions endogenous to the system. Drawing on theoretical resources from ecological psychology, dynamical systems theory, and interactive computation, this work suggests an inversion of the symbol grounding problem in order to analyze how the symbolic regime can emerge out of causally embedded dynamical interactions within a system of autonomous intelligent agents. Under this view, syntactic-symbols come to be stabilized from other signs as constraints harnessing the dynamics of agents’ inter-actions, where the functional effects generated by such constrained dynamics give rise to an internal characterization of semantics broadly aligned with Brandom’s semantic pragmatism. Finally, ludics—a protological framework based on interactive computation—provides a formal model to concretely describe this continuity between syntax and semantics arising within and through the regulative dynamics of interactions in a multi-agent system. Accordingly, this bottom-up approach to grounding the symbolic order in dynamics could provide the conditions for artificial agents to engage in autonomous language design, thus equipping themselves with a powerful cognitive technology for intersubjective coordination and (re)structuring ontologies within a community of agents.

Skye Bougsty-Marshall
Approach for Development of Engineering Tools Based on Knowledge Graphs and Context Separation

During the development of complex multidisciplinary systems, engineers often face problems associated with the complexity of the target system and with the interactions between different groups of engineers and subcontractors. Potentially, new agents such as AGI systems could also participate in these interactions. This paper presents ideas on how to develop engineering tools based on knowledge graphs to manage this complexity. It proposes an approach that makes it possible for agents with different cognition contexts to understand each other, along with a simple data format for context separation. At the end of the article, we show an example of how this approach can be applied to build a tool that uses engineering data in the proposed format.

Nikita Debelov, Petr Mukhachev, Anton Ivanov
Towards Dynamic Process Composition in the DSO Cognitive Architecture

In recent works, the DSO Cognitive Architecture’s design is enhanced by incorporating the concept of the Global Workspace Theory (GWT). The theory proposes that consciousness is realised through the competition of massive, specialised, parallelised processes and thus parallelised, unsynchronised cognitive processes become sequential through such bottleneck. Due to the concurrent nature of DSO Cognitive Architecture, coordination of the different parallel processes through this competition mechanism can be difficult and if not handled properly, will create inconsistent results. In this work, we propose a preliminary framework to coordinate the different processes by process composition which borrows concepts from automated planning. Processes, its argument signature and its output are abstracted into higher level type abstractions which can be used to compose with other processes based on matching the output types to argument types. This is known as process composition and it represents a sketch of how different process can coordinate with one another. We combined this with the current design of the DSO Cognitive Architecture and illustrate an example in crowd anomaly detection.

Zhiyuan Du, Khin Hua Ng
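A minimal sketch of the type-matching composition idea described above, assuming an illustrative Python representation (the process names, abstract types, and the greedy chaining strategy are our own stand-ins, not the DSO implementation):

    # Minimal sketch of type-based process composition (illustrative only; the
    # names and types below are assumptions, not taken from the DSO paper).
    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class Process:
        name: str
        arg_types: List[str]   # abstract types the process consumes
        out_type: str          # abstract type the process produces
        fn: Callable

    def compose(start_type: str, goal_type: str, processes: List[Process]) -> List[Process]:
        """Greedily chain processes by matching each output type to the next argument type."""
        plan, current, remaining = [], start_type, list(processes)
        while current != goal_type and remaining:
            step = next((p for p in remaining if current in p.arg_types), None)
            if step is None:
                raise ValueError(f"no process accepts type {current!r}")
            plan.append(step)
            current = step.out_type
            remaining.remove(step)
        return plan

    # Example: video frames -> detected persons -> crowd statistics -> anomaly score
    procs = [
        Process("detect_persons", ["VideoFrame"], "PersonSet", lambda x: x),
        Process("crowd_stats", ["PersonSet"], "CrowdStats", lambda x: x),
        Process("anomaly_score", ["CrowdStats"], "AnomalyScore", lambda x: x),
    ]
    print([p.name for p in compose("VideoFrame", "AnomalyScore", procs)])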
SAGE: Task-Environment Platform for Evaluating a Broad Range of AI Learners

While several tools exist for training and evaluating narrow machine learning (ML) algorithms, their design generally does not follow a particular or explicit evaluation methodology or theory. Inversely so for more general learners, where many evaluation methodologies and frameworks have been suggested, but few specific tools exist. In this paper we introduce a new framework for broad evaluation of artificial intelligence (AI) learners, and a new tool that builds on this methodology. The platform, called SAGE (Simulator for Autonomy & Generality Evaluation), works for training and evaluation of a broad range of systems and allows detailed comparison between narrow and general ML and AI. It provides a variety of tuning and task construction options, allowing isolation of single parameters across complexity dimensions. SAGE is aimed at helping AI researchers map out and compare strengths and weaknesses of divergent approaches. Our hope is that it can help deepen understanding of the various tasks we want AI systems to do and the relationship between their composition, complexity, and difficulty for various AI systems, as well as contribute to building a clearer research road map for the field. This paper provides an overview of the framework and presents results of an early use case.

Leonard M. Eberding, Kristinn R. Thórisson, Arash Sheikhlar, Sindri P. Andrason
Post-turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence

This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to “suitable branches of thought” reflected the engineering level of his time. In fact, the Turing “imitation game” employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed “the wall” that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.

Albert Efimov
Self-explaining AI as an Alternative to Interpretable AI

The ability to explain decisions made by AI systems is highly sought after, especially in domains where human lives are at stake such as medicine or autonomous vehicles. While it is often possible to approximate the input-output relations of deep neural networks with a few human-understandable rules, the discovery of the double descent phenomena suggests that such approximations do not accurately capture the mechanism by which deep neural networks work. Double descent indicates that deep neural networks typically operate by smoothly interpolating between data points rather than by extracting a few high level rules. As a result, neural networks trained on complex real world data are inherently hard to interpret and prone to failure if asked to extrapolate. To show how we might be able to trust AI despite these problems we introduce the concept of self-explaining AI. Self-explaining AIs are capable of providing a human-understandable explanation of each decision along with confidence levels for both the decision and explanation. Some difficulties with this approach along with possible solutions are sketched. Finally, we argue it is important that deep learning based systems include a “warning light” based on techniques from applicability domain analysis to warn the user if a model is asked to extrapolate outside its training distribution.

Daniel C. Elton
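A minimal sketch of the proposed "warning light", assuming a simple k-nearest-neighbour applicability-domain check (the distance measure and the 95th-percentile threshold are illustrative assumptions, not the paper's specific technique):

    # Flag inputs that lie far from the training distribution, as measured by the
    # mean distance to the k nearest training points.
    import numpy as np

    def knn_distance(x, train, k=5):
        d = np.linalg.norm(train - x, axis=1)
        return np.sort(d)[:k].mean()

    def fit_warning_light(train, k=5, percentile=95):
        # calibrate the threshold on leave-one-out distances within the training set
        dists = [knn_distance(train[i], np.delete(train, i, axis=0), k)
                 for i in range(len(train))]
        return np.percentile(dists, percentile)

    def check(x, train, threshold, k=5):
        return "WARNING: outside training domain" if knn_distance(x, train, k) > threshold else "ok"

    rng = np.random.default_rng(0)
    train = rng.normal(size=(200, 4))
    thr = fit_warning_light(train)
    print(check(rng.normal(size=4), train, thr))   # in-distribution query
    print(check(np.full(4, 10.0), train, thr))     # extrapolation -> warning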
AGI Needs the Humanities

Central scholars in AI have argued for extending the search for new AI technology beyond the tried-and-tested biologically and mathematically-inspired algorithms. Following in their footsteps, areas in the humanities are introduced as possible inspirations for novel human-like AI. Topics discussed include play-acting, literature as the field researching both imagination and metaphors, linguistics, music, and hermeneutics. In our ambition to reach general intelligence, we cannot afford to ignore these avenues of research.

Sam Freed
Report on “AI and Human Thought and Emotion”

No fundamental new ideas have appeared in AI for decades because of a deadlocked discussion between the technologists and their philosophical critics. Both sides claim possession of the one (dogmatic) truth: Technologists are committed to writing code, while critics insist that AI bears no resemblance to how humans cope in the world. The book charts a middle course between the critics and practitioners of AI, remaining committed to writing code while maintaining a fixed gaze on the human condition. This is done by reviving a technique long-shunned in cognitive science: Introspection. Introspection has been rejected as a scientific method since 1913, but technology is committed to “what works” rather than to science’s “best explanation”. Introspection is shown to be both a legitimate and a promising source of ideas for AI. The book details the development process of AI based on introspection, from the initial introspective descriptions to working code. This book is unique in that it starts with philosophical (and historical) discussions, and ends with examples of working novel algorithms. The book was originally a PhD thesis. It was edited for book form, with two new chapters added.

Sam Freed
Cognitive Machinery and Behaviours

In this paper we propose to merge theories and principles explored in artificial intelligence and cognitive sciences into a reference architecture for human-level cognition or AGI. We describe a functional model of information processing systems inspired by several established theories: deep reinforcement learning mechanisms and grounded cognition theories from artificial intelligence research; dual-process theory from psychology; global-workspace theory, the somatic markers hypothesis, and Hebbian theory from neurobiology; and the mind-body problem from philosophy. We use a formalism inspired by flow-graph and cybernetics representations. We call our proposed model IPSEL, for Information Processing System with Emerging Logic. Its main assumption is the emergence of a symbolic form of processing from connectionist activity guided by a self-generated evaluation signal. We also discuss artificial equivalents of concept elaboration, common sense and social interactions. This transdisciplinary work can be considered a proposition for an artificial general intelligence design. It contains elements that will be implemented in further experiments. Its current aim is to be an analytic tool for human interactions with present and future artificial intelligence systems and a formal basis for discussion of AGI features.

Bryan Fruchart, Benoit Le Blanc
Combinatorial Decision Dags: A Natural Computational Model for General Intelligence

A novel computational model (CoDD) utilizing combinatory logic to create higher-order decision trees is presented. A theoretical analysis of general intelligence in terms of the formal theory of pattern recognition and pattern formation is outlined, and shown to take especially natural form in the case where patterns are expressed in CoDD language. Relationships between logical entropy and algorithmic information, and Shannon entropy and runtime complexity, are shown to be elucidated by this approach. Extension to the quantum computing case is also briefly discussed.

Ben Goertzel
What Kind of Programming Language Best Suits Integrative AGI?

What kind of programming language would be most appropriate to serve the needs of integrative, multi-paradigm, multi-software-system approaches to AGI? This question is broached via exploring the more particular question of how to create a more scalable and usable version of the “Atomese” programming language that forms a key component of the OpenCog AGI design (an “Atomese 2.0”). It is tentatively proposed that the core of Atomese 2.0 should be a very flexible framework of rewriting rules for rewriting a metagraph (where the rules themselves are represented within the same metagraph, and some of the intermediate data created and used during the rule-interpretation process may be represented in the same metagraph). Among other requirements, this framework should: support concurrent rewriting of the metagraph according to rules that are labeled with various sorts of uncertainty-quantifications and with various sorts of types associated with various type systems, using a gradual typing approach to enable a mixture of rules and other metagraph nodes/links associated with various type systems and untyped metagraph nodes/links not associated with any type system; allow reasonable efficiency and scalability, including in concurrent and distributed processing contexts, in the case where a large percentage of processing time is occupied with evaluating static pattern-matching queries on specific subgraphs of a large metagraph (including a rich variety of queries, such as matches against nodes representing variables and matches against whole subgraphs); and allow efficient and convenient invocation and manipulation of external libraries for carrying out processing that is not efficiently done in Atomese directly. Among the formalisms we will very likely want to implement within this framework is probabilistic dependent-linear-typed lambda calculus or something similar, perhaps with a Pure IsoType approach to dependent type inheritance. Thus we want the general framework to support reasonably efficient/convenient operations within this particular formalism, as an example.

Ben Goertzel
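A toy sketch of the “rules live in the same graph they rewrite” idea, using plain Python tuples as atoms. The atom shapes, rule format, and naive matching loop are illustrative assumptions and not the actual Atomese 2.0 design:

    # Atoms are nested tuples stored in one set; a rule is itself an atom of the
    # form ("Rule", premise1, premise2, conclusion) with "?x"-style variables.
    store = set()

    def add(atom):
        store.add(atom)
        return atom

    rule = add(("Rule",
                ("Inheritance", "?a", "?b"),
                ("Inheritance", "?b", "?c"),
                ("Inheritance", "?a", "?c")))

    def match(pattern, atom, binding):
        if isinstance(pattern, str) and pattern.startswith("?"):
            if pattern in binding and binding[pattern] != atom:
                return None
            return {**binding, pattern: atom}
        if isinstance(pattern, tuple) and isinstance(atom, tuple) and len(pattern) == len(atom):
            for p, a in zip(pattern, atom):
                binding = match(p, a, binding)
                if binding is None:
                    return None
            return binding
        return binding if pattern == atom else None

    def substitute(pattern, binding):
        if isinstance(pattern, tuple):
            return tuple(substitute(p, binding) for p in pattern)
        return binding.get(pattern, pattern)

    def run_rules():
        new = set()
        for r in [a for a in store if isinstance(a, tuple) and a[0] == "Rule"]:
            _, p1, p2, conclusion = r
            for a1 in list(store):
                b1 = match(p1, a1, {})
                if b1 is None:
                    continue
                for a2 in list(store):
                    b2 = match(p2, a2, b1)
                    if b2 is not None:
                        new.add(substitute(conclusion, b2))
        store.update(new)

    add(("Inheritance", "cat", "mammal"))
    add(("Inheritance", "mammal", "animal"))
    run_rules()
    print(("Inheritance", "cat", "animal") in store)   # True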
Guiding Symbolic Natural Language Grammar Induction via Transformer-Based Sequence Probabilities

A novel approach to automated learning of syntactic rules governing natural languages is proposed, based on using probabilities assigned to sentences (and potentially longer word sequences) by transformer neural network language models to guide symbolic learning processes like clustering and rule induction. This method exploits the learned linguistic knowledge in transformers, without any reference to their inner representations; hence, the technique is readily adaptable to the continuous appearance of more powerful language models. We show a proof-of-concept example of our proposed technique, using it to guide unsupervised symbolic link-grammar induction methods drawn from our prior research.

Ben Goertzel, Andrés Suárez-Madrigal, Gino Yu
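A minimal sketch of the probability signal that drives the symbolic induction: score candidate word sequences with a pretrained causal language model via the Hugging Face transformers library. GPT-2 is used here as a stand-in; the paper's actual model choice may differ.

    # Assign log-probabilities to word sequences with a pretrained causal LM; the
    # scores can then guide symbolic clustering and grammar-rule induction.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def sentence_logprob(sentence: str) -> float:
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            out = model(ids, labels=ids)
        # out.loss is the mean negative log-likelihood per predicted token;
        # multiply by the number of predicted tokens to get a total log-probability
        return -out.loss.item() * (ids.shape[1] - 1)

    # Example: compare alternative word orders when deciding which rule to keep
    print(sentence_logprob("the cat sat on the mat"))
    print(sentence_logprob("cat the on sat mat the"))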
Embedding Vector Differences Can Be Aligned with Uncertain Intensional Logic Differences

The DeepWalk algorithm is used to assign embedding vectors to nodes in the Atomspace weighted, labeled hypergraph that is used to represent knowledge in the OpenCog AGI system, in the context of an application to probabilistic inference regarding the causes of longevity based on data from biological ontologies and genomic analyses. It is shown that vector difference operations between embedding vectors are, in appropriate conditions, approximately alignable with “intensional difference” operations between the hypergraph nodes corresponding to the embedding vectors. This relationship hints at a broader functorial mapping between uncertain intensional logic and vector arithmetic, and opens the door for using embedding vector algebra to guide intensional inference control.

Ben Goertzel, Mike Duncan, Debbie Duong, Nil Geisweiller, Hedra Seid, Abdulrahman Semrie, Man Hin Leung, Matthew Ikle’
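A tiny sketch of the alignment check described above: compare the difference vectors of two analogous node pairs by cosine similarity. The random vectors and node names below are hypothetical stand-ins for the DeepWalk embeddings the paper derives from the Atomspace hypergraph.

    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    rng = np.random.default_rng(1)
    emb = {name: rng.normal(size=64) for name in
           ["long_lived_gene_A", "short_lived_gene_A",
            "long_lived_gene_B", "short_lived_gene_B"]}   # hypothetical node names

    diff_1 = emb["long_lived_gene_A"] - emb["short_lived_gene_A"]
    diff_2 = emb["long_lived_gene_B"] - emb["short_lived_gene_B"]

    # If vector differences track intensional differences, analogous pairs should
    # yield difference vectors with high cosine similarity.
    print(cosine(diff_1, diff_2))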
Delta Schema Network in Model-Based Reinforcement Learning

This work is devoted to an unresolved problem of Artificial General Intelligence: the inefficiency of transfer learning. One of the mechanisms used to address this problem in the area of reinforcement learning is the model-based approach. In this paper we expand the schema networks method, which allows extracting the logical relationships between objects and actions from environment data. We present algorithms for training a Delta Schema Network (DSN), predicting future states of the environment, and planning actions that lead to positive reward. DSN shows strong transfer learning performance on classic Atari game environments.

Andrey Gorodetskiy, Alexandra Shlychkova, Aleksandr I. Panov
Information Digital Twin—Enabling Agents to Anticipate Changes in Their Tasks

Agents are designed to perform specific tasks. The agent developers define the agent’s environment, the task states, the possible actions to navigate the different states, and the sensors and effectors necessary for it to perform its task. Once trained and deployed, the agent is monitored to ensure that it performs as designed. During operations, some changes that were not foreseen in the task design might negatively impact the agent’s performance. In this case, the agent operator would capture the performance drop, identify possible causes, and work with the agent developer to update the agent design. This model works well in centralized environments. However, agents are increasingly deployed in decentralized, dynamic environments, where changes are not centrally coordinated. In this case, updating the agent task design to accommodate unforeseen changes might require a considerable effort from the agent operators. The paper suggests an approach to enable agents to anticipate and identify deviations in their performance on their own, thus improving the process of adapting to changes. The approach introduces an additional machine-learning-based component, which we call an information digital twin (IDT), dedicated to predicting task changes. That is, an agent would then have two components: the original component, which focuses on finding the best actions to achieve the agent task, and the IDT, dedicated to detecting changes impacting the agent task. Considering general artificial intelligence agents, where an agent might manage different tasks in various domains, the proposed IDT might be a component that enables AGI agents to maintain their performance in the face of change.

Wael Hafez
‘OpenNARS for Applications’: Architecture and Control

We present a pragmatic design for a general-purpose reasoner incorporating Non-Axiomatic Logic (NAL) and Non-Axiomatic Reasoning System (NARS) theory. The architecture and attentional control differ in many respects from the OpenNARS implementation. Key changes include an event-driven control process, separation of sensorimotor from semantic inference, and a different handling of resource constraints.

Patrick Hammer, Tony Lofthouse
Towards AGI Agent Safety by Iteratively Improving the Utility Function

While it is still unclear if agents with Artificial General Intelligence (AGI) could ever be built, we can already use mathematical models to investigate potential safety systems for these agents. We present work on an AGI safety layer that creates a special dedicated input terminal to support the iterative improvement of an AGI agent’s utility function. The humans who switched on the agent can use this terminal to close any loopholes that are discovered in the utility function’s encoding of agent goals and constraints, to direct the agent towards new goals, or to force the agent to switch itself off. An AGI agent may develop the emergent incentive to manipulate the above utility function improvement process, for example by deceiving, restraining, or even attacking the humans involved. The safety layer will partially, and sometimes fully, suppress this dangerous incentive. This paper generalizes earlier work on AGI emergency stop buttons. We aim to make the mathematical methods used to construct the layer more accessible, by applying them to an MDP model. We discuss two provable properties of the safety layer, identify still-open issues, and present ongoing work to map the layer to a Causal Influence Diagram (CID).

Koen Holtman
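An illustrative sketch (not the paper's construction) of the basic loop such a safety layer supports: the agent replans by value iteration whenever an improved utility function is pushed through the dedicated input terminal. The one-dimensional world and the two utility functions below are assumptions for demonstration only.

    import numpy as np

    N_STATES, ACTIONS, GAMMA = 5, (-1, +1), 0.9   # 1-D chain world, move left/right

    def value_iteration(reward, iters=200):
        V = np.zeros(N_STATES)
        for _ in range(iters):
            for s in range(N_STATES):
                V[s] = max(reward[s2] + GAMMA * V[s2]
                           for a in ACTIONS
                           for s2 in [min(max(s + a, 0), N_STATES - 1)])
        return V

    def policy(V, reward):
        return [max(ACTIONS, key=lambda a: reward[min(max(s + a, 0), N_STATES - 1)]
                    + GAMMA * V[min(max(s + a, 0), N_STATES - 1)])
                for s in range(N_STATES)]

    utility_v1 = np.array([0., 0., 0., 0., 1.])   # original goal: reach the right end
    print(policy(value_iteration(utility_v1), utility_v1))

    # Humans discover a loophole and push an improved utility through the terminal:
    utility_v2 = np.array([1., 0., 0., 0., 0.])   # corrected goal: reach the left end
    print(policy(value_iteration(utility_v2), utility_v2))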
Learning to Model Another Agent’s Beliefs: A Preliminary Approach

It is often useful for one agent to predict what another agent will believe after receiving new information. In fact, in order to appear intelligent in situations involving multiple interacting agents, we fundamentally need to be able to predict how changes in the world will affect the beliefs of others. This involves two distinct processes. First, we need to devise a model that captures the way that beliefs change in response to new information. Second, we need to observe the behaviour of individual agents to determine their specific beliefs. In the AI literature, these problems have been addressed by distinct communities. In this paper, we bring these two communities together by demonstrating how an agent can learn a model of belief change from observed behaviour. We argue that this process is essential for natural interaction in an AGI setting, but it has not been addressed to date in a unified manner.

Aaron Hunter, Paul McCarlie
An Attentional Control Mechanism for Reasoning and Learning

This paper discusses the attentional control mechanisms of several systems in the context of Artificial General Intelligence. The attentional control mechanism of OpenNARS, an implementation of the Non-Axiomatic Reasoning System for research purposes, is introduced with a description of the related functions and demonstration examples. The paper also implicitly compares the OpenNARS attentional mechanism with those found in other Artificial General Intelligence systems.

Peter Isaev, Patrick Hammer
Hyperdimensional Representations in Semiotic Approach to AGI

The paper is dedicated to the use of distributed hyperdimensional vectors to represent sensory information in the sign-based cognitive architecture, in which the image component of a sign is encoded by a causal matrix. The hyperdimensional representation allows us to update the precedent dimension of the causal matrix and accumulate information in it during the interaction of the system with the environment. Due to the high dimensionality of vectors, it is possible to reduce the representation and reasoning on the entities related to them to simple operations on vectors. In this work we show how hyperdimensional representations are embedded in an existing sign formalism and provide examples of visual scene encoding.

Alexey K. Kovalev, Aleksandr I. Panov, Evgeny Osipov
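A generic hyperdimensional-computing sketch of the vector operations such representations build on: binding role and filler hypervectors by elementwise multiplication, bundling by majority sign, and recovering a filler by unbinding. This illustrates standard vector-symbolic operations, not the paper's specific sign-based formalism or causal-matrix encoding.

    import numpy as np

    D = 10_000
    rng = np.random.default_rng(0)
    hv = lambda: rng.choice([-1, 1], size=D)          # random bipolar hypervector
    cos = lambda a, b: float(a @ b) / D               # similarity in [-1, 1]

    color, shape = hv(), hv()                         # role vectors
    red, circle = hv(), hv()                          # filler vectors

    # Encode a scene object as the bundle of role-filler bindings
    obj = np.sign(color * red + shape * circle)

    # Unbind: multiplying by a role vector approximately recovers its filler
    recovered = obj * color
    print(cos(recovered, red), cos(recovered, circle))   # clearly positive vs. near zero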
The Conditions of Artificial General Intelligence: Logic, Autonomy, Resilience, Integrity, Morality, Emotion, Embodiment, and Embeddedness

There are different difficulties in defining a fundamental concept; it often happens that some conditions are too strong or just surplus, and others are too weak or just lacking. There is no clearly agreed conception of intelligence, let alone artificial intelligence and artificial general intelligence. Still it can be significant and useful to (attempt to) elucidate the defining or possible characteristics of a fundamental concept. In the present paper we discuss the conditions of artificial general intelligence, some of which may be too strong and others of which may be too weak. Among other things, we focus upon logic, autonomy, resilience, integrity, morality, emotion, embodiment, and embeddedness, and articulate the nature of them from different conceptual points of view. And we finally discuss how to test artificial general intelligence, proposing a new kind of Turing-type tests based upon the intelligence-for-survival view. Overall, we believe that explicating the nature of artificial general intelligence arguably contributes to a deeper understanding of intelligence per se.

Yoshihiro Maruyama
Position Paper: The Use of Engineering Approach in Creation of Artificial General Intelligence

A possible practical engineering approach to the creation of general artificial intelligence is considered. The chosen approach is based on a modular hierarchical representation of knowledge, where each module uses its own methods of knowledge representation and processing. Work with knowledge is done by a hierarchical multi-agent system. A description of the system’s individual elements and information about the current state of development are given.

Vasiliy Mazin
How Do You Test the Strength of AI?

Creating Strong AI means developing artificial intelligence to the point where the machine’s intellectual capability is essentially equal to a human’s. Science is one of the summits of human intelligence, the other being art. Scientific research consists in creating hypotheses that are limited-applicability models (methods) implying lossy information compression. In this article, we show that this paradigm is not unique to science and is common to the most developed areas of human activity, such as business and engineering. Thus, we argue, a Strong AI should possess the capability to build such models. Still, the known tests for confirming human-level AI do not address this consideration. Based on the above, we suggest a series of six tests of rising complexity to check whether an AI has achieved human-level intelligence (Explanation, Problem-setting, Refutation, New phenomenon prediction, Business creation, Theory creation), five of which are new to the AGI literature.

Nikolay Mikhaylovskiy
Omega: An Architecture for AI Unification

We introduce the open-ended, modular, self-improving Omega AI unification architecture which is a refinement of Solomonoff’s Alpha architecture, as considered from first principles. The architecture embodies several crucial principles of general intelligence including diversity of representations, diversity of data types, integrated memory, modularity, and higher-order cognition. We retain the basic design of a fundamental algorithmic substrate called an “AI kernel” for problem solving and basic cognitive functions like memory, and a larger, modular architecture that re-uses the kernel in many ways. Omega includes eight representation languages, which are briefly introduced. We review the broad software architecture, higher-order cognition, self-improvement, modular neural architectures, and intelligent agents.

Eray Özkural
Analyzing Elementary School Olympiad Math Tasks as a Benchmark for AGI

Many benchmarks and challenges for AI and AGI exist, which help to reveal both short- and long-term topics and directions of research. We analyze elementary school Olympiad math tasks as a possible benchmark for AGI that can occupy a free niche, capturing some limitations of existing neural and symbolic systems better than other existing language-understanding and mathematical tests. A detailed comparison and an analysis of the implications for AGI are provided.

Alexey Potapov, Oleg Scherbakov, Vitaly Bogdanov, Vita Potapova, Anatoly Belikov, Sergey Rodionov, Artem Yashenko
The Meaning of Things as a Concept in a Strong AI Architecture

Artificial intelligence is becoming an integral part of human life. At the same time, the modern, widely used approaches, which work successfully due to the availability of enormous computing power, are based on ideas about the workings of the brain suggested more than half a century ago. The proposed model describes the general principles of information processing by the human brain, taking into account the latest findings. The neuroscientific grounding of this model and its applicability to the creation of AGI or Strong AI are discussed in the article. In this model, the cortical minicolumn is the primary computing processor, working with a semantic description of information. The minicolumn transforms incoming information into its interpretation within a specific context. In this way, hypotheses about the interpretation of information are verified in parallel by comparing them with the information in the memory of each minicolumn of a cortical zone, and at the same time a significant context, i.e. the rule by which the information is transformed, is determined. The meaning of information is defined as an interpretation that is close to the information available in the memory of a minicolumn. Behavior is the result of modeling possible situations. Using this approach could allow the creation of strong AI or AGI.

Alexey Redozubov, Dmitry Klepikov
Toward a General Believable Model of Human-Analogous Intelligent Socially Emotional Behavior

Social virtual actors need to interact with users emotionally, convincing them of their ability to understand human minds. For this to happen, an artificial emotional intelligence is needed, capable of believable behavior in real-life situations. Summarizing recent work of the authors, the present paper extends the general state-of-the-art framework of emotional AGI, using the emotional Biologically Inspired Cognitive Architecture (eBICA) as a basis. In addition to appraisals, other kinds of fluents are added to the model: somatic markers, feelings, emotional biases, moods, etc. Their integration is achieved on the basis of semantic maps and moral schemas. It is anticipated that this new level of artificial general socially emotional intelligence will complement next-generation AGI, helping it to merge into human society on equal terms with its human members.

Alexei V. Samsonovich, Arthur A. Chubarov, Daria V. Tikhomirova, Alexander A. Eidln
Autonomous Cumulative Transfer Learning

Autonomous knowledge transfer from a known task to a new one requires discovering task similarities and knowledge generalization without the help of a designer or teacher. How transfer mechanisms in such learning may work is still an open question. Transfer of knowledge makes most sense for learners for whom novelty is regular (other things being equal), as in the physical world. When new information must be unified with existing knowledge over time, a cumulative learning mechanism is required, increasing the breadth, depth, and accuracy of an agent’s knowledge as experience accumulates. Here we address the requirements for what we refer to as autonomous cumulative transfer learning (ACTL) in novel task-environments, including implementation and evaluation criteria, and how it relies on the process of similarity and ampliative reasoning. While the analysis here is theoretical, the fundamental principles of the cumulative learning mechanism in our theory have been implemented and evaluated in a running system described previously. We present arguments for the theory from an empirical as well as analytical viewpoint.

Arash Sheikhlar, Kristinn R. Thórisson, Leonard M. Eberding
New Brain Simulator II Open-Source Software

This paper introduces the open-source software project, Brain Simulator II, simplifying experimentation into various facets of AGI. The software seamlessly marries spiking neural networks with symbolic AI algorithms. It supports a large array of simple neurons (of various models) and groups of neurons collected into “Modules”, backed by custom software. 3D and 2D simulators allow a virtual entity to move about, have binocular vision and touch, and merge this information with spoken input. Information is captured in a Universal Knowledge Store module which represents information in links between nodes. Continuing development will enhance these capabilities.

Charles J. Simon
Experience-Specific AGI Paradigms

This position paper suggests the existence of a plurality of “general-purpose” AGI paradigms, each specific to a domain of experience. These paradigms are studied to answer the question of which AGI will be developed first. Finally, in order to make the case for AGI based on symbolic experience, preliminary results from Semiotic AI are discussed.

Valerio Targon
Psychological Portrait of a Virtual Agent in the Teleport Game Paradigm

The previously created videogame platform Teleport allows us to study anonymous social interactions among actors of various natures (human and virtual), ensuring their indistinguishability, which implies believable behavior of the virtual actor. The present study found a connection between a human player’s behavior in the Teleport game and her psychological portrait constructed using the sixteen-factor Cattell personality test for empathy. Assuming that this connection extends to the perception of virtual actor behavior, the game session data were analyzed to infer behavioral characteristics of virtual actors. Based on this data analysis, we constructed psychological characteristics of models of a virtual actor (a bot). Partner and emotional characteristics of bots were defined, and their psychological portrait was constructed based on the registered bot behavior. Personal characteristics such as courage, sociability, calmness, balance, and loyalty were attributed to bots and compared to analogous characteristics of human players.

Daria V. Tikhomirova, Maria V. Zavrajnova, Ellina A. Rodkina, Yasamin Musayeva, Alexei V. Samsonovich
Logical Probabilistic Biologically Inspired Cognitive Architecture

We consider a task-oriented approach to AGI, in which any cognitive problem, perhaps one exceeding human ability, makes sense given a criterion for its solution. Within this approach, we consider the task of purposeful behavior in a complex probabilistic environment, where behavior is organized through self-learning. For that purpose, we suggest a cognitive architecture that relies on the theory of functional systems. The architecture is based on the main notions of this theory: goal, result, and anticipation of the result. The logical structure of this theory was analyzed and used to develop a control system for purposeful behavior. This control system contains a hierarchy of functional systems that organizes purposeful behavior. The control system was used to model agents solving a foraging task.

Evgenii E. Vityaev, Alexander V. Demin, Yurii A. Kolonin
An Architecture for Real-Time Reasoning and Learning

This paper compares the various conceptions of “real-time” in the context of AI, as different ways of taking the processing time into consideration when problems are solved. An architecture of real-time reasoning and learning is introduced, which is one aspect of the AGI system NARS. The basic idea is to form problem-solving processes flexibly and dynamically at run time by using inference rules as building blocks and incrementally self-organizing the system’s beliefs and skills, under the restriction of time requirements of the tasks. NARS is designed under the Assumption of Insufficient Knowledge and Resources, which leads to an inherent ability to deal with varying situations in a timely manner.

Pei Wang, Patrick Hammer, Hongzheng Wang
A Model for Artificial General Intelligence

A recently developed Functional Modeling Framework suggests that all models of cognition can be represented by a minimally reducible set of functions, and proposes criteria for a model of cognition to have the potential for the general problem-solving ability commonly recognized as true human intelligence. This human-centric functional modeling approach is intended to enable different models of AGI to be compared more easily, so that research can reliably converge on a single understanding. This in turn opens the possibility of massively collaborative interdisciplinary projects to research and implement models of consciousness or cognition, where the difficulty of communicating very different ideas, particularly in the case of new models without a significant following, has so far prevented such massive collaboration in practice. This paper summarizes a model of cognition developed within this framework.

Andy E. Williams
Backmatter
Metadata
Title
Artificial General Intelligence
Edited by
Ben Goertzel
Aleksandr I. Panov
Alexey Potapov
Prof. Dr. Roman Yampolskiy
Copyright year
2020
Electronic ISBN
978-3-030-52152-3
Print ISBN
978-3-030-52151-6
DOI
https://doi.org/10.1007/978-3-030-52152-3