Artificial General Intelligence
12th International Conference, AGI 2019, Shenzhen, China, August 6–9, 2019, Proceedings
- 2019
- Book
- Editors: Patrick Hammer, Pulin Agrawal, Dr. Ben Goertzel, Matthew Iklé
- Book Series: Lecture Notes in Computer Science
- Publisher: Springer International Publishing
About this book
This book constitutes the refereed proceedings of the 12th International Conference on Artificial General Intelligence, AGI 2019, held in Shenzhen, China, in August 2019.
The 16 full papers and 5 poster papers presented in this book were carefully reviewed and selected from 30 submissions. The papers cover AGI architectures, mathematical foundations, philosophical foundations, safety and ethics, and ideas drawn from neuroscience and cognitive science.
Table of Contents
Frontmatter
AGI Brain: A Learning and Decision Making Framework for Artificial General Intelligence Systems Based on Modern Control Theory
Mohammadreza Alidoust
Abstract: In this paper, a unified learning and decision-making framework for artificial general intelligence (AGI) based on modern control theory is presented. The framework, called AGI Brain, considers intelligence as a form of optimality and tries to duplicate intelligence using a unified strategy. AGI Brain benefits from the powerful modelling capability of state-space representation, as well as the learning ability of neural networks. The model emulates the three learning stages human beings pass through in learning about their surrounding world. The model was tested on three different continuous and hybrid (continuous and discrete) Action/State/Output/Reward (ASOR) space scenarios in deterministic single-agent/multi-agent worlds. Successful simulation results demonstrate the multi-purpose applicability of AGI Brain in deterministic worlds.
Augmented Utilitarianism for AGI Safety
Nadisha-Marie Aliman, Leon Kester
Abstract: In light of ongoing progress in research on artificial intelligent systems exhibiting a steadily increasing problem-solving ability, the identification of practicable solutions to the value alignment problem in AGI Safety is becoming a matter of urgency. In this context, one preeminent challenge that has been addressed by multiple researchers is the adequate formulation of utility functions or equivalents reliably capturing human ethical conceptions. However, the specification of suitable utility functions harbors the risk of “perverse instantiation”, for which no final consensus on responsible proactive countermeasures has been achieved so far. Against this background, we propose a novel non-normative socio-technological ethical framework, denoted Augmented Utilitarianism, which directly alleviates the perverse instantiation problem. We elaborate on how, augmented by AI and more generally by science and technology, it might allow a society to craft and update ethical utility functions while jointly undergoing a dynamical ethical enhancement. Further, we elucidate the need to consider embodied simulations in the design of utility functions for AGIs aligned with human values. Finally, we discuss future prospects regarding the usage of the presented scientifically grounded ethical framework and mention possible challenges.
Orthogonality-Based Disentanglement of Responsibilities for Ethical Intelligent Systems
Nadisha-Marie Aliman, Leon Kester, Peter Werkhoven, Roman Yampolskiy
Abstract: In recent years, the implementation of meaningfully controllable advanced intelligent systems whose goals are aligned with ethical values as specified by human entities has emerged as a key subject of investigation of international relevance across diverse AI-related research areas. In this paper, we present a novel transdisciplinary and Systems Engineering oriented approach denoted “orthogonality-based disentanglement”, which jointly tackles both the underlying control problem and the value alignment problem while unraveling the corresponding responsibilities of different stakeholders. It does so based on the distinction of two orthogonal axes: the problem-solving ability of these intelligent systems on the one hand, and the ethical abilities they exhibit based on quantitatively encoded human values on the other. Moreover, we introduce the notion of explicitly formulated ethical goal functions ideally encoding what humans should want, and exemplify a possible class of “self-aware” intelligent systems with the capability to reliably adhere to these human-defined goal functions. Beyond that, we discuss an attainable transformative socio-technological feedback loop that could result from the introduced orthogonality-based disentanglement approach, and briefly elaborate on how the framework additionally provides valuable hints with regard to the coordination subtask in AI Safety. Finally, we point out remaining crucial challenges as incentive for future work.
Extending MicroPsi’s Model of Motivation and Emotion for Conversational Agents
Joscha Bach, Murilo Coutinho, Liza Lichtinger
Abstract: We describe a model of emotion and motivation that extends the MicroPsi motivation model for applications in conversational agents and tracking human emotions. The model is based on reactions of the agent to satisfaction and frustration of physiological, cognitive or social needs, and to changes of the agent’s expectations regarding such events. The model covers motivational states, affective states (modulation of cognition), feelings (sensations that correspond to a situation appraisal), and emotions (conceptual aggregates of motivational states, modulators and feelings), and is currently being adapted to express emotional states.
WILLIAM: A Monolithic Approach to AGI
Arthur Franz, Victoria Gogulya, Michael Löffler
Abstract: We present WILLIAM, an inductive programming system based on the theory of incremental compression. It builds representations by incrementally stacking autoencoders made up of trees of general Python functions, thereby stepwise compressing data. It is able to solve a diverse set of tasks, including compressing and predicting simple sequences, recognizing geometric shapes, writing code based on test cases, self-improving by solving some of its own problems, and playing tic-tac-toe when attached to AIXI, without being specifically programmed for any of these tasks.
An Inferential Approach to Mining Surprising Patterns in Hypergraphs
Nil Geisweiller, Ben Goertzel
Abstract: A novel pattern mining algorithm and a novel formal definition of surprisingness are introduced, both framed in the context of formal reasoning. Hypergraphs are used to represent the data in which patterns are mined, the patterns themselves, and the control rules for the pattern miner. The implementation of these tools in the OpenCog framework, as part of a broader multi-algorithm approach to AGI, is described.
Toward Mapping the Paths to AGI
Ross Gruetzemacher, David Paradice
Abstract: There is substantial interest in the research community in a map of the paths to artificial general intelligence (AGI); however, no effort toward this end has been entirely successful. This paper identifies an alternative technique, called scenario network mapping, that is well suited to the difficulties posed in mapping the paths to AGI. The method is discussed, and a modified version of scenario network mapping is proposed, intended specifically for mapping the paths to AGI. Finally, a scenario network mapping workshopping process is proposed to apply this method and develop a map of the paths to AGI. This will hopefully lead to discussion and action in the research community toward using it in a new effort to map the paths to AGI.
Adaptive Neuro-Symbolic Network Agent
Patrick Hammer
Abstract: This paper describes Adaptive Neuro-Symbolic Network Agent, a new design of a sensorimotor agent that adapts to its environment by building concepts based on Sparse Distributed Representations of sensorimotor sequences. Utilizing Non-Axiomatic Reasoning System theory, it is able to learn directional correlative links between concept activations caused by the appearance of observed and derived event sequences. These directed correlations are encoded as predictive links between concepts, and the system uses them for directed concept-driven activation spreading, prediction, anticipatory control, and decision-making, ultimately allowing the system to operate autonomously, driven by current event and concept activity, while working under the Assumption of Insufficient Knowledge and Resources.
An Experimental Study of Emergence of Communication of Reinforcement Learning Agents
Qiong Huang, Doya Kenji
Abstract: The ability to use language is an essential requirement for human-level intelligence. For artificial general intelligence, the ability to learn and to create language is even more important [1]. Most previous models of the learning and emergence of language took successful communication itself as the task target. However, language, or communication in general, should have evolved to improve some fitness of the population of agents. Here we consider whether and how a population of reinforcement learning agents can learn to send signals and to respond to signals for the sake of maximizing their own rewards. We take a communication game tested in human subjects [2, 3, 6], in which the aim is for two players to meet without knowing the exact location of the other. In our decentralized reinforcement learning framework with communicative and physical actions [4], we tested how the number N of usable symbols affects whether the meeting task is successfully achieved and what kinds of signaling and responding are learned. Even though \(N=2\) symbols are theoretically sufficient, the success rate was only 1 to 2%. With \(N=3\) symbols, the success rate was more than 60% and three different signaling strategies were observed. The results indicate the importance of redundancy in signaling degrees of freedom and show that a variety of signaling conventions can emerge in populations of simple independent reinforcement learning agents.
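The decentralized setting described here, where each agent independently learns over actions that combine a physical move with one of N communicative symbols, can be sketched with plain tabular Q-learning. This is a minimal illustration of the general technique; the class, state encoding, and parameter values are assumptions for the sketch, not the paper's actual environment or algorithm.

```python
import random
from collections import defaultdict

class IndependentQLearner:
    """One agent in a decentralized setup: it treats the other agent as
    part of the environment and learns over joint actions that pair a
    physical move with one of n_symbols communicative symbols."""

    def __init__(self, moves, n_symbols, alpha=0.1, gamma=0.9, eps=0.1):
        self.actions = [(m, s) for m in moves for s in range(n_symbols)]
        self.q = defaultdict(float)  # (state, action) -> estimated value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy choice over the joint move/symbol action space.
        if random.random() < self.eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup toward reward plus discounted best
        # next-state value.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# A state can include the symbol last heard from the other agent; this
# coupling is what allows signaling conventions to emerge.
agent = IndependentQLearner(moves=["left", "right", "stay"], n_symbols=3)
state = ("cell0", None)           # own position, last symbol heard
action = agent.act(state)
agent.update(state, action, reward=1.0, next_state=("cell1", 2))
```

Because each learner updates only its own table, success depends on the population stumbling into compatible conventions, which is consistent with the paper's finding that extra signaling degrees of freedom (N=3 rather than the minimal N=2) make that much more likely.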
Arbitrarily Applicable Relational Responding
Robert Johansson
Abstract: The purpose of this paper is to introduce how contemporary behavioral psychology approaches intelligence and higher-order cognitive tasks as instances of so-called arbitrarily applicable relational responding (AARR). We introduce the contemporary theory Relational Frame Theory (RFT), which suggests that the key properties of AARR are mutual entailment, combinatorial entailment, and transformation of stimulus function. Furthermore, AARR is contextually controlled and developed through multiple-exemplar training. We explain these concepts and provide examples of how RFT uses this framework to explain complex cognitive tasks such as language, analogies, a sense of Self, and implicit cognition. Applications of RFT are surveyed. Finally, the relevance of RFT to the AGI audience is discussed.
Programmatic Link Grammar Induction for Unsupervised Language Learning
Alex Glushchenko, Andres Suarez, Anton Kolonin, Ben Goertzel, Oleg Baskov
Abstract: Although natural (i.e. human) languages do not seem to follow a strictly formal grammar, their structural analysis and generation can be approximated by one. Having such a grammar is an important tool for programmatic language understanding. Due to the huge number of natural languages and their variations, processing tools that rely on human intervention are available only for the most popular ones. We explore the problem of inducing a formal grammar for any language without supervision, using the Link Grammar paradigm, from unannotated parses also obtained without supervision from an input corpus. The details of our state-of-the-art grammar induction technology and its evaluation techniques are described, as well as preliminary results of its application to both synthetic and real-world text corpora.
Mental Actions and Modelling of Reasoning in Semiotic Approach to AGI
Alexey K. Kovalev, Aleksandr I. Panov
Abstract: The article expounds the functionality of the cognitive architecture Sign-Based World Model (SBWM) through an algorithm implementing a particular case of reasoning. The SBWM architecture is a multigraph, called a semiotic network, with special rules of activation spreading. In a semiotic network, there are four subgraphs that have specific properties and are composed of constituents of the main SBWM element, the sign. Such subgraphs are called causal networks on images, significances, personal meanings, and names. The semiotic network can be viewed as the memory of an intelligent agent. It is proposed to divide the agent’s memory in the SBWM architecture into a long-term memory consisting of sign prototypes and a working memory consisting of sign instances. The concept of elementary mental actions is introduced as an integral part of the reasoning process. Examples of such actions are provided. The performance of the proposed reasoning algorithm is demonstrated on a model example.
Embodiment as a Necessary a Priori of General Intelligence
David Kremelberg
Abstract: This paper presents the most important neuroscientific findings relevant to embodiment, including findings on the importance of embodiment in the development of higher-order cognitive functioning, including language, and discusses these findings in relation to Artificial General Intelligence (AGI). Research strongly suggests the necessity of embodiment in the individual development of advanced cognition. Generalizing from this body of literature, the conclusions focus on the importance of incorporating a physical body into the development of AGI in a meaningful and profound way in order for AGI to be achieved.
Computable Prediction
Kenshi Miyabe
Abstract: We try to predict the next bit from a given finite binary string when the sequence is sampled from a computable probability measure on the Cantor space. There exists a best betting strategy among a class of effective ones, up to a multiplicative constant; the prediction it induces is called algorithmic probability or universal induction by Solomonoff. The prediction converges to the true induced measure for sufficiently random sequences. However, the prediction is not computable. We propose a framework to study the properties of computable predictions. We prove that all sufficiently general computable predictions also converge to the true induced measure. The class of sequences along which the prediction converges is related to computable randomness. We also discuss the speed of the convergence. We prove that, even when a computable prediction predicts a computable sequence, the speed of the convergence cannot be bounded by a computable function monotonically decreasing to 0.
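The contrast between uncomputable universal induction and computable prediction can be illustrated with the simplest computable next-bit predictor, Laplace's rule of succession. This is an illustrative stand-in, not one of the predictors analyzed in the paper.

```python
def laplace_predict(bits: str) -> float:
    """Computable prediction of the probability that the next bit is 1,
    given a finite binary prefix, by Laplace's rule of succession:
    (k + 1) / (n + 2), where k is the number of 1s among the n bits."""
    n = len(bits)
    k = bits.count("1")
    return (k + 1) / (n + 2)

# On sequences sampled i.i.d. from a Bernoulli(p) measure, this
# prediction converges to p. Unlike Solomonoff induction it is
# computable, at the cost of converging only for a restricted class
# of measures rather than all computable ones.
print(laplace_predict(""))     # 0.5: uniform prior before any data
print(laplace_predict("110"))  # 0.6: two 1s among three bits
```

The paper's question can be read as asking how far this trade-off extends: which computable predictors converge, on which sequences, and how slowly.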
Cognitive Module Networks for Grounded Reasoning
Alexey Potapov, Anatoly Belikov, Vitaly Bogdanov, Alexander Scherbatiy
Abstract: The necessity of neural-symbolic integration becomes evident as more complex problems, like visual question answering, begin to be addressed, going beyond such limited-domain tasks as classification. Many existing state-of-the-art models are designed for a particular task or even benchmark, while general-purpose approaches are rarely applied to a wide variety of tasks with high performance. We propose a hybrid neural-symbolic framework that tightly integrates the knowledge representation and symbolic reasoning mechanisms of the OpenCog cognitive architecture with a contemporary deep learning library, PyTorch, and show how to implement some existing models in our general framework.
Generalized Diagnostics with the Non-Axiomatic Reasoning System (NARS)
Bill Power, Xiang Li, Pei Wang
Abstract: Symbolic reasoning systems have leveraged propositional logic frameworks to build diagnostic tools capable of describing complex artifacts while also allowing for a controlled and efficacious search over failure modes. These diagnostic systems represent a complex and varied context in which to explore general intelligence. This paper explores the application of a different reasoning system to such frameworks, specifically the Non-Axiomatic Reasoning System (NARS). It shows how statements can be built that describe an artifact, and that NARS is capable of diagnosing abnormal states within examples of said artifact.
Cognitive Model of Brain-Machine Integration
Zhongzhi Shi, Zeqin Huang
Abstract: Brain-machine integration is a new intelligent technology and system that combines natural intelligence and artificial intelligence. In order to make this integration effective and co-adaptive, the biological brain and the machine should work collaboratively. A cognitive model of brain-machine integration is proposed, and environment awareness and collaboration approaches are explored in the paper.
Exploration Strategies for Homeostatic Agents
Patrick Andersson, Anton Strandman, Claes Strannegård
Abstract: This paper evaluates two new exploration strategies for artificial animals called animats. Animats are homeostatic agents with the objective of keeping their internal variables as close to optimal as possible. Steps toward the optimum are rewarded and steps away from it punished. Using reinforcement learning for exploration and decision making, the animats can consider predetermined optimal/acceptable levels in light of current levels, giving them greater flexibility for exploration and better survival chances. The paper considers the resulting strategies as evaluated in a range of environments, showing them to outperform standard reinforcement learning, where internal variables are not taken into consideration.
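The reward scheme sketched in this abstract, rewarding steps toward optimal internal levels and punishing steps away from them, can be written as a drive-reduction reward over the animat's internal variables. The function and variable names below are illustrative assumptions, not taken from the paper.

```python
def homeostatic_reward(prev: dict, curr: dict, optimal: dict) -> float:
    """Reward is the reduction in total squared deviation of the
    internal variables from their optimal levels: moving closer to the
    optimum yields positive reward, moving away yields negative reward."""
    def deviation(state):
        return sum((state[v] - optimal[v]) ** 2 for v in optimal)
    return deviation(prev) - deviation(curr)

# Hypothetical internal variables on a 0..1 scale with optimum at 1.0.
optimal = {"energy": 1.0, "water": 1.0}
before  = {"energy": 0.4, "water": 0.90}
after   = {"energy": 0.8, "water": 0.85}  # eating restores energy
print(homeostatic_reward(before, after, optimal) > 0)  # True
```

A standard reinforcement learner can then maximize this signal directly, which is one simple way to make "keeping internal variables near optimal" the agent's objective rather than an externally scripted task reward.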
Lifelong Learning Starting from Zero
Claes Strannegård, Herman Carlström, Niklas Engsner, Fredrik Mäkeläinen, Filip Slottner Seholm, Morteza Haghir Chehreghani
Abstract: We present a deep neural-network model for lifelong learning inspired by several forms of neuroplasticity. The neural network develops continuously in response to signals from the environment. In the beginning the network is a blank slate with no nodes at all. It develops according to four rules: (i) expansion, which adds new nodes to memorize new input combinations; (ii) generalization, which adds new nodes that generalize from existing ones; (iii) forgetting, which removes nodes that are of relatively little use; and (iv) backpropagation, which fine-tunes the network parameters. We analyze the model from the perspective of accuracy, energy efficiency, and versatility and compare it to other network models, finding better performance in several cases.
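The expansion and forgetting rules, (i) and (iii), can be sketched with a toy growing memory that starts empty, adds a node for every unseen input pattern, and prunes nodes whose relative usage falls below a threshold. This is a minimal illustration of grow-and-prune dynamics, not the paper's neural model; the class name and threshold are assumptions.

```python
class GrowingMemory:
    """Blank-slate store: expansion adds a node per new input pattern,
    usage counts track how often each node matches, and forgetting
    removes nodes whose share of all observations is below a cutoff."""

    def __init__(self, forget_ratio: float = 0.05):
        self.nodes = {}          # pattern -> usage count
        self.total = 0
        self.forget_ratio = forget_ratio

    def observe(self, pattern) -> None:
        # Expansion: unseen patterns get a fresh node; seen ones are
        # reinforced by incrementing their usage count.
        self.nodes[pattern] = self.nodes.get(pattern, 0) + 1
        self.total += 1

    def forget(self) -> None:
        # Forgetting: drop nodes used in less than forget_ratio of all
        # observations so far.
        cutoff = self.forget_ratio * self.total
        self.nodes = {p: c for p, c in self.nodes.items() if c >= cutoff}

mem = GrowingMemory()
for p in ["cat"] * 50 + ["dog"] * 49 + ["axolotl"]:
    mem.observe(p)
mem.forget()
print(sorted(mem.nodes))  # ['cat', 'dog']: the one-off pattern is pruned
```

The paper's rules (ii) and (iv) would add generalizing nodes and gradient fine-tuning on top of this skeleton; the point here is only that capacity is allocated and reclaimed by the data stream itself rather than fixed in advance.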
Cumulative Learning
Kristinn R. Thórisson, Jordi Bieger, Xiang Li, Pei Wang
Abstract: An important feature of human learning is the ability to continuously accept new information and unify it with existing knowledge, a process that proceeds largely automatically and without catastrophic side-effects. A generally intelligent machine (AGI) should be able to learn a wide range of tasks in a variety of environments. Knowledge acquisition in partially-known and dynamic task-environments cannot happen all-at-once, and AGI-aspiring systems must thus be capable of cumulative learning: efficiently making use of existing knowledge while learning new things, increasing the scope of ability and knowledge incrementally—without catastrophic forgetting or damaging existing skills. Many aspects of such learning have been addressed in artificial intelligence (AI) research, but relatively few examples of cumulative learning have been demonstrated to date and no generally accepted explicit definition exists of this category of learning. Here we provide a general definition of cumulative learning and describe how it relates to other concepts frequently used in the AI literature.
Learning with Per-Sample Side Information
Roman Visotsky, Yuval Atzmon, Gal Chechik
Abstract: Learning from few samples is a major challenge for parameter-rich models such as deep networks. In contrast, people can learn complex new concepts even from very few examples, suggesting that the sample complexity of learning can often be reduced. We describe an approach to reducing the number of samples needed for learning using per-sample side information. Specifically, we show how to speed up learning by providing textual information about feature relevance, like the presence of objects in a scene or attributes in an image. We also give an improved generalization error bound for this case. We formulate the learning problem using an ellipsoid-margin loss and develop an algorithm that minimizes this loss effectively. Empirical evaluation on two machine vision benchmarks, for scene classification and fine-grained bird classification, demonstrates the benefits of this approach for few-shot learning.
Backmatter
- Title: Artificial General Intelligence
- Editors: Patrick Hammer, Pulin Agrawal, Dr. Ben Goertzel, Matthew Iklé
- Copyright Year: 2019
- Publisher: Springer International Publishing
- Electronic ISBN: 978-3-030-27005-6
- Print ISBN: 978-3-030-27004-9
- DOI: https://doi.org/10.1007/978-3-030-27005-6