
2018 | Book

Computational Intelligence in Music, Sound, Art and Design

7th International Conference, EvoMUSART 2018, Parma, Italy, April 4-6, 2018, Proceedings


About this book

This book constitutes the refereed proceedings of the 7th International Conference on Computational Intelligence in Music, Sound, Art and Design, EvoMUSART 2018, held in Parma, Italy, in April 2018, co-located with the Evo*2018 events EuroGP, EvoCOP and EvoApplications.

The 21 revised full papers presented were carefully reviewed and selected from 33 submissions. The papers cover a wide range of topics and application areas, including: generative approaches to music and visual art; medical art therapy; visualization in virtual reality; jewellery design; interactive evolutionary computation; and the art theory of evolutionary computation.

Table of Contents

Frontmatter
Visual Art Inspired by the Collective Feeding Behavior of Sand-Bubbler Crabs
Abstract
Sand-bubblers are crabs of the genera Dotilla and Scopimera which are known to produce remarkable patterns and structures at tropical beaches. From these pattern-making abilities, we may draw inspiration for digital visual art. A simple mathematical model is proposed and an algorithm is designed that may create such sand-bubbler patterns artificially. In addition, design parameters to modify the patterns are identified and analyzed by computational aesthetic measures. Finally, an extension of the algorithm is discussed that may enable controlling and guiding generative evolution of the art-making process.
Hendrik Richter
Dynamical Music with Musical Boolean Networks
Abstract
An extended Boolean network model is investigated as a possible medium in which a human composer can write music. A Boolean network is a simple discrete-time dynamical system whose state is characterised by the states of its constituent Boolean-valued vertices. The evolution of the system is predetermined by an initial state and the properties of the activation functions associated with each vertex. By associating musical events with the states of the system, its trajectory from a particular start state can be interpreted as a piece of tonal music. The primary source of interest in composing music using a deterministic dynamical system is the dependence of the musical result on the initial conditions. This paper explores the possibility of producing musically interesting variations on a given melodic phrase by changing the initial conditions from which the generating dynamical system is started.
George Gabriel, Susan Stepney
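The state-to-music idea above can be sketched in a few lines. This is a minimal illustrative model, not the authors' exact formulation: the three-vertex network, its activation functions, and the binary-offset pitch mapping are all invented for the example.

```python
# Minimal sketch (not the paper's exact model): a synchronous Boolean
# network whose deterministic state trajectory is read off as MIDI pitches.

def step(state, functions, inputs):
    """Advance one tick: each vertex applies its Boolean activation
    function to the states of its input vertices."""
    return tuple(functions[i](*(state[j] for j in inputs[i]))
                 for i in range(len(state)))

def trajectory(state, functions, inputs, length):
    """Collect the deterministic state sequence from an initial state."""
    states = [state]
    for _ in range(length - 1):
        state = step(state, functions, inputs)
        states.append(state)
    return states

def to_pitches(states, base=60):
    """Hypothetical event mapping: read each state as a binary offset
    from a base MIDI pitch (the paper's event mapping is richer)."""
    return [base + sum(int(bit) << k for k, bit in enumerate(s)) for s in states]

# A tiny three-vertex network: XOR, AND and NAND activation functions.
functions = [lambda a, b: a ^ b, lambda a, b: a & b, lambda a, b: 1 - (a & b)]
inputs = [(1, 2), (0, 2), (0, 1)]
states = trajectory((1, 0, 1), functions, inputs, 8)
pitches = to_pitches(states)
```

Because the system is deterministic, changing only the start state `(1, 0, 1)` yields a different pitch sequence from the same network, which is exactly the source of variation the paper explores.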
Non-photorealistic Rendering with Cartesian Genetic Programming Using Graphics Processing Units
Abstract
A non-photorealistic rendering system implemented with Cartesian genetic programming (CGP) is discussed. The system is based on Baniasadi's NPR system using tree-based GP, but the CGP implementation uses a more economical representation of rendering expressions than the tree-based system. The system borrows its many-objective fitness evaluation scheme, which uses a model of aesthetics, colour testing, and image matching. GPU acceleration of the paint stroke application renders up to 6 times faster than CPU-based renderings. The convergence dynamics of CGP's \(\mu +\lambda \) evolutionary strategy were more unstable than conventional GP runs with large populations. One possible reason is the sensitivity of the smaller \(\mu +\lambda \) population to the many-objective ranking scheme, especially when objectives conflict with each other. This instability is arguably an advantage as an exploratory tool, especially given the subjectivity inherent in evolutionary art.
Illya Bakurov, Brian J. Ross
Construction of a Repertoire of Analog Form-Finding Techniques as a Basis for Computational Morphological Exploration in Design and Architecture
Abstract
The article describes the process of constructing a repertoire of analog form-finding techniques, which can be used in evolutionary computation (i) to compare the techniques with one another and select the most suitable for a project, (ii) to explore forms or shapes in an analog and/or manual way, (iii) as a basis for the development of algorithms in specialized software, or (iv) to understand the physical processes and mathematical procedures of the techniques. To our knowledge, no one has built a repertoire of this nature, since the techniques are scattered across sources from diverse disciplines. Methodologically, the construction process was based on a systematic review of the literature, allowing us to identify 33 techniques in which the principles of bio-inspiration and self-organization, both characteristic of form-finding strategies, are evident. As a result, we present the repertoire structure, composed of six groups of techniques sharing similar physical processes: inflate, group, de-construct, stress, solidify and fold. Subsequently, the repertoire's conceptual, mathematical, and graphical analysis categories are presented. Finally, conclusions on potential applications and research trends of the subject are presented.
Ever Patiño, Jorge Maya
Medical Art Therapy of the Future: Building an Interactive Virtual Underwater World in a Children’s Hospital
Abstract
We are developing an interactive virtual underwater world that aims to reduce stress and boredom in hospitalised children and improve their quality of life, employing an evidence-based design process and techniques from Artificial Life and Human-Computer Interaction. A 3D motion sensing camera tracks the activity of children in front of a wall projection. As they wave their hands, colorful sea creatures paddle closer to say hi and interact with the children.
Ludivine Lechat, Lieven Menschaert, Tom De Smedt, Lucas Nijs, Monica Dhar, Koen Norga, Jaan Toelen
Expressive Piano Music Playing Using a Kalman Filter
Abstract
In this paper, we present an algorithm that uses the Kalman filter to combine simple phrase structure models with observed differences in pitch within the phrase, refining the phrase model and hence adjusting the loudness and tempo qualities of the melody line. We show how similar adjustments may be made to the accompaniment to introduce expressive attributes to a MIDI file representation of a score. We also show that listeners in our evaluation had some difficulty distinguishing between the resulting expressive renderings and human performances of the same score.
Alexandra Bonnici, Maria Mifsud, Kenneth P. Camilleri
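The filtering step can be sketched as a scalar Kalman filter. This is an assumption-laden stand-in, not the authors' formulation: the random-walk phrase model, the noise levels `q` and `r`, and the example "loudness cues" are all invented for illustration.

```python
# Scalar Kalman-filter sketch under simplifying assumptions (random-walk
# phrase model, made-up noise levels) -- not the paper's exact model.

def kalman_1d(observations, q=0.01, r=0.5, x0=0.0, p0=1.0):
    """Track a latent expressive level x_t from noisy per-note cues z_t.
    State model: x_t = x_{t-1} + noise(q); observation: z_t = x_t + noise(r)."""
    x, p = x0, p0
    estimates = []
    for z in observations:
        p = p + q              # predict: uncertainty grows along the phrase
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)    # update with the innovation z - x
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Hypothetical loudness cues derived from pitch differences in a phrase:
# rising leaps suggest a crescendo toward the phrase peak.
obs = [0.2, 0.5, 0.4, 0.9, 1.1, 1.0, 1.4]
smooth = kalman_1d(obs)
```

The smoothed estimate follows the overall crescendo while damping note-to-note jitter, which is the kind of refined phrase contour that can then drive loudness and tempo adjustments.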
Generative Solid Modelling Employing Natural Language Understanding and 3D Data
Abstract
The paper describes an experimental system for generating 3D-printable models inspired by arbitrary textual input. Utilizing a transliteration pipeline, the system pivots on Natural Language Understanding technologies and 3D data available via online repositories to produce a bag of retrieved 3D models that are then concatenated into original designs. Such artefacts celebrate a post-digital kind of objecthood, as they are concretely physical while at the same time incorporating the cybernetic encodings of their own making. Twelve individuals were asked to reflect on some of the 3D-printed physical artefacts. Their responses suggest that the created artefacts succeed in triggering imagination and in accelerating moods and narratives of various sorts.
Marinos Koutsomichalis, Björn Gambäck
evoExplore: Multiscale Visualization of Evolutionary Histories in Virtual Reality
Abstract
evoExplore is a system built for virtual reality (VR) and designed to assist evolutionary design projects. Built with the Unity 3D game engine and designed with future development and expansion in mind, evoExplore allows the user to review and visualize data collected from evolutionary design experiments. Expanding upon existing work, evoExplore provides the tools needed to breed evolving populations of designs, save the results of such evolutionary experiments, and then visualize the recorded data as an interactive VR experience. evoExplore allows users to dynamically explore their own evolutionary experiments, as well as those produced by other users. In this document we describe the features of evoExplore, its use of virtual reality, and how it supports future development and expansion.
Justin Kelly, Christian Jacob
Musical Organisms
A Generative Approach to Growing Musical Scores
Abstract
In this paper, we describe the creation of Musical Organisms using a novel generative music composition approach modeled on biological evolutionary and developmental (Evo Devo) processes. We are interested in using the Evo Devo processes that generate biological organisms with diverse and interesting structures—structures that exhibit modularity, repetition, and hierarchy—in order to create diverse music compositions that exhibit these same structural properties. The current focus of our work has been on Musical Organism development. Our Musical Organisms are musical scores that develop from a single musical note, just as a biological organism develops from a single cell. We describe the musical genome and the non-linear dynamical processes that drive the development of the Musical Organism from single note to complex musical score. While the evolution of Musical Organisms has not been our central focus, we describe how evolution can act upon genomic variation within populations of Musical Organisms to create new Musical Organism species with diverse and complex structures. Finally, we introduce the concept of genome embedding as a unique method for generating genetic variation in a population and for developing structural modularity in Musical Organisms.
Anna Lindemann, Eric Lindemann
Generating Drum Rhythms Through Data-Driven Conceptual Blending of Features and Genetic Algorithms
Abstract
Conceptual blending allows the emergence of new conceptual spaces by blending two input spaces. Using conceptual blending to invent new concepts has proven a promising technique for computational creativity. In music especially, recent work has shown that proper representations of the input spaces allow the generation of consistent and sometimes surprising blends. This paper proposes a novel approach to conceptual blending through the combination of higher-level features extracted from data; the field of application is drum rhythms. In this methodology, each input rhythm is represented by 32 extracted features. After the generic space of similar features is computed, a simple amalgam-based methodology creates a blended set containing, as equally as possible, the most salient features from each input. This blended set of features acts as the target vector for a Genetic Algorithm that outputs the rhythm best capturing the blended features; this rhythm is called the blended rhythm. The salience of each feature in each rhythm in the database of input rhythms is computed from data and reflects the uniqueness of that feature. Preliminary results shed some light on how feature blending works for the generation of drum rhythms, and new possible research directions for data-driven feature blending are proposed.
Maximos Kaliakatsos-Papakostas
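The final GA stage can be sketched as a search for a pattern matching a blended feature target. This is a toy stand-in: the two features below (onset density, off-beat ratio) and the GA settings are hypothetical substitutes for the 32 features and the configuration used in the paper.

```python
import random

# Toy sketch of the final GA stage: find a 16-step drum pattern whose
# feature vector matches a blended target. Both features are invented
# stand-ins for the paper's 32 extracted features.

random.seed(7)
STEPS = 16

def features(pattern):
    """Onset density and the fraction of onsets falling off the beat."""
    onsets = sum(pattern)
    offbeat = sum(b for i, b in enumerate(pattern) if i % 4 != 0)
    return (onsets / STEPS, offbeat / max(onsets, 1))

def fitness(pattern, target):
    """Negative squared distance to the blended feature target."""
    return -sum((f - t) ** 2 for f, t in zip(features(pattern), target))

def evolve(target, pop_size=40, generations=60):
    pop = [[random.randint(0, 1) for _ in range(STEPS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, target), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, STEPS)    # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:           # point mutation
                i = random.randrange(STEPS)
                child[i] = 1 - child[i]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: fitness(p, target))

# A blended target: e.g. the density of one input rhythm combined with
# the off-beat feel of another.
best = evolve(target=(0.5, 0.5))
```

The returned pattern is the "blended rhythm": the individual whose features come closest to the target vector assembled from the two inputs.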
RoboJam: A Musical Mixture Density Network for Collaborative Touchscreen Interaction
Abstract
RoboJam is a machine-learning system for generating music that assists users of a touchscreen music app by performing responses to their short improvisations. This system uses a recurrent artificial neural network to generate sequences of touchscreen interactions and absolute timings, rather than high-level musical notes. To accomplish this, RoboJam’s network uses a mixture density layer to predict appropriate touch interaction locations in space and time. In this paper, we describe the design and implementation of RoboJam’s network and how it has been integrated into a touchscreen music app. A preliminary evaluation analyses the system in terms of training, musical generation and user interaction.
Charles Patrick Martin, Jim Torresen
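The generation step of a mixture density output can be sketched as weighted sampling from Gaussian components. The component weights, means, and spreads below are invented for illustration; in RoboJam these parameters are produced by the network's mixture density layer over touch position and timing.

```python
import random

# Hedged sketch: sampling one touch event (x, y, dt) from mixture-density
# parameters. All numbers here are hypothetical, not RoboJam's outputs.

random.seed(5)

def sample_touch(weights, means, stds):
    """Pick a mixture component by weight, then draw (x, y, dt) from it."""
    r, acc = random.random(), 0.0
    for w, mu, sd in zip(weights, means, stds):
        acc += w
        if r <= acc:
            return tuple(random.gauss(m, s) for m, s in zip(mu, sd))
    # Guard against floating-point underrun: fall back to the last component.
    return tuple(random.gauss(m, s) for m, s in zip(means[-1], stds[-1]))

# Two hypothetical components: frequent taps near the screen centre and
# occasional quick taps near a corner.
weights = [0.7, 0.3]
means = [(0.5, 0.5, 0.10), (0.9, 0.1, 0.05)]
stds = [(0.05, 0.05, 0.02), (0.02, 0.02, 0.01)]
touches = [sample_touch(weights, means, stds) for _ in range(1000)]
```

Predicting a distribution and sampling from it, rather than predicting a single point, is what lets such a model produce varied yet plausible touch locations and timings.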
Towards a General Framework for Artistic Style Transfer
Abstract
In recent times, artificial intelligence has become more sophisticated in the creation of fine art. Especially in the area of painting, artificial methods have reached a new level of maturity in replicating perceptual quality. These systems are able to separate the style and content of given images, enabling them to recombine and mutate these facets to create novel content. This work defines a general framework for conducting artistic style transfer, allowing recombination and structured modification of state-of-the-art algorithms for further investigation and profiling of artistic style transfer.
Florian Uhde, Sanaz Mostaghim
Adaptive Interface for Mapping Body Movements to Sounds
Abstract
Contemporary digital musical instruments allow an abundance of means to generate sound. Although superior to traditional instruments in terms of producing a unique audio-visual act, there is still an unmet need for digital instruments that allow performers to generate sounds through movements in an intuitive manner. One of the key factors for an authentic digital music act is a low latency between movements (user commands) and the corresponding sounds. Here we present such a low-latency interface that maps the user's kinematic actions to sound samples. The interface relies on wireless sensor nodes equipped with inertial measurement units and a real-time algorithm dedicated to the early detection and classification of a variety of movements/gestures performed by a user. The core algorithm is based on the approximate inference of a hierarchical generative model with piecewise-linear dynamical components. Importantly, the model's structure is derived from a set of motion gestures. The performance of the Bayesian algorithm was compared against the k-nearest neighbors (k-NN) algorithm, which, in a pre-testing phase, showed the highest classification accuracy among several existing state-of-the-art algorithms. In almost all of the evaluation metrics, the proposed probabilistic algorithm outperformed the k-NN algorithm.
Dimitrije Marković, Nebojša Malešević
On Collaborator Selection in Creative Agent Societies: An Evolutionary Art Case Study
Abstract
We study how artistically creative agents may learn to select favorable collaboration partners. We consider a society of creative agents with varying skills and aesthetic preferences able to interact with each other by exchanging artifacts or through collaboration. The agents exhibit interaction awareness by modeling their peers and make decisions about collaboration based on the learned peer models. To test the peer models, we devise an experimental collaboration process for evolutionary art, where two agents create an artifact by evolving the same artifact set in turns. In an empirical evaluation, we focus on how effective peer models are in selecting collaboration partners and compare the results to a baseline where agents select collaboration partners randomly. We observe that peer models guide the agents to more beneficial collaborations.
Simo Linkola, Otto Hantula
Towards Partially Automatic Search of Edge Bundling Parameters
Abstract
Edge bundling methods are used in flow maps and graphs to reduce the visual clutter generated when representing complex and heterogeneous data. Nowadays, many edge bundling algorithms have been successfully applied to a wide range of problems in graph representation. However, the majority of these methods are still difficult for experts from other areas to use and apply to real-world problems. This is due to the complexity of the algorithms and the concepts behind them, as well as a strong dependence on their parametrization. In addition, the majority of edge bundling methods need to be fine-tuned when applied to different datasets. This paper presents a new approach that helps find near-optimal parameters for edge bundling algorithms, regardless of the configuration of the input graph. Our method is based on evolutionary computation, allowing users to find edge bundling solutions for their needs. In order to understand the effectiveness of the evolutionary algorithm in this kind of task, we performed experiments with automatic fitness functions, as well as with partially user-guided evolution. We tested our approach on the optimization of the parameters of two different edge bundling algorithms. Results are compared using objective criteria and through a critical discussion of the obtained graphical solutions.
Evgheni Polisciuc, Filipe Assunção, Penousal Machado
Co-evolving Melodies and Harmonization in Evolutionary Music Composition
Abstract
The paper describes a novel multi-objective evolutionary algorithm implementation that generates short musical ideas consisting of a melody and abstract harmonization, developed in tandem. The system is capable of creating these ideas based on provided material or autonomously. Three automated fitness features were adapted to the model to evaluate the generated music during evolution, and a fourth was developed to ensure harmonic progression. Four rhythmical pattern matching features were also developed. Twenty-one pieces of music, produced by the system under various configurations, were evaluated in a user study. The results indicate that the system is capable of composing musical ideas that are subjectively interesting and pleasant, but not consistently so.
Olav Olseng, Björn Gambäck
Learning as Performance: Autoencoding and Generating Dance Movements in Real Time
Abstract
This paper describes the technology behind a performance where human dancers interact with an "artificial" performer projected on a screen. The system learns movement patterns from the human dancers in real time. It can also generate novel movement sequences that go beyond what it has been taught, thereby serving as a source of inspiration for the human dancers, challenging their habits and normal boundaries and enabling a mutual exchange of movement ideas. It is central to the performance concept that the system's learning process is perceivable to the audience. To this end, an autoencoder neural network is trained in real time with motion data captured live on stage. As training proceeds, a "pose map" emerges that the system explores in a kind of improvisational state. The paper shows how this method is applied in the performance, and shares observations and lessons learned in the process.
Alexander Berman, Valencia James
Deep Interactive Evolution
Abstract
This paper describes an approach that combines generative adversarial networks (GANs) with interactive evolutionary computation (IEC). While GANs can be trained to produce lifelike images, they are normally sampled randomly from the learned distribution, providing limited control over the resulting output. On the other hand, interactive evolution has shown promise in creating various artifacts such as images, music and 3D objects, but traditionally relies on a hand-designed evolvable representation of the target domain. The main insight in this paper is that a GAN trained on a specific target domain can act as a compact and robust genotype-to-phenotype mapping (i.e. most produced phenotypes do resemble valid domain artifacts). Once such a GAN is trained, the latent vector given as input to the GAN’s generator network can be put under evolutionary control, allowing controllable and high-quality image generation. In this paper, we demonstrate the advantage of this novel approach through a user study in which participants were able to evolve images that strongly resemble specific target images.
Philip Bontrager, Wending Lin, Julian Togelius, Sebastian Risi
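The core loop of putting a latent vector under evolutionary control can be sketched compactly. To keep the example self-contained, both the trained GAN generator and the human making selections are replaced by stand-in functions, which is a loud simplification of the paper's setup.

```python
import math, random

# Minimal sketch of latent-vector evolution. The trained generator G(z)
# and the user's interactive choices are both replaced by stand-ins here.

random.seed(1)
DIM = 8

def generator(z):
    """Stand-in for G(z): a fixed smooth map from latent vector to 'image'."""
    return [math.tanh(v) for v in z]

def mutate(z, sigma=0.3):
    """Gaussian perturbation of the latent vector."""
    return [v + random.gauss(0, sigma) for v in z]

def preference(phenotype, target):
    """Stand-in for the user's choice: closeness to a target image."""
    return -sum((p - t) ** 2 for p, t in zip(phenotype, target))

target = generator([0.5] * DIM)                  # the image the user "wants"
population = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(16)]
for _ in range(40):
    population.sort(key=lambda z: preference(generator(z), target), reverse=True)
    elite = population[:4]                       # the user's selections
    population = elite + [mutate(random.choice(elite)) for _ in range(12)]

best = max(population, key=lambda z: preference(generator(z), target))
```

Because every latent vector decodes to a plausible phenotype, mutation and selection operate in a space where nearly all offspring are valid domain artifacts, which is the paper's central insight.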
The Light Show: Flashing Fireflies Gathering and Flying over Digital Images
Abstract
Computational generative art has been inspired by the complex collective tasks performed by social insects such as ants, which are able to coordinate through local interactions and simple stochastic behavior. In this paper we present the Light Show, an application of the flash-synchronization mechanism exhibited by some species of fireflies. The virtual fireflies of the Light Show gather and fly over digital readymades, self-choreographing the rhythm of illumination of their artistic habitats. We present a standard model with design parameters able to control synchronization, as well as a variation able to exhibit clusters of synchrony at different phases that grow, fight, disappear or win, illuminating different parts of a digital image in an animated process.
Paulo Urbano
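Flash synchronization can be sketched with a classic stand-in, the Kuramoto model of coupled oscillators; this is not the paper's specific firefly model or its design parameters, only a minimal demonstration that local sine coupling drives scattered flash phases into unison.

```python
import cmath, math, random

# Hedged stand-in: flash synchronization modelled with Kuramoto-coupled
# oscillators. Each phase is one firefly's flash cycle; all fireflies
# share the same natural frequency (so we work in the rotating frame).

random.seed(3)
N, K, DT = 20, 1.0, 0.05
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def step(phases):
    """All-to-all sine coupling pulls each firefly toward the swarm's mean phase."""
    return [(p + DT * (K / N) * sum(math.sin(q - p) for q in phases)) % (2 * math.pi)
            for p in phases]

def order(phases):
    """Kuramoto order parameter: near 0 when flashes are scattered,
    1.0 when the whole swarm flashes in unison."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

r_start = order(phases)
for _ in range(2000):
    phases = step(phases)
r_end = order(phases)
```

The coupling strength `K` plays the role of a synchronization design parameter: lowering it slows or prevents convergence, which is the kind of control knob the paper exposes for choreographing the illumination rhythm.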
Evotype: Towards the Evolution of Type Stencils
Abstract
Typefaces are an essential resource employed by graphic designers. The growing demand for innovative type design increases the need for good technological means to assist the designer in the creation of a typeface. We present an evolutionary computation approach for the generation of type stencils to draw coherent glyphs for different characters. The proposed system employs a Genetic Algorithm to evolve populations of type stencils. The evaluation of each candidate stencil uses a hill-climbing algorithm to search for the best configurations to draw the target glyphs. We study the interplay between legibility, coherence and expressiveness, and show how our framework can be used in practice.
Tiago Martins, João Correia, Ernesto Costa, Penousal Machado
Backmatter
Metadata
Title
Computational Intelligence in Music, Sound, Art and Design
Edited by
Antonios Liapis
Juan Jesús Romero Cardalda
Anikó Ekárt
Copyright Year
2018
Electronic ISBN
978-3-319-77583-8
Print ISBN
978-3-319-77582-1
DOI
https://doi.org/10.1007/978-3-319-77583-8