
2011 | Book

Computers and Games

7th International Conference, CG 2010, Kanazawa, Japan, September 24-26, 2010, Revised Selected Papers

Editors: H. Jaap van den Herik, Hiroyuki Iida, Aske Plaat

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 7th International Conference on Computers and Games, CG 2010, held in Kanazawa, Japan, in September 2010. The 24 papers presented were carefully reviewed and selected for inclusion in this book. They cover a wide range of topics such as Monte-Carlo tree search, proof-number search, the UCT algorithm, scalability, parallelization, opening books, knowledge abstraction, solving games, consultation of players, multi-player games, extraversion, and combinatorial game theory. In addition, a wide range of computer games is dealt with, such as Chinese Checkers, Chinese Chess, Connect6, Go, Havannah, Lines of Action, Pickomino, Shogi, Surakarta, and Yahtzee.

Table of Contents

Frontmatter
Solving Hex: Beyond Humans
Abstract
For the first time, automated Hex solvers have surpassed humans in their ability to solve Hex positions: they can now solve many 9×9 Hex openings. We summarize the methods that attained this milestone, and examine the future of Hex solvers.
Broderick Arneson, Ryan B. Hayward, Philip Henderson
Job-Level Proof-Number Search for Connect6
Abstract
This paper proposes a new approach to proof-number (PN) search, named job-level PN (JL-PN) search, in which each search-tree node is evaluated or expanded by a heavyweight job that normally takes tens of seconds or more. JL-PN search is well suited to parallel processing, since these jobs can be performed independently by remote processors. This paper applies JL-PN search to automatically solving several Connect6 positions, including openings, on desktop grids. For some of these openings, no human expert had so far been able to find a winning strategy. Our experiments also show that the speedups for solving the test positions are roughly linear, fluctuating from sublinear to superlinear. Hence, JL-PN search appears to be a quite promising approach to solving games.
I-Chen Wu, Hung-Hsuan Lin, Ping-Hung Lin, Der-Johng Sun, Yi-Chih Chan, Bo-Ting Chen
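The job-level idea builds on classical proof-number search. The following minimal sketch, with a toy `Node` class of our own devising (not the paper's data structures), shows the standard bottom-up propagation of proof and disproof numbers on an AND/OR tree; JL-PN additionally dispatches each node evaluation or expansion as a heavyweight job on a desktop grid.

```python
# Minimal proof-number search bookkeeping (illustrative sketch only).
INF = float("inf")

class Node:
    def __init__(self, is_or, children=None, proven=None):
        self.is_or = is_or               # True: OR node (attacker to move)
        self.children = children or []
        # proven: True (win), False (loss), or None (unknown leaf)
        if proven is True:
            self.pn, self.dn = 0, INF    # proved: nothing left to prove
        elif proven is False:
            self.pn, self.dn = INF, 0    # disproved
        else:
            self.pn, self.dn = 1, 1      # unknown leaf: one node to (dis)prove

def update(node):
    """Recompute proof/disproof numbers bottom-up over the whole tree."""
    for c in node.children:
        update(c)
    if node.children:
        if node.is_or:                   # OR: prove any child, disprove all
            node.pn = min(c.pn for c in node.children)
            node.dn = sum(c.dn for c in node.children)
        else:                            # AND: prove all children, disprove any
            node.pn = sum(c.pn for c in node.children)
            node.dn = min(c.dn for c in node.children)
```

A root with one disproved child and one child whose subgoals are all proved ends up with proof number 0, i.e. solved.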
Evaluation-Function Based Proof-Number Search
Abstract
This article introduces Evaluation-Function based Proof-Number Search (EF-PN) and its second-level variant EF-PN2. It is a framework for initializing the proof and disproof numbers of a leaf node with the help of a heuristic evaluation function. Experiments in LOA and Surakarta show that, compared to PN and PN2, which use mobility to initialize the proof and disproof numbers, EF-PN and EF-PN2 take 45% to 85% less time to solve positions. Based on these results, we may conclude that EF-PN and EF-PN2 reduce the search space considerably.
Mark H. M. Winands, Maarten P. D. Schadd
On the Scalability of Parallel UCT
Abstract
The parallelization of MCTS across multiple machines has proven surprisingly difficult. The limitations of existing algorithms were evident at the 2009 Computer Olympiad, where Zen, using a single four-core machine, defeated both Fuego, with ten eight-core machines, and MoGo, with twenty thirty-two-core machines. This paper investigates the limits of parallel MCTS in order to understand why distributed parallelism has proven so difficult and to pave the way towards future distributed algorithms with better scaling. We first analyze the single-threaded scaling of Fuego and find that there is an upper bound on the play-quality improvement that can come from additional search. We then analyze the scaling of an idealized N-core shared-memory machine to determine the maximum amount of parallelism supported by MCTS. We show that parallel speedup depends critically on how much time is given to each player. We use this relationship to predict parallel scaling at time scales beyond what can be evaluated empirically, owing to the immense computation required. Our results show that MCTS can scale nearly perfectly to at least 64 threads when combined with virtual loss, but that without virtual loss scaling is limited to just eight threads. We also find that, at competition time controls, scaling to thousands of threads fails not necessarily because MCTS itself does not scale, but because high levels of parallelism start to bump up against the upper performance bound of Fuego itself.
Richard B. Segal
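The virtual-loss device mentioned above can be sketched as follows. This is our own illustrative bookkeeping, not Fuego's implementation: a thread descending through a node temporarily records an extra visit with no reward, so concurrent threads are steered toward other children instead of piling onto the same line.

```python
# Illustrative virtual-loss bookkeeping for tree-parallel MCTS (names ours).
import math
import threading

class Node:
    def __init__(self):
        self.visits = 0
        self.wins = 0.0
        self.virtual = 0                 # outstanding virtual losses
        self.lock = threading.Lock()

def ucb_score(node, parent_visits, c=1.4):
    """UCB1 value; virtual losses inflate the visit count but add no wins."""
    n = node.visits + node.virtual
    if n == 0:
        return math.inf
    return node.wins / n + c * math.sqrt(math.log(parent_visits) / n)

def descend(node):
    with node.lock:
        node.virtual += 1                # claim: looks like a lost playout

def backup(node, reward):
    with node.lock:
        node.virtual -= 1                # release the virtual loss
        node.visits += 1
        node.wins += reward
```

After `descend`, the node's UCB score drops, which is exactly the diversification effect the paper credits for near-perfect scaling to 64 threads.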
Scalability and Parallelization of Monte-Carlo Tree Search
Abstract
Monte-Carlo Tree Search is now a well-established algorithm, in games and beyond. We analyze its scalability, and in particular its limitations and their implications for parallelization. We focus on our Go program MoGo and our Havannah program Shakti, using both multicore machines and message-passing machines. For both games and on both types of machine we achieve adequate efficiency for the parallel version. However, in spite of promising results in self-play, there are situations in which increasing the time per move does not help. Parallelization is therefore not a solution to all our problems. Nonetheless, for problems where the Monte-Carlo part is less biased than in the game of Go, parallelization should be quite efficient, even without shared memory.
Amine Bourki, Guillaume Chaslot, Matthieu Coulm, Vincent Danjean, Hassen Doghmen, Jean-Baptiste Hoock, Thomas Hérault, Arpad Rimmel, Fabien Teytaud, Olivier Teytaud, Paul Vayssière, Ziqin Yu
Biasing Monte-Carlo Simulations through RAVE Values
Abstract
The Monte-Carlo Tree Search algorithm has been successfully applied in various domains. However, its performance heavily depends on the Monte-Carlo part. In this paper, we propose a generic way of improving the Monte-Carlo simulations by using RAVE values, which have already strongly improved the tree part of the algorithm. We prove the generality and efficiency of our approach by showing improvements in two different applications: the game of Havannah and the game of Go.
Arpad Rimmel, Fabien Teytaud, Olivier Teytaud
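For context, the standard use of RAVE inside the tree blends the slow-but-unbiased node mean with the fast-but-biased all-moves-as-first (RAVE) mean, with a weight that decays as real visits accumulate. The sketch below uses one common schedule (the constant `k` and the schedule form are assumptions of this sketch, not taken from the paper, whose contribution is reusing RAVE values inside the playouts themselves):

```python
# Sketch of a standard UCT/RAVE value blend (schedule and k are assumptions).
import math

def rave_value(q_uct, n, q_rave, k=1000):
    """Blend node mean q_uct (n real visits) with AMAF/RAVE mean q_rave.

    beta -> 1 when n is small (trust RAVE), beta -> 0 as n grows (trust UCT).
    """
    beta = math.sqrt(k / (3 * n + k))
    return (1 - beta) * q_uct + beta * q_rave
```

With no real visits the blended value equals the RAVE estimate; after many visits it converges to the node's own mean.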
Computational Experiments with the RAVE Heuristic
Abstract
The Monte-Carlo tree search algorithm Upper Confidence bounds applied to Trees (UCT) has become extremely popular in computer games research. The Rapid Action Value Estimation (RAVE) heuristic is a strong estimator that often improves the performance of UCT-based algorithms. However, there are situations where RAVE misleads the search whereas pure UCT search can find the correct solution. Two games, the simple abstract game Sum of Switches (SOS) and the game of Go, are used to study the behavior of the RAVE heuristic. In SOS, RAVE updates are manipulated to mimic game situations where RAVE misleads the search. Such false RAVE updates are used to create RAVE overestimates and underestimates. A study of the distributions of mean and RAVE values reveals great differences between Go and SOS. While the RAVE-max update rule is able to correct extreme cases of RAVE underestimation, it is not effective in settings closer to practice, nor in Go.
David Tom, Martin Müller
Monte-Carlo Simulation Balancing in Practice
Abstract
Simulation balancing is a new technique to tune parameters of a playout policy for a Monte-Carlo game-playing program. So far, this algorithm had only been tested in a very artificial setting: it was limited to 5×5 and 6×6 Go, and required a stronger external program that served as a supervisor. In this paper, the effectiveness of simulation balancing is demonstrated in a more realistic setting. A state-of-the-art program, Erica, learned an improved playout policy on the 9×9 board, without requiring any external expert to provide position evaluations. The evaluations were collected by letting the program analyze positions by itself. The previous version of Erica learned pattern weights with the minorization-maximization algorithm. Thanks to simulation balancing, its playing strength was improved from a winning rate of 69% to 78% against Fuego 0.4.
Shih-Chieh Huang, Rémi Coulom, Shun-Shii Lin
Score Bounded Monte-Carlo Tree Search
Abstract
Monte-Carlo Tree Search (MCTS) is a successful algorithm used in many state-of-the-art game engines. We propose to improve an MCTS solver when a game has more than two outcomes, for example games that can end in a draw. In this case, taking into account bounds on the possible scores of a node when selecting the nodes to explore significantly improves an MCTS solver. We apply our algorithm to solving Seki in the game of Go and to Connect Four.
Tristan Cazenave, Abdallah Saffidine
Improving Monte–Carlo Tree Search in Havannah
Abstract
Havannah is a game played on a hexagonal board of hexagons, where the base of the board typically ranges from four to ten hexagons. The game is known to be difficult to program. We study an MCTS-based approach to programming Havannah using our program Wanderer. We experiment with five enhancements to the basic MCTS algorithm and demonstrate that, at normal time controls of approximately 30 seconds per move, Wanderer makes quite strong moves with bases of size four or five, and plays a reasonable game with bases of size six or seven. At longer time controls (ten minutes per move) Wanderer (1) appears to play nearly perfectly with base four, (2) is difficult for most humans to beat at base five, and (3) gives a good game at bases six and seven. Future research will focus on larger board sizes.
Richard J. Lorentz
Node-Expansion Operators for the UCT Algorithm
Abstract
Recent work on the MCTS and UCT framework in the domain of Go has focused on introducing knowledge into the playouts and on pruning variations from the tree, but so far node expansion has not been investigated. In this paper we show that delaying expansion according to the number of siblings delivers a gain of more than 92% compared to normal expansion. We propose three improvements: one that uses domain knowledge and two that are domain-independent. Experimental results show that all advanced operators significantly improve UCT performance compared to basic delayed expansion. From the results we may conclude that the new expansion operators are an appropriate means to improve the UCT algorithm.
Takayuki Yajima, Tsuyoshi Hashimoto, Toshiki Matsui, Junichi Hashimoto, Kristian Spoerer
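The basic delayed-expansion rule can be sketched as follows. The threshold, `Node` class, and toy move generator are our own illustrative choices (the paper's operators make the delay depend on the number of siblings and on domain knowledge): a leaf is expanded into child nodes only after enough playouts have passed through it, so memory and tree growth concentrate where simulations do.

```python
# Illustrative delayed node expansion for UCT (threshold is an assumption).
class Node:
    def __init__(self, move=None):
        self.move = move
        self.visits = 0
        self.children = None          # None = leaf, not yet expanded

    def legal_moves(self):
        return ["a", "b", "c"]        # toy move generator for the sketch

def maybe_expand(node, threshold=8):
    """Expand a leaf only once its visit count reaches the delay threshold."""
    if node.children is None and node.visits >= threshold:
        node.children = [Node(m) for m in node.legal_moves()]
    return node.children is not None
```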
Monte-Carlo Opening Books for Amazons
Abstract
Automatically creating opening books is a natural step towards building strong game-playing programs, especially when there is little available knowledge about the game. However, while recent popular Monte-Carlo Tree-Search programs have shown strong results in various games, we show here that programs based on such methods cannot efficiently use opening books created with minimax-based algorithms. To overcome this issue, we propose to use an MCTS-based technique, Meta-MCTS, to create such opening books. This method, while requiring some tuning to arrive at the best opening book possible, shows promising results for creating an opening book for the game of the Amazons, even if this comes at the cost of removing its Monte-Carlo part.
Julien Kloetzer
A Principled Method for Exploiting Opening Books
Abstract
In the past we used a great deal of computational power and human expertise to store a rather big dataset of good 9x9 Go games in order to build an opening book. We improved the algorithm used for generating and storing these games considerably. However, the results were not very robust, as (i) opening books are definitely not transitive, making non-regression testing extremely difficult, (ii) different time settings lead to opposite conclusions, because a good opening for a game with 10s per move on a single core is quite different from a good opening for a game with 30s per move on a 32-core machine, and (iii) some very bad moves still occur occasionally. In this paper, we formalize the optimization of an opening book as a matrix game, compute the Nash equilibrium, and conclude that a naturally randomized opening book provides optimal performance (in the sense of Nash equilibria). Moreover, our research shows that, from a finite set of opening books, we can choose a distribution over these opening books so that the resulting randomly constructed opening book performs significantly better than each of the deterministic opening books.
Romaric Gaudel, Jean-Baptiste Hoock, Julien Pérez, Nataliya Sokolovska, Olivier Teytaud
A Human-Computer Team Experiment for 9x9 Go
Abstract
Monte-Carlo Tree Search has given computer Go a significant boost in strength in the past few years, but progress seems to have slowed, and once again we have to ask ourselves how computers can make effective use of ever-increasing computing power. In 2002, we started a human-computer team experiment with very long thinking times and no restrictions on the procedure, to see how strong such a team could be. We introduce our experimental method and show the results so far.
Darren Cook
Consultation Algorithm for Computer Shogi: Move Decisions by Majority
Abstract
A new algorithm that runs on a computer with interconnected processors has been designed for Shogi. The algorithm adopts consultation between many individual players. A method that can create multiple players from one program is presented. Applying a simple rule to select a decision on a single move, the consultation algorithm improves the performance of computer Shogi engines. It is also demonstrated that a council system consisting of three well-known Shogi programs (YSS, GPS, and Bonanza) plays better games than any of the three programs individually.
Takuya Obata, Takuya Sugiyama, Kunihito Hoki, Takeshi Ito
Optimistic Selection Rule Better Than Majority Voting System
Abstract
A recently proposed ensemble approach to game-tree search has attracted a great deal of attention. The ensemble system consists of M computer players, where each player uses a different series of pseudo-random numbers. A combination of multiple players under the majority voting system would improve the performance of a Shogi-playing computer. We present a new strategy of move selection based on the search values of a number of players. The move decision is made by selecting one player from all M players, referring to the evaluation value of each player's tree search. The performance and mechanism of the strategy are examined. We show that the optimistic selection rule, which selects the player that yields the highest evaluation value, outperforms the majority voting system. By straightforwardly grouping 16 or more computer players, the winning rate of the strongest Shogi programs increases from 50% to 60% or even higher.
Takuya Sugiyama, Takuya Obata, Kunihito Hoki, Takeshi Ito
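The two selection rules contrasted above are simple to state in code. The following sketch uses our own minimal data layout (a list of `(move, evaluation)` pairs, one per searcher), not the paper's implementation:

```python
# Majority voting vs. optimistic selection over M independent searchers.
from collections import Counter

def majority_vote(results):
    """results: list of (move, eval) pairs. Pick the most frequently chosen move."""
    counts = Counter(move for move, _ in results)
    return counts.most_common(1)[0][0]

def optimistic_select(results):
    """Pick the move of the player reporting the highest evaluation value."""
    return max(results, key=lambda r: r[1])[0]
```

The two rules can disagree: a move backed by most searchers at modest evaluations loses under the optimistic rule to a move one searcher rates very highly.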
Knowledge Abstraction in Chinese Chess Endgame Databases
Abstract
Retrograde analysis is a well-known approach to constructing endgame databases. However, the endgame databases are too large to be loaded into the main memory of a computer during tournaments. In this paper, a novel knowledge-abstraction strategy is proposed to compress endgame databases. The goal is to obtain succinct knowledge for practical endgames. A specialized goal-oriented search method is described and applied to the important endgame KRKNMM. Combining a search algorithm with a small amount of knowledge makes it possible to handle endgame positions up to a limited depth, but with a high degree of correctness.
Bo-Nian Chen, Pangfeng Liu, Shun-Chin Hsu, Tsan-sheng Hsu
Rook Jumping Maze Design Considerations
Abstract
We define the Rook Jumping Maze, provide historical perspective, and describe a generation method for such mazes. When applying stochastic local search algorithms to maze design, most creative effort concerns the definition of an objective function that rates maze quality. We define and discuss several maze features to consider in such a function definition. Finally, we share our preferred design choices, make design process observations, and note the applicability of these techniques to variations of the Rook Jumping Maze.
Todd W. Neller, Adrian Fisher, Munyaradzi T. Choga, Samir M. Lalvani, Kyle D. McCarty
A Markovian Process Modeling for Pickomino
Abstract
This paper deals with a nondeterministic game based on die rolls and on the "stop or continue" principle: Pickomino. During his turn, each participant has to make the best decisions, first choosing which dice to keep, then choosing between continuing and stopping, depending on the previous rolls and on the available resources. Markov Decision Processes (MDPs) offer the formal framework to model this game. The two main problems are first to determine the set of states, then to compute the transition probabilities.
We provide original solutions to both problems: (1) a compact representation of states and (2) a constructive method to compute the probability distributions, based on partitioning the space of roll results according to a set of marked values. Finally, we show the efficiency of the proposed method through numerous experimental results: it proves impressive compared to previous programs we developed.
Stéphane Cardon, Nathalie Chetcuti-Sperandio, Fabien Delorme, Sylvain Lagrue
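The partitioning idea can be illustrated with a small calculation. In this sketch (our formulation, not the paper's): once a set of face values is "marked", a roll of n dice is summarized only by how many dice land on marked values, so the relevant distribution reduces to a binomial over i.i.d. uniform dice.

```python
# P(exactly k of n fair dice show a marked face), for m marked faces out of 6.
from math import comb

def prob_marked_count(n, k, num_marked_faces, sides=6):
    """Binomial probability: each die independently hits a marked face
    with probability num_marked_faces / sides."""
    p = num_marked_faces / sides
    return comb(n, k) * p**k * (1 - p)**(n - k)
```

Summing over k from 0 to n yields 1, a quick sanity check that the partition covers the whole roll space.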
New Solutions for Synchronized Domineering
Abstract
Cincotti and Iida invented the game of Synchronized Domineering, and analyzed a few special cases. We develop a more general technique of analysis, and obtain results for many more special cases. We obtain complete results for board sizes 3×n, 5×n, 7×n, and 9×n (for n large enough) and partial results for board sizes 2×n, 4×n, and 6×n.
Sahil Bahri, Clyde P. Kruskal
The Lattice Structure of Three-Player Games
Abstract
In combinatorial games, few results are known about the overall structure of three-player games. We prove that three-player games born by day d form a distributive lattice with respect to every partial order relation, but that the collection of all finite three-player games does not form a lattice.
Alessandro Cincotti
Enhancements for Multi-Player Monte-Carlo Tree Search
Abstract
Monte-Carlo Tree Search (MCTS) is becoming increasingly popular for playing multi-player games. In this paper we propose two enhancements for MCTS in multi-player games: (1) Progressive History and (2) the Multi-Player Monte-Carlo Tree Search Solver (MP-MCTS-Solver). We analyze the performance of these enhancements in two different multi-player games: Focus and Chinese Checkers. Based on the experimental results we conclude that Progressive History is a considerable improvement in both games, and that MP-MCTS-Solver, using the standard update rule, is a genuine improvement in Focus.
J. (Pim) A. M. Nijssen, Mark H. M. Winands
Nearly Optimal Computer Play in Multi-player Yahtzee
Abstract
Yahtzee is the most popular commercial dice game in the world. It can be played either by one or by many players. For the single-player version, optimal computer strategies both for maximizing the expected average score and for maximizing the probability of beating a particular score are already known. For the multi-player version, however, those approaches are far too resource-intensive to yield an optimal strategy on current hardware.
This paper presents the first in-depth analysis of the multi-player version of Yahtzee. Our proposed implementation of an optimal strategy for the single-player version significantly speeds up the calculations. The resources necessary to memorize the optimal strategy for a two-player game are precisely estimated. It is shown that developing an optimal strategy for more players is not possible with current technology. For this case, a heuristic strategy is suggested. By means of experiments designed especially for this purpose, we show that in practice this strategy is indistinguishable from the optimal one.
An experimental analysis of the actual advantage of the optimal strategy over suboptimal opponents such as humans has also been conducted. The results show that Yahtzee is largely a game of chance and that the advantage of the optimal strategy is insignificant.
Jakub Pawlewicz
Extraversion in Games
Abstract
The behavior of a human player in a game expresses the personality of that player. Personality is an important characteristic for modeling the player's profile. In our research we use the five-factor model of personality, in which extraversion is a notable factor. Extraversion is the human tendency to be sensitive to rewards, which often results in humans seeking socially rewarding situations. Extraversion plays a prominent part in the in-game behavior of a player. This in-game behavior can be decomposed into 20 different in-game elements.
In this paper, we investigate which in-game elements influence in-game behavior when looking at extraversion only. To answer this question we performed two experiments. The outcome is indicative: variation in behavior caused by extraversion is seen in 12 of the 20 elements that span the 20-dimensional space. Future research will focus on (1) in-game behavior correlated with the other factors and (2) the conditions under which more elements can be added to the characterization of extraversion.
Giel van Lankveld, Sonny Schreurs, Pieter Spronck, Jaap van den Herik
Backmatter
Metadata
Title
Computers and Games
Editors
H. Jaap van den Herik
Hiroyuki Iida
Aske Plaat
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-17928-0
Print ISBN
978-3-642-17927-3
DOI
https://doi.org/10.1007/978-3-642-17928-0
