
2008 | Book

New Frontiers in Artificial Intelligence

JSAI 2007 Conference and Workshops, Miyazaki, Japan, June 18-22, 2007, Revised Selected Papers

Editors: Ken Satoh, Akihiro Inokuchi, Katashi Nagao, Takahiro Kawamura

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

The technology of artificial intelligence is increasing in importance thanks to the rapid growth of the Internet and computer technology. In Japan, the annual conference series of JSAI (The Japanese Society for Artificial Intelligence) has been playing a leading role in promoting AI research, and selected papers from the annual conferences have been published in the LNAI series since 2003. This book consists of award papers from the 21st annual conference of JSAI (JSAI 2007) and selected papers from the four co-located workshops. Seven papers were awarded among more than 335 presentations at the conference, and 24 papers were selected from a total of 48 presentations in the co-located workshops: Logic and Engineering of Natural Language Semantics 2007 (LENLS 2007), the International Workshop on Risk Informatics (RI 2007), the 5th Workshop on Learning with Logics and Logics for Learning (LLLL 2007), and the 1st International Workshop on Juris-informatics (JURISIN 2007). The award papers from JSAI 2007 underwent a rigorous selection process. First, recommendations were made by three people (the session chair, the session commentator, and one PC member) in each session; the recommended papers were then carefully reviewed and voted on by PC members for final selection.

Table of Contents

Frontmatter

Awarded Papers

Frontmatter
Overview of Awarded Papers: The 21st Annual Conference of JSAI

This chapter features seven awarded papers, selected from JSAI 2007, the 21st annual conference of the Japanese Society for Artificial Intelligence. These awarded papers are truly excellent, as they were chosen out of 335 papers, for a selection rate of just about two per cent, with approximately ninety reviewers involved in the selection. This selection rate was lower than in typical years, when it has been three to four per cent.

Katashi Nagao
Modeling Human-Agent Interaction Using Bayesian Network Technique

Task manipulation is direct evidence of understanding, and speakers adjust their utterances in progress by monitoring the listener's task manipulation. Aiming at developing animated agents that control multimodal instruction dialogues by monitoring users' task manipulation, this paper presents a probabilistic model of fine-grained timing dependencies among multimodal communication behaviors. Our preliminary evaluation demonstrated that the model quite accurately judges whether the user understands the agent's utterances and predicts the user's successful mouse manipulation, suggesting that the model is useful in estimating user understanding and can be applied to determining the next action of an agent.

Yukiko Nakano, Kazuyoshi Murata, Mika Enomoto, Yoshiko Arimoto, Yasuhiro Asa, Hirohiko Sagawa
Analysis and Design Methodology for Product-Based Services

Recently, manufacturing companies have been moving into service businesses in addition to providing their own products. However, engineers in manufacturing companies do not find creating new service businesses easy because their work-related skills, understanding of design processes, and organizational skills have been developed and optimized for designing products and not services. To design product-based services more effectively and efficiently, systematic design methodologies suitable for engineers are necessary. We have designed a product-based service design methodology called DFACE-SI. This methodology consists of five steps beginning with the generation of service concepts and ending with the description of service business plans. Characteristic features of DFACE-SI include visualization tools that can help stakeholders identify new opportunities and difficulties of the target product-based service. We also applied DFACE-SI to a pilot case study and illustrated its effectiveness.

Naoshi Uchihira, Yuji Kyoya, Sun K. Kim, Katsuhiro Maeda, Masanori Ozawa, Kosuke Ishii
Consideration of Infants’ Vocal Imitation Through Modeling Speech as Timbre-Based Melody

Infants acquire spoken language through hearing and imitating utterances mainly from their parents [1,2,3] but never imitate their parents' voices as they are. What in the voices do the infants imitate? Due to poor phonological awareness, it is difficult for them to decode an input utterance into a string of small linguistic units like phonemes [3,4,5,6], so it is also difficult for them to convert the individual units into sounds with their mouths. What then do infants acoustically imitate? Developmental psychology claims that they extract the holistic sound pattern of an input word, called the word Gestalt [3,4,5], and reproduce it with their mouths. We address the question "What is the acoustic definition of word Gestalt?" [7] It has to be speaker-invariant because infants extract the same word Gestalt for a particular input word irrespective of the person speaking that word to them. Here, we aim to answer the above question by regarding speech as timbre-based melody, focusing on the holistic and speaker-invariant contrastive features embedded in an utterance.

Nobuaki Minematsu, Tazuko Nishimura
Metrics for Evaluating the Serendipity of Recommendation Lists

In this paper we propose the metrics unexpectedness and unexpectedness_r for measuring the serendipity of recommendation lists produced by recommender systems. Recommender systems have been evaluated in many ways. Although prediction quality is frequently measured by various accuracy metrics, recommender systems must be not only accurate but also useful. A few researchers have argued that the bottom-line measure of the success of a recommender system should be user satisfaction. The basic idea of our metrics is that unexpectedness is the distance between the results produced by the method to be evaluated and those produced by a primitive prediction method. Here, unexpectedness is a metric for a whole recommendation list, while unexpectedness_r is the same metric taking into account the ranking in the list. From the viewpoints of both accuracy and serendipity, we evaluated the results obtained by three prediction methods in experimental studies on television program recommendations.

Tomoko Murakami, Koichiro Mori, Ryohei Orihara
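
The abstract's core idea, scoring unexpectedness as a distance from a primitive baseline predictor, can be sketched in a few lines. This is a toy illustration only: the function names, the set-difference form, and the 1/rank weighting are our assumptions, not the paper's exact formulation.

```python
def unexpectedness(recommended, primitive, relevant):
    """Toy serendipity score: the fraction of recommended items that are
    relevant but would NOT also be suggested by a primitive baseline
    predictor. (Illustrative; the paper's definition differs in detail.)"""
    rec, prim, rel = set(recommended), set(primitive), set(relevant)
    unexpected = (rec - prim) & rel
    return len(unexpected) / len(rec) if rec else 0.0

def unexpectedness_r(recommended, primitive, relevant):
    """Rank-aware variant: unexpected relevant items found higher in the
    recommendation list contribute more (here via a 1/rank weight)."""
    prim, rel = set(primitive), set(relevant)
    score, norm = 0.0, 0.0
    for rank, item in enumerate(recommended, start=1):
        w = 1.0 / rank
        norm += w
        if item in rel and item not in prim:
            score += w
    return score / norm if norm else 0.0
```

A list that merely repeats the baseline's suggestions scores zero under both metrics, capturing the intuition that accuracy alone does not imply serendipity.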
Moving Sound Source Extraction by Time-Variant Beamforming

We have developed a time-variant beamforming method that can extract sound signals from moving sound sources. It is difficult to recognize moving sound sources due to their amplitude and frequency distortions caused by the fact that the sources themselves are moving. Using our proposed method, the amplitude and frequency distortions of moving sound sources are precisely equalized so that these sound signals can be extracted. Numerical experiments showed that using our method improves moving sound source extraction. Extracting such sounds is important for successful natural human-robot interaction in a real environment because a robot has to recognize various types of sounds and sound sources.

Hirofumi Nakajima, Kazuhiro Nakadai, Yuji Hasegawa, Hiroshi Tsujino
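
The delay-and-sum principle that such beamformers build on can be shown with a minimal fixed-delay sketch. This is illustrative only: the paper's time-variant method additionally recomputes the delays and equalizes amplitude/frequency distortions frame by frame as the source moves, which this static sketch does not do.

```python
def delay_and_sum(signals, delays):
    """Minimal (time-invariant) delay-and-sum beamformer sketch: shift each
    microphone signal by its integer sample delay so the target source
    aligns across channels, then average. Samples shifted outside the
    signal are treated as silence."""
    n = len(signals[0])
    out = [0.0] * n
    for sig, d in zip(signals, delays):
        for t in range(n):
            out[t] += sig[t - d] if 0 <= t - d < n else 0.0
    return [x / len(signals) for x in out]
```

Aligning the channels reinforces the target source while uncorrelated noise from other directions averages down; a time-variant beamformer makes `delays` a function of time.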
Video Scene Retrieval Using Online Video Annotation

In this paper, we propose an efficient method for extracting scene tags from online video annotation (e.g., comments about video scenes). To evaluate this method by applying extracted information to video scene retrieval, we have developed a video scene retrieval system based on scene tags (i.e., tags associated with video scenes). We have also developed a tag selection system that enables online users to select appropriate scene tags from data created automatically from online video annotation. Furthermore, we performed experiments on tag selection and video scene retrieval. We found that scene tags extracted by using our tag selection system had better cost performance than ones created using a conventional client-side video annotation tool.

Tomoki Masuda, Daisuke Yamamoto, Shigeki Ohira, Katashi Nagao
Spatio-temporal Semantic Map for Acquiring and Retargeting Knowledge on Everyday Life Behavior

Ubiquitous sensing technology and statistical modeling technology are making it possible to conduct scientific research on our everyday lives. These technologies enable us to quantitatively observe and record everyday life phenomena and thus acquire reusable knowledge from the large-scale sensory data. This paper proposes a "Spatio-temporal Semantic (STS) Mapping System," a general framework for modeling human behavior in an everyday life environment. The STS mapping system consists of a wearable sensor for spatially and temporally measuring human behavior in an everyday setting, together with Bayesian network modeling software to acquire and retarget the gathered knowledge on human behavior. We consider this STS mapping system from both the theoretical and practical viewpoints. The theoretical framework describes a behavioral model in terms of a random field or a point process in spatial statistics. The practical aspect of this paper is concerned with a case study in which the proposed system is used to create a new type of playground equipment design that is safer for children, in order to demonstrate the practical effectiveness of the system. In this case study, we studied children's behavior using a wireless wearable location-electromyography sensor developed by the authors, and then constructed a behavioral model from the measured data. The case study shows that everyday life science can be used to improve product designs by measuring and modeling the way a product is used.

Yoshifumi Nishida, Yoichi Motomura, Goro Kawakami, Naoaki Matsumoto, Hiroshi Mizoguchi

Logic and Engineering of Natural Language Semantics

Frontmatter
Overview of Logic and Engineering of Natural Language Semantics (LENLS) 2007

LENLS 2007 was held at the World Convention Center Summit (Phoenix Seagaia Resort), Miyazaki, Japan on June 18 and 19, 2007 as one of the international workshops collocated with the 21st Annual Conference of the Japanese Society for Artificial Intelligence (JSAI 2007). This year the workshop was held for the fourth time, its predecessors having taken place annually. Since it started in 2004, LENLS has developed, both in scale and in the quality of its presentations, into an academic meeting unique in Japan and the surrounding area, at which participants can exchange innovative ideas about dynamic semantics, its related fields, and the application of these theories.

Kei Yoshimoto
Semantic Heterogeneity in Evidentials

In this paper I will examine some data relating to evidential systems in various languages and existing formal semantic analyses of evidential phenomena, with an eye to determining the extent to which available systems are capable of accounting for the full range of facts. Data will be drawn largely from [1] and from the formal semantic works I’ll discuss.

Elin McCready
Acts of Promising in Dynamified Deontic Logic

In this paper, the logic of acts of commanding ECL II introduced in Yamada (2007b) will be extended in order to model acts of promising together with acts of commanding. The effects of both kinds of acts are captured not in terms of the changes they bring about in the propositional attitudes of their addressees, but in terms of the changes they bring about in the deontic status of the relevant action alternatives; they are modeled as deontic updators. This enables us to see how an act of promising performed by one agent and an act of commanding performed by another agent can jointly bring about a conflict of obligations. A complete axiomatization will be presented, and a comparison will be made with Searle's treatment of acts of promising in his argument for the derivability of "ought" from "is".

Tomoyuki Yamada
Dynamic Semantics of Quantified Modal Mu-Calculi and Its Applications to Modelling Public Referents, Speaker’s Referents, and Semantic Referents

A generalized QG-semantics of Quantified Modal Logics (QMLs) is proposed by exploiting Goldblatt & Mares' [30] Quantified General Frame semantics of QMLs to solve the Kripke-incompleteness problem of some QMLs. It is extended by adding formulas of modal mu-calculi to model speaker's referents and public referents. Furthermore, a dynamic semantics of a quantified modal mu-calculus is formalized based on the generalized QG-semantics.

Norihiro Ogata
Inverse Scope as Metalinguistic Quotation in Operational Semantics

We model semantic interpretation operationally: constituents interact as their combination in discourse evolves from state to state. The states are recursive data structures and evolve from step to step by context-sensitive rewriting. These notions of context and order let us explain inverse-scope quantifiers and their polarity sensitivity as metalinguistic quotation of the wider scope.

Chung-chieh Shan
A Multimodal Type Logical Grammar Analysis of Japanese: Word Order and Quantifier Scope

This paper presents an analysis of the interaction of scrambling and quantifier scope in Japanese, based on multimodal type logical grammar [5,6]. In developing the grammar of the language, we will make use of several modes. In particular, we will exploit the continuation mode as used in [2,7]. The concept deals with computational side effects and the evaluation order. After establishing the analysis of simple quantifier cases, we also discuss some morphologically related phenomena such as focus and split-QP construction [8,9], and show how they can be dealt with in our system.

Rui Otake, Kei Yoshimoto
Coordinating and Subordinating Dependencies

This paper focuses on the differences and similarities between coordinating and (distant) subordinating binding dependencies. We consider data suggesting that in natural language these dependencies are implemented with the same underlying mechanism. We look at four (dynamic) systems that capture these dependencies, first with distinct mechanisms, then with a single mechanism.

Alastair Butler
Left-Peripheral and Sentence-Internal Topics in Japanese

In Japanese, information structure is mainly indicated by the use of the topic marker WA. This paper discusses how the use of this particle carries out the so-called Topic-Comment articulation from the viewpoint of the syntax-semantics interface, especially within a version of categorial grammar. Pragmatic characterizations of the topic marker have quite often been addressed in the literature without clarifying its syntactic properties, while the syntactic analysis of topicalization in Japanese theoretical linguistics has been provided only in terms of hierarchical structure and movement. We attend to a syntactic function of the topic marker, as suggested by the term kakari-josi, or concord/coherence particle, which requires an expression marked with the particle WA to show a kind of 'concord' with the sentence-final verb. Semantically, we argue that the topic particle induces information packaging as suggested by Vallduví (1992), yielding tripartite information structures in the sense of Hajičová, Partee & Sgall (1998). We also deal with sentences with sentence-internal topics, cleft constructions and multiple-topic sentences in terms of incremental processing within the categorial proof net approach.

Hiroaki Nakamura
Incremental Processing and Design of a Parser for Japanese: A Dynamic Approach

This paper illustrates a parser which processes Japanese sentences in an incremental fashion based on the Dynamic Syntax framework. In Dynamic Syntax there has basically been no algorithm that optimizes the application of transition rules: as it is, the rules can apply to a current parsing state in an arbitrary way. This paper proposes both partitioned parsing states, allowing easier access to certain kinds of unfixed nodes, and an algorithm for applying transition rules for Japanese. The parser proposed in this paper is implemented in Prolog. It is able to process not only simple sentences but also relative clause constructions, scrambled sentences and complex (embedded) sentences.

Masahiro Kobayashi
Breaking Quotations

Quotation exhibits characteristics of both use and mention. I argue against the recently popular pragmatic reductions of quotation to mere language use (Recanati 1), and in favor of a truly hybrid account synthesizing and extending Potts (2) and Geurts & Maier (3), using a mention logic and a dynamic semantics with presupposition to establish a context-driven meaning shift. The current paper explores a “quote-breaking” extension to solve the problems posed by non-constituent quotation, and anaphora, ellipsis and quantifier raising across quotation marks.

Emar Maier
A Modifier Hypothesis on the Japanese Indeterminate Quantifier Phrase

Shimoyama's [20,21] analysis of the Japanese indeterminate quantifier construction (e.g. dono gakusei-mo odotta, 'Every student danced') is superior to previous analyses in the sense that it closely adheres to the Principle of Compositionality. However, under this analysis the mo-phrase of the form [[...wh...]NP -mo] is analyzed as a generalized quantifier of semantic type <<e,t>,t>, and this gives rise to a type-theoretical problem. In order to overcome the difficulties, this paper proposes an alternative analysis in which the mo-phrase is treated as a modifier of semantic type <<e,t>,<e,t>>. It also discusses sentences in which mo combines with a PP and an IP, and argues that the modifier hypothesis of the mo-phrase can still be maintained there.

Mana Kobuchi-Philip
A Presuppositional Analysis of Definite Descriptions in Proof Theory

In this paper we propose a proof-theoretic analysis of presuppositions in natural language, focusing on the interpretation of definite descriptions. Our proposal is based on the natural deduction system of ε-calculus introduced in Carlström [2] and on constructive type theory [11,12]. Based on the idea in [2], we use the ε-calculus as an intermediate language in the translation process from natural language into constructive type theory. Using this framework, we formulate the process of presupposition resolution as the process of searching for a derivation in a natural deduction system. In particular, we show how to treat presupposition projection and accommodation within our proof-theoretic framework.

Koji Mineshima
Meaning Games

Communication can be accounted for in game-theoretic terms. The meaning game is proposed to formalize intentional communication in which the sender sends a message and the receiver attempts to infer its intended meaning. Using large Japanese and English corpora, the present paper demonstrates that centering theory is derived from a meaning game. This suggests that there are no language-specific rules on referential coherence. More generally speaking, language use seems to employ Pareto-optimal ESSs (evolutionarily stable strategies) of potentially very complex meaning games. There is still much to do before this complexity is elucidated in scientific terms, but game theory provides statistical and analytic means by which to advance the study of the semantics and pragmatics of natural languages and other communication modalities.

Kôiti Hasida, Shun Shiramatsu, Kazunori Komatani, Tetsuya Ogata, Hiroshi G. Okuno

Risk Informatics

Frontmatter
International Workshop on Risk Informatics (RI2007)
- Approach from Data Mining and Statistics -

As the standard of living in our society rises, people pay more attention to social risks in order to keep their lives safe. Under this increasing demand, modern science and engineering now have to provide efficient measures to reduce social risk in various aspects. At the same time, the introduction of information technology into our society has led to the accumulation of a large amount of data on our activities. These data can be used to efficiently manage risks in society. The Workshop on Risk Mining 2006 (RM2006) was held in June 2006 based on this demand and situation, focusing on risk management based on data mining techniques [1,2]. However, the study of risk management has a long history grounded in mathematical statistics, and mathematical statistics is now making remarkable progress in the field of data analysis. This year's successor workshop, the International Workshop on Risk Informatics (RI2007), extended its scope to include risk management through data analysis based on both data mining and mathematical statistics.

Takashi Washio, Shusaku Tsumoto
Chance Discovery in Credit Risk Management
Time Order Method and Directed KeyGraph for Estimation of Chain Reaction Bankruptcy Structure

In this article, the chance discovery method is applied to estimate the structure of chain-reaction bankruptcy. The risk of default can be better forecasted by taking the chain-reaction effect into account. The time order method and the directed KeyGraph are newly introduced to distinguish and express the time order among defaults, which is essential information for the analysis of chain-reaction bankruptcy. The steps of the data analysis are introduced, and the result of an example analysis of default data from Kyushu, Japan, 2005 is presented.

Shinichi Goda, Yukio Ohsawa
Risk Bias Externalization for Offshore Software Outsourcing by Conjoint Analysis

With the steady increase in the volume of software development, most Japanese companies are interested in offshore software outsourcing. In order to capture the know-how of experienced project managers and assess the risk bias introduced by vendor countries and software types, this paper applies the conjoint analysis method to a questionnaire on project preference to externalize this tacit knowledge. After analyzing the range, maximum, minimum and average values of the total utilities of three kinds of properties, we have the following findings: 1) the project property is the main item affecting the success of outsourcing projects, and can lead to big success or severe failure; 2) the risk analysis result for vendors in India differs from that for China, and should be investigated in depth; 3) the risk value of middleware software is lower than that of the other two software types, and should be paid more attention.

Zhongqi Sheng, Masayuki Nakano, Shingo Kubo, Hiroshi Tsuji
Extracting Failure Knowledge with Associative Search

We applied associative OR search on a Failure Knowledge Database with 1,242 failure accidents and 41 failure scenarios in the book “100 Scenarios of Failure” to find cases most analogous to risks that engineers were concerned with. Ninety engineers provided 203 input cases of risk concerns and the search for accidents most analogous to each input returned the most analogous accidents for 64% of the input cases of risk concerns within several minutes. Analogous scenario searches returned the most analogous scenarios for 63% of the input. Regular keyword AND searches take tens of minutes to narrow down the candidates to a few analogous cases, and thus associative search is a more effective tool for risk management.

Masayuki Nakao, Kensuke Tsuchiya, Yoshiaki Harita, Kenji Iino, Hiroshi Kinukawa, Satoshi Kawagoe, Yuji Koike, Akihiko Takano
Data Mining Analysis of Relationship Between Blood Stream Infection and Clinical Background in Patients Undergoing Lactobacillus Therapy

The aim of this study is to analyze, by using data mining, the effects of lactobacillus therapy and background risk factors on blood stream infection in patients. The data were analyzed with the data mining software "ICONS Miner" (Koden Industry Co., Ltd.). Significant "if-then rules" were extracted from the decision tree relating bacteria detection in blood samples to patients' treatments, such as lactobacillus therapy, antibiotics, various catheters, etc. The chi-square test, odds ratio and logistic regression were applied in order to analyze the effect of lactobacillus therapy on bacteria detection. From the odds ratio of lactobacillus absence to lactobacillus presence, the bacteria detection risk under lactobacillus absence was about 2 (95% CI: 1.57-2.99). The significant "if-then rules", chi-square test, odds ratio and logistic regression showed that lactobacillus therapy might be a significant factor in the prevention of blood stream infection. Our study suggests that lactobacillus therapy may be effective in reducing the risk of blood stream infection. Data mining is useful for extracting background risk factors of blood stream infection from our clinical database.

Kimiko Matsuoka, Shigeki Yokoyama, Kunitomo Watanabe, Shusaku Tsumoto
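
The reported effect size (odds ratio about 2, 95% CI 1.57-2.99) is a standard 2x2-table computation, which can be sketched as follows. The counts in the usage example are invented for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Wald confidence interval.
    a, b = outcome present / absent in the exposed group;
    c, d = outcome present / absent in the unexposed group.
    z = 1.96 gives an approximate 95% interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, e.g. 20/80 infections without lactobacillus
# versus 10/90 with it:
print(odds_ratio_ci(20, 80, 10, 90))
```

A confidence interval excluding 1 would indicate a statistically significant association at the chosen level.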
Discovery of Risky Cases in Chronic Diseases: An Approach Using Trajectory Grouping

This paper presents an approach to finding risky cases in chronic diseases using a trajectory grouping technique. Grouping of trajectories of hospital laboratory examinations is still a challenging task, as it requires comparison of data with multidimensionality and temporal irregularity. Our method first maps a set of time series containing different types of laboratory tests into directed trajectories representing the time course of patient states. Then the trajectories for individual patients are compared at multiple scales and grouped into similar cases. Experimental results on the chronic hepatitis data demonstrated that the method could find groups of descending trajectories that corresponded well to cases of higher fibrotic stages.

Shoji Hirano, Shusaku Tsumoto

Learning with Logics and Logics for Learning

Frontmatter
The Fifth Workshop on Learning with Logics and Logics for Learning (LLLL2007)

The workshop on Learning with Logics and Logics for Learning (LLLL) was started in January 2002 in Sapporo, Japan, in order to encourage the interchange between computational logic and machine learning. After being held twice as a domestic workshop, it was re-started in 2005 as an international workshop collocated with the Annual Conference of the Japanese Society for Artificial Intelligence (JSAI).

In the past four workshops, we accepted 55 papers in total. We can classify them into two types. The first type introduces computational logic into machine learning, with elements such as Boolean algebra, clausal theories and structured data such as first-order terms. The second type provides and analyzes the semantics of logic and mathematics with machine learning, for example, clarifying the relation between computational algebra and machine learning.

Akihiro Yamamoto, Kouichi Hirata
Mining Maximal Flexible Patterns in a Sequence

We consider the problem of enumerating all maximal flexible patterns in an input sequence database for the class of flexible patterns, where a maximal pattern (also called a closed pattern) is the most specific pattern among the equivalence class of patterns having the same list of occurrences in the input. Since our notion of maximal patterns is based on position occurrences, it is weaker than the traditional notion of maximal patterns based on document occurrences. Based on the framework of reverse search, we present an efficient depth-first search algorithm MaxFlex for enumerating all maximal flexible patterns in a given sequence database without duplicates in $O(||\mathcal{T}|| \times |\Sigma|)$ time per pattern and $O(||\mathcal{T}||)$ space, where $||\mathcal{T}||$ is the size of the input sequence database $\mathcal{T}$ and $|\Sigma|$ is the size of the alphabet on which the sequences are defined. This means that the enumeration problem for maximal flexible patterns is shown to be solvable with polynomial delay and in polynomial space.

Hiroki Arimura, Takeaki Uno
Computing Characteristic Sets of Bounded Unions of Polynomial Ideals

The surprising fact that Hilbert's basis theorem in algebra shows the identifiability in the limit from positive data of ideals of polynomials is derived from the correspondence between ideals and languages in the context of machine learning. This correspondence also reveals the differences between the two and raises new problems to be solved in both algebra and machine learning. In this article we solve the problem of providing a concrete form of the characteristic set of a union of two polynomial ideals. Our previous work showed that the finite basis of every polynomial ideal is its characteristic set, which ensures that the class of ideals of polynomials is identifiable from positive data. Union, or set-theoretic sum, is a basic set operation, and one might conjecture that there is some effective method which produces a characteristic set of a union of two polynomial ideals when the bases of both ideals are given. Unfortunately, we could not find previous work giving a general method for finding characteristic sets of unions of languages, even when the languages belong to a class identifiable from positive data. We give methods for computing a characteristic set of the union of two polynomial ideals.

Itsuo Takamatsu, Masanori Kobayashi, Hiroo Tokunaga, Akihiro Yamamoto
Towards a Logical Reconstruction of CF-Induction

CF-induction is a sound and complete hypothesis finding procedure for full clausal logic which uses the principle of inverse entailment to compute a hypothesis that logically explains a set of examples with respect to a prior background theory. Currently, CF-induction computes hypotheses by applying combinations of several complex generalisation operators to an intermediate theory called a bridge formula. In this paper we propose an alternative approach whereby hypotheses are derived from a bridge formula using a single deductive operator and a single inductive operator. We show that our simplified procedure preserves the soundness and completeness of CF-induction.

Yoshitaka Yamamoto, Oliver Ray, Katsumi Inoue

Juris-Informatics

Frontmatter
First International Workshop on Juris-Informatics

The First International Workshop on Juris-Informatics (the JURISIN workshop) was held on June 19, 2007 at the World Convention Center Summit in Miyazaki, Japan, as a part of the Twenty-First Annual Conference of the Japanese Society for Artificial Intelligence (JSAI-2007). This workshop was organized to study legal issues from the perspective of informatics. Law is one of the oldest practical applications of computer science. Though many legal reasoning systems have been developed thus far, they were not supported by lawyers, or they did not have a positive impact on jurisprudence. One of the reasons is that the legal reasoning mechanisms currently implemented are too simple from the lawyer's viewpoint. Another reason is that legal reasoning has been studied mainly from the viewpoint of its logical aspects, and not so much from the viewpoint of natural language processing. If we can bring together lawyers, informatics people and natural language processing people, we can expect great advances in both informatics and jurisprudence by implementing legal reasoning systems closer to what lawyers expect.

Katsumi Nitta, Ken Satoh, Satoshi Tojo
Towards Translation of Legal Sentences into Logical Forms

This paper proposes a framework for translating legal sentences into logical forms in which we can check for inconsistency, and describes the implementation and evaluation of a first experimental system. Our logical formalization conforms to Davidsonian style, which is suitable for languages allowing expressions with zero-pronouns, such as Japanese. We examine our system on actual legal documents. As a result, the system was 78% accurate in terms of deriving predicates with bound variables. We discuss our plan for further development of the system with respect to the following two aspects: (1) improvement of accuracy, and (2) formalization of the output necessary for logical processing.

Makoto Nakamura, Shunsuke Nobuoka, Akira Shimazu
Automatic Consolidation of Japanese Statutes Based on Formalization of Amendment Sentences

To realize computer-supported work in the areas of legislation and the practical use of laws, a statute database is indispensable, so that users can easily retrieve desired versions of desired statutes. However, almost all former versions of statutes need to be restored, since they cannot be easily obtained or digitized. This task can be achieved by repeatedly consolidating amendment statutes with each version of the statute, starting from the first one. In this paper, for Japanese statutes, we show that amendment clauses, which are parts of amendment sentences, can be formalized in terms of sixteen regular expressions. We also propose an automatic consolidation system for Japanese statutes based on this formalization and on experts' knowledge about consolidation. The system is evaluated through a consolidation experiment.

Yasuhiro Ogawa, Shintaro Inagaki, Katsuhiko Toyama
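
The idea of formalizing amendment clauses as regular expressions can be sketched with a single replace-type clause. This is a hypothetical English gloss: the actual system works on Japanese statutes with sixteen patterns, and the pattern and helper below are our invention for illustration.

```python
import re

# One toy replace-type amendment clause (hypothetical English gloss of
# the kind of pattern the paper formalizes; not the system's actual regex).
AMEND = re.compile(
    r'In Article (?P<article>\d+), '
    r'"(?P<old>[^"]+)" shall be replaced with "(?P<new>[^"]+)"\.'
)

def consolidate(statute, amendment):
    """Apply a replace-type amendment clause to a statute, where the
    statute is a dict mapping article numbers to article text."""
    m = AMEND.fullmatch(amendment)
    if not m:
        raise ValueError("unrecognized amendment clause")
    art = statute[m.group("article")]
    statute[m.group("article")] = art.replace(m.group("old"), m.group("new"))
    return statute
```

Restoring every historical version then amounts to replaying all amendment clauses, in order, against the first version of the statute.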
Characterized Argument Agent for Training Partner

For the resolution of disputes, Alternative Dispute Resolution (ADR) has become a popular replacement for trials. However, for mediation to work, the mediator must undergo extensive training. To help with mediator training, we have developed an online mediation support system. In this paper, we present an overview of the system and of an argument agent. The argument agent participates in moot mediation as a disputant for self-training purposes. We also explain how the agent's text responses can be generated by retrieving, from a case base, situations likely to be encountered in dispute resolution.

Takahiro Tanaka, Norio Maeda, Daisuke Katagami, Katsumi Nitta
Assumption-Based Argumentation for Closed and Consistent Defeasible Reasoning

Assumption-based argumentation is a concrete but general-purpose argumentation framework that has been shown, in particular, to generalise several existing mechanisms for non-monotonic reasoning, and is equipped with a computational counterpart and an implemented system. It can thus serve as a computational tool for argumentation-based reasoning, and for automating the process of finding solutions to problems that can be understood in assumption-based argumentation terms. In this paper we consider the problem of reasoning with defeasible and strict rules, for example as required in a legal setting. We provide a mapping of defeasible reasoning into assumption-based argumentation, and show that the framework obtained has the properties of closedness and consistency that have been advocated elsewhere as important for defeasible reasoning in the presence of strict rules. Whereas other argumentation approaches have been proven closed and consistent under some specific semantics, we prove that assumption-based argumentation is closed and consistent under all argumentation semantics.

Francesca Toni
Backmatter
Metadata
Title
New Frontiers in Artificial Intelligence
Editors
Ken Satoh
Akihiro Inokuchi
Katashi Nagao
Takahiro Kawamura
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-78197-4
Print ISBN
978-3-540-78196-7
DOI
https://doi.org/10.1007/978-3-540-78197-4
