Continuity and Change (Activity) Are Fundamentally Related in DEVS Simulation of Continuous Systems

The success of DEVS methods for simulating large continuous models calls for a more in-depth examination of the applicability of discrete events in modeling continuous phenomena. We present a concept of event set and an associated measure of activity that fundamentally characterize the discrete representation of continuous behavior. This metric captures the underlying intuition of continuity while providing a direct measure of the computational work needed to represent continuity on a digital computer. We discuss several application possibilities beyond high performance simulation, such as data compression, digital filtering, and soft computation. Perhaps most fundamentally, we suggest the possibility of dispensing with the mysteries of traditional calculus to revolutionize the prevailing educational paradigm.

Bernard P. Zeigler, Rajanikanth Jammalamadaka, Salil R. Akerkar

Systems Theory: Melding the AI and Simulation Perspectives

The discipline of modelling and simulation (MaS) preceded artificial intelligence (AI) chronologically. Moreover, the workers in one area are typically unfamiliar with, and sometimes unsympathetic to, those in the other. One reason for this is that in MaS the formal tools tend to center on analysis and probability theory with statistics, while in AI there is extensive use of discrete mathematics of one form or another, particularly logic. Over the years, however, MaS and AI have developed many frameworks and perspectives that are more similar than their respective practitioners may care to admit. We argue in this paper that these parallel developments have led to some myopia that should be overcome, because techniques and insights borrowed from the other discipline can be very beneficial.

Norman Foo, Pavlos Peppas

Modeling and Simulation Methodologies I

Unified Modeling for Singularly Perturbed Systems by Delta Operators: Pole Assignment Case

A unified modeling method using the δ-operators for singularly perturbed systems is introduced. The unified model unifies the continuous and the discrete models. Compared with the discrete model, the unified model has improved finite word-length characteristics, and its δ-operator is handled conveniently, like that of the continuous system. In addition, the singular perturbation method, a model approximation technique, is introduced. A pole placement example is used to show the advantages of the proposed methods. It is shown that the error of the reduced model in its eigenvalues is less than the order of ε (the singular perturbation parameter), and that the error in the unified model is less than that of the discrete model.

Kyungtae Lee, Kyu-Hong Shim, M. Edwin Sawan

A Disaster Relief Simulation Model of a Building Fire

We introduce a basic simulation model, built with SOARS, of a building fire, describing the agents' escape from the fire. In this simulation, a fire suddenly breaks out in a building, spreads rapidly across the floor, and even reaches the next floor. The people must escape from the building when they find or notice the fire. We show the algorithm and the results of the model, namely how the fire spreads across the floors and how the people escape, and discuss possibilities for future models.

Manabu Ichikawa, Hideki Tanuma, Yusuke Koyama, Hiroshi Deguchi

Evaluation of Transaction Risks of Mean Variance Model Under Identical Variance of the Rate of Return – Simulation in Artificial Market

The Mean Variance (MV) model has spread among institutional investors as one of the most typical diversified investment models. The MV model defines investment risk as the variance of the rate of return. Therefore, if the variances of two portfolios are equal, the MV model judges their investment risks to be identical. However, even when variances are equal, two different risk cases can occur. One depends only on market volume. The other depends entirely on speculators who raise stock prices while institutional investors are purchasing stocks; the latter makes institutional investors pay excessive transaction costs. The development of ABM (Agent Based Modeling) in recent years makes it possible to analyze this kind of problem by simulation. In this paper, we formulate a financial market model in which institutional investors and speculators trade twenty stocks simultaneously. Simulation results show that even if variances are equal, investment risks are not identical.

Ko Ishiyama, Shusuke Komuro, Hideki Tanuma, Yusuke Koyama, Hiroshi Deguchi

Intelligent Control

Association Rule Discovery in Data Mining by Implementing Principal Component Analysis

This paper presents Principal Component Analysis (PCA) integrated into the proposed architectural model, together with the apriori algorithm for association rule discovery. The scope of this study includes the devised data reduction technique and the deployment of an association rule algorithm in data mining to efficiently process and generate association patterns. The evaluation shows that interesting association rules were generated from the approximated data produced by dimensionality reduction, implying more rigorous and faster computation than the usual approach. This is attributed to the PCA method, which reduces the dimensionality of the original data prior to processing. Furthermore, the proposed model verified the premise that it can handle sparse information and is suitable for data of high dimensionality, compared to other techniques such as the wavelet transform.

Bobby D. Gerardo, Jaewan Lee, Inho Ra, Sangyong Byun
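The PCA-then-apriori pipeline summarized above can be illustrated with its frequent-itemset half. The sketch below (pure Python, with made-up toy transactions standing in for the PCA-reduced data) is a minimal apriori pass, not the authors' implementation:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets (frozensets) with support >= min_support."""
    n = len(transactions)
    items = {frozenset([i]) for t in transactions for i in t}
    # frequent 1-itemsets
    k_sets = {s for s in items
              if sum(s <= t for t in transactions) / n >= min_support}
    freq = {}
    while k_sets:
        for s in k_sets:
            freq[s] = sum(s <= t for t in transactions) / n
        # join step: combine k-itemsets into (k+1)-itemset candidates
        cands = {a | b for a in k_sets for b in k_sets if len(a | b) == len(a) + 1}
        # prune step: keep candidates whose k-subsets are all frequent, then count
        k_sets = {c for c in cands
                  if all(frozenset(sub) in freq for sub in combinations(c, len(c) - 1))
                  and sum(c <= t for t in transactions) / n >= min_support}
    return freq

# Toy transaction data (hypothetical), e.g. attributes kept after PCA reduction
T = [frozenset(t) for t in (("a", "b", "c"), ("a", "b"), ("a", "c"),
                            ("b", "c"), ("a", "b", "c"))]
frequent = apriori(T, min_support=0.6)
```

From `frequent`, association rules such as a → b would then be read off by comparing itemset supports.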

Reorder Decision System Based on the Concept of the Order Risk Using Neural Networks

Due to the development of modern information technology, many companies share real-time inventory information, so reorder decisions using the shared information have become a major issue in supply chain operation. However, traditional reorder decision policies do not utilize the shared information effectively, resulting in poor performance in distribution supply chains. Moreover, the typical assumption in traditional reorder decision systems that the demand pattern follows a specific probability distribution limits practical application to real situations, where such a distribution is not easily defined. Thus, we develop a reorder decision system based on the concept of order risk using neural networks. We train the neural networks to learn the optimal reorder pattern, which can be found by analyzing historical data based on the concept of order risk. Simulation results show that the proposed system outperforms traditional reorder policies. Additionally, a managerial implication is provided regarding the environmental characteristics under which the performance of the proposed system is maximized.

Sungwon Jung, Yongwon Seo, Chankwon Park, Jinwoo Park

Simulation Modeling with Hierarchical Planning: Application to a Metal Manufacturing System

There has been an increasing volume of research in the last decade that combines artificial intelligence (AI) and simulation to solve problems of various kinds, some of which are related to manufacturing systems. In modern manufacturing industries, automatic computer-based systems are common, and they continue to enhance the efficiency of the manufacturing process by analyzing the overall production process, from design to manufacturing. This paper deals with the problem of how to improve the productivity of a metal grating manufacturing system. To solve this problem, we proposed and applied the Hierarchical RG-DEVS formalism, a modeling methodology that incorporates the hierarchical regression planning of AI into simulation, to construct a sound modeling environment. This research presents not only an improvement of the metal production design environment that can predict efficiency in the manufacturing process, but also a cooperation technique for AI planning and simulation.

Mi Ra Yi, Tae Ho Cho

Computer and Network Security I

Vulnerability Modeling and Simulation for DNS Intrusion Tolerance System Construction

To construct an ITS (Intrusion Tolerance System), we must consider not only the FTS (Fault Tolerant System) requirements but also intrusion and vulnerability factors. In an ITS, however, we cannot take intrusions and vulnerabilities into account as they are, because their characteristics and patterns are unknown. We therefore suggest a vulnerability analysis method that enables an ITS to know the pattern of vulnerability exploitation more specifically. We make use of the atomic vulnerability concept to analyze vulnerability in the DNS system, and show how to use the analysis result as monitoring factors in our DNS ITS. The analysis result is also used in modeling and simulation to observe the dynamics of a computer network under vulnerability exploitation and external malicious attack. This paper shows simulation execution examples that make use of the vulnerability analysis result.

Hyung-Jong Kim

NS-2 Based IP Traceback Simulation Against Reflector Based DDoS Attack

Reflector attacks are among the most serious types of Distributed Denial-of-Service (DDoS) attacks, and they can hardly be traced by traceback techniques, since the marked information written by any routers between the attacker and the reflectors is lost in the replied packets from the reflectors. In response to such attacks, advanced IP traceback technology is needed. This study proposes an NS-2 based traceback system for simulating a technique that identifies DDoS traffic with a multi-hop iTrace mechanism based on TTL information at the reflector, for tracing malicious reflector sources. According to the simulation results, the proposed technique reduced network load and improved filter/traceback performance against distributed reflector attacks.

Hyung-Woo Lee, Taekyoung Kwon, Hyung-Jong Kim

Recognition of Human Action for Game System

Using human action, playing a computer game can be more intuitive and interesting. In this paper, we present a game system that can be operated by human actions. To recognize the human actions, the proposed system uses a Hidden Markov Model (HMM). To assess the validity of the proposed system, we applied it to a real game, Quake II. The experimental results verify the feasibility and validity of this game system. The system is currently capable of recognizing 13 gestures, corresponding to 20 keyboard and mouse commands for the Quake II game.

Hye Sun Park, Eun Yi Kim, Sang Su Jang, Hang Joon Kim
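A minimal sketch of HMM-based gesture classification of the kind described above: each gesture gets its own discrete HMM, and an observation sequence is assigned to the model with the highest forward likelihood. All parameters and the two gesture names below are illustrative placeholders, not the paper's 13-gesture models:

```python
import math

def forward_loglike(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM(pi, A, B)).
    pi: initial state probs, A: transition matrix, B: emission matrix."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    ll = math.log(c)
    alpha = [a / c for a in alpha]          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        c = sum(alpha)
        ll += math.log(c)
        alpha = [a / c for a in alpha]
    return ll

def classify(obs, models):
    """Pick the gesture whose HMM gives the highest likelihood."""
    return max(models, key=lambda name: forward_loglike(obs, *models[name]))

# Two toy 2-state gesture models over observation symbols 0/1 (placeholders):
# each model is (pi, A, B).
models = {
    "wave": ([0.9, 0.1], [[0.8, 0.2], [0.2, 0.8]], [[0.9, 0.1], [0.1, 0.9]]),
    "push": ([0.1, 0.9], [[0.8, 0.2], [0.2, 0.8]], [[0.9, 0.1], [0.1, 0.9]]),
}
```

In a real pipeline the observation symbols would come from quantized image features rather than hand-written sequences.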

The Implementation of IPsec-Based Internet Security System in IPv4/IPv6 Network

IPsec has now become a standard information security technology throughout the Internet society. It provides a well-defined architecture that takes into account confidentiality, authentication, integrity, and secure key exchange, as well as a protection mechanism against replay attacks. For connectionless security services on a per-packet basis, the IETF IPsec Working Group has standardized two extension headers (AH & ESP) and key exchange and authentication protocols. It is also working on a lightweight key exchange protocol and MIBs for security management. IPsec technology has been implemented on various platforms in IPv4 and IPv6, gradually replacing old application-specific security mechanisms. In this paper, we present the design and implementation of a controlled Internet security system, an IPsec-based Internet information security system for IPv4/IPv6 networks, along with performance measurement data. The controlled Internet security system provides a consistent security policy and integrated security management for IPsec-based Internet security systems.

So-Hee Park, Jae-Hoon Nah, Kyo-Il Chung

HLA and Simulator Interoperation

Describing the HLA Using the DFSS Formalism

The High Level Architecture (HLA) is a standard for enabling the interoperability of simulation systems in a distributed computer environment. The HLA was established by the US/DMSO and aims to promote communication among complex simulators. In this paper, we present a formal representation of the modeling aspects of the HLA using the Discrete Flow System Specification (DFSS), a modeling formalism aimed at representing dynamic structure systems. The HLA has introduced concepts not typically used in the modeling and simulation field. A formal representation of the HLA permits its description using standard constructs, making the HLA more easily intelligible to a larger community of users. A formal description also permits highlighting the current limitations of the HLA when compared with a more complete formal framework like the DFSS formalism, which supports a wider range of dynamic structure systems.

Fernando Barros

Proposal of High Level Architecture Extension

The paper proposes a three-dimensional extension to the High Level Architecture (HLA) and Runtime Infrastructure (RTI) to solve several issues, such as security, the information hiding problem, and the interoperability and performance of RTI software. The hierarchical and modular design of RTI software provides a natural way to form complex distributed simulation systems, and methods to tune the performance of federates with selective and replaceable modules. Extending the specification level from the application programming interface (API) to message-based protocols lets RTI software communicate with other RTI software and even with protocol-talking hardware. The extension also adds new APIs to the Federate Interface Specification, improving the reusability of federates.

Jae-Hyun Kim, Tag Gon Kim

High Performance Modeling for Distributed Simulation

This paper presents the modeling of a distributed simulation system with a data management scheme. The scheme focuses on a core distributed simulation concept, load balancing: it distributes different functionality to each distributed component and assigns various degrees of communication and computation load to each component. In addition, this paper introduces a design for inter-federation communication in HLA-compliant distributed simulation. The design focuses on integration among multiple federations and constructs a larger distributed simulation system through an HLA bridge connection; the integration supports simulation flexibility and scalability. This paper discusses design issues of a practical system with an HLA bridge for inter-federation communication, and analyzes and evaluates the performance and scalability of the data management scheme with load balancing, in both inter-federation and intra-federation communication configurations. The analytical and empirical results on a heterogeneous-OS distributed simulation show improved system performance and scalability from data management and inter-federation communication.

Jong Sik Lee

The Hierarchical Federation Architecture for the Interoperability of ROK and US Simulations

This paper presents the hierarchical federation architecture as it applies to ROK-US combined exercises, such as Ulchi Focus Lens (UFL). We have analyzed and extracted the necessary improvements through a review of the simulation architecture currently used for ROK-US combined exercises and of current ROK Armed Forces modeling and simulation (M&S) utilization. We have designed an advanced federation architecture based on a multi-federation architecture. Moreover, we have validated the usability and technical risks of our proposed architecture through the development and testing of a pilot system. Finally, we expect this architecture to enhance the ROK-US combined exercises while reducing costs. Furthermore, we believe this architecture serves as an example for interoperating simulations with other allies.

Seung-Lyeol Cha, Thomas W. Green, Chong-Ho Lee, Cheong Youn


PPSS: CBR System for ERP Project Pre-planning

At the initiation stage of a project, pre-planning is essential for project success, especially for large-scale information system projects like ERP (Enterprise Resource Planning). Systematic estimation of resources such as time, cost, and manpower is very important but difficult, for the following reasons: 1) it involves many factors and relationships among them, 2) it is not easy to apply a mathematical model to the estimation, and 3) every ERP project differs from every other. In this article, we propose a system named PPSS (Project Pre-planning Support System) that helps the project manager make a pre-plan of an ERP project with case-based reasoning (CBR): the manager can make a project pre-plan by adjusting the most similar case retrieved from the case base. We adopt rule-based reasoning for case adjustment, one of the most difficult jobs in CBR. We have collected ERP implementation cases by interviewing project managers, and organized them with XML (Extensible Markup Language).

Suhn Beom Kwon, Kyung-shik Shin

A Scheduling Analysis in FMS Using the Transitive Matrix

We study the scheduling problem in FMS using the transitive matrix. Since control flows in Petri nets are based on token flows, the basic unit of concurrency (BUC) can be defined as a set of executed control flows in the net. The original system can then be divided into subnets, such as the BUCs of the machines' operations, and the feasibility time analyzed for each schedule. We discuss the usefulness of the transitive matrix for slicing subnets off the original net, and explain the approach with an example.

Jong-Kun Lee

Simulation of Artificial Life Model in Game Space

Game designers generally fix the distribution and characteristics of game characters, which cannot be changed while the game is being played. From the online game player's point of view, playing time keeps growing, so the game becomes boring because of the fixed distribution and characteristics of its characters. In this study, we propose and simulate a system governing the distribution and characteristics of NPCs. The NPCs' traits can evolve according to their environments through a genetic algorithm, which also produces NPCs with various traits from a few kinds of NPCs through evolution. The movement of game character groups can be expressed more realistically by applying a flocking algorithm.

Jai Hyun Seu, Byung-Keun Song, Heung Shik Kim
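The group movement mentioned above can be sketched with a minimal boids-style flocking update (alignment, cohesion, separation). The weights, radii, and flock size below are arbitrary illustrative choices, not the authors' parameters:

```python
import math
import random

def step(boids, r=10.0, w_align=0.05, w_cohere=0.005, w_sep=0.05):
    """One flocking update: each boid steers toward its neighbours' average
    heading (alignment) and centre (cohesion), and away from crowding
    (separation). Boids are (x, y, vx, vy) tuples with unit speed."""
    new = []
    for x, y, vx, vy in boids:
        nbrs = [b for b in boids if 0 < (b[0] - x) ** 2 + (b[1] - y) ** 2 <= r * r]
        if nbrs:
            mx = sum(b[0] for b in nbrs) / len(nbrs) - x          # toward centre
            my = sum(b[1] for b in nbrs) / len(nbrs) - y
            mvx = sum(b[2] for b in nbrs) / len(nbrs) - vx        # match heading
            mvy = sum(b[3] for b in nbrs) / len(nbrs) - vy
            sx = sum(x - b[0] for b in nbrs                       # push apart
                     if (b[0] - x) ** 2 + (b[1] - y) ** 2 < (r / 3) ** 2)
            sy = sum(y - b[1] for b in nbrs
                     if (b[0] - x) ** 2 + (b[1] - y) ** 2 < (r / 3) ** 2)
            vx += w_align * mvx + w_cohere * mx + w_sep * sx
            vy += w_align * mvy + w_cohere * my + w_sep * sy
        s = math.hypot(vx, vy) or 1.0                             # keep unit speed
        vx, vy = vx / s, vy / s
        new.append((x + vx, y + vy, vx, vy))
    return new

random.seed(1)
flock = []
for _ in range(30):
    a = random.uniform(0, 2 * math.pi)
    flock.append((random.uniform(0, 20), random.uniform(0, 20),
                  math.cos(a), math.sin(a)))
for _ in range(200):
    flock = step(flock)
```

In a game, each NPC group member would run this update per frame, with the genetic algorithm evolving the per-NPC weights.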

An Extensible Framework for Advanced Distributed Virtual Environment on Grid

This paper describes a new framework for a Grid-enabled advanced DVE (Distributed Virtual Environment) that provides a dynamic execution environment by supporting discovery and configuration of resources, security mechanisms, and efficient data management and distribution. While previous DVEs have provided a static execution environment considering only communication functions and application performance, the proposed framework adds resource management, security management, and extended data management to the static execution environment using Grid services, yielding a dynamic execution environment with QoS (Quality of Service) enhancement and better performance. The framework consists of two components: a Grid-dependent component and a communication-dependent component. The Grid-dependent component includes the RM (Resource Manager), SDM (Static Data Manager), DDM (Dynamic Data Manager), and SYM (Security Manager). The communication-dependent component is composed of the SNM (Session Manager) and OM (Object Manager). The components enhance performance and scalability through DVE reconfiguration that considers resources, and provide a mutual authentication mechanism for both servers and clients to protect resources, applications, and user data. Moreover, effective data management reduces the overhead and network latency caused by data transmission and replication.

Seung-Hun Yoo, Tae-Dong Lee, Chang-Sung Jeong

Agent-Based Modeling

Diffusion of Word-of-Mouth in Segmented Society: Agent-Based Simulation Approach

The present research examines, using agent-based simulation, how word-of-mouth about a new product spreads over an informal network among consumers. In particular, we focus on clarifying the relationship between the diffusion of word-of-mouth and the network structure of the society. Whether there is any essential difference in the diffusion process between a mosaic and an oligopolistic society is one of our main questions. The findings are not only insightful and interesting in an academic sense, but also provide useful suggestions for marketing practice.

Kyoichi Kijima, Hisao Hirata
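A minimal sketch of the kind of experiment described above: a susceptible-informed word-of-mouth process on a hypothetical "segmented" society of dense blocks joined by a few bridge ties. The graph builder and parameters are illustrative, not the authors' model:

```python
import random

def si_diffusion(adj, seed, p=1.0, rng=None):
    """Rounds until word-of-mouth reaches every consumer.
    SI process: each round, every informed agent tells each neighbour w.p. p."""
    rng = rng or random.Random(0)
    informed, rounds = {seed}, 0
    while len(informed) < len(adj):
        rounds += 1
        informed |= {v for u in informed for v in adj[u] if rng.random() < p}
        if rounds > 10_000:          # guard against disconnected graphs
            break
    return rounds

def segmented_graph(n_per_block=20, blocks=2, bridges=1):
    """Hypothetical 'mosaic' society: complete blocks plus a few bridge ties."""
    adj = {}
    for b in range(blocks):
        nodes = range(b * n_per_block, (b + 1) * n_per_block)
        for u in nodes:
            adj[u] = {v for v in nodes if v != u}     # dense within each block
    for k in range(bridges):                          # sparse ties across blocks
        adj[k].add(n_per_block + k)
        adj[n_per_block + k].add(k)
    return adj

g = segmented_graph()
rounds = si_diffusion(g, seed=0, p=1.0)
```

With p = 1 and one bridge, the word crosses the whole first block in one round and needs a second round to cross the bridge; lowering p or removing bridges shows how segmentation slows diffusion.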

E-mail Classification Agent Using Category Generation and Dynamic Category Hierarchy

With e-mail use continuing to explode, users are demanding methods that classify e-mail ever more efficiently. Previous work on the e-mail classification problem has focused mainly on binary classification that filters out spam. Other approaches have used clustering techniques to solve the multi-category classification problem, but these merely group e-mail messages by similarity using a distance measure. In this paper, we propose an e-mail classification agent combining a category generation method based on the vector model with a dynamic category hierarchy reconstruction method. The proposed agent classifies e-mail automatically whenever needed, so that a large volume of e-mail can be managed efficiently.

Sun Park, Sang-Ho Park, Ju-Hong Lee, Jung-Sik Lee
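The vector-model category assignment mentioned above can be sketched with term-frequency vectors and cosine similarity to category centroids. The categories and training mails below are made-up placeholders, and the sketch omits the paper's dynamic hierarchy reconstruction step:

```python
import math
from collections import Counter

def tf_vector(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroid(docs):
    """Summed term vector representing one category."""
    c = Counter()
    for d in docs:
        c.update(tf_vector(d))
    return c

def classify_mail(mail, categories):
    """Assign a mail to the category with the most similar term centroid."""
    return max(categories, key=lambda name: cosine(tf_vector(mail), categories[name]))

# Hypothetical training mails per category (placeholders, not the paper's data)
cats = {
    "work":    centroid(["meeting agenda report", "project report deadline"]),
    "finance": centroid(["invoice payment due", "bank payment statement"]),
}
```

A dynamic hierarchy would then be rebuilt over these centroids as categories grow or shrink.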

The Investigation of the Agent in the Artificial Market

In this paper, we investigate an investment strategy in the artificial market U-Mart, which is designed to provide a common test bed for researchers in the fields of economics and information sciences. UMIE is the international experiment of U-Mart, run as a contest of trading agents. We entered UMIE 2003 and 2004, and our agent won the championship in both experiments. We examine why this agent is strong in the UMIE environment. Its strategy is called "on-line learning" or "real-time learning": concretely, the agent exploits and forecasts futures price fluctuations by identifying the environment, as in reinforcement learning.

We examined the efficiency of price forecasting in the classified environment. To examine its efficacy, we ran experiments 1000 times with the UMIE open-type simulation standard toolkits, and verified that forecasting futures price fluctuations with our strategy is useful for better trading.

Takahiro Kitakubo, Yuhsuke Koyama, Hiroshi Deguchi

Plan-Based Coordination of a Multi-agent System for Protein Structure Prediction

In this paper, we describe the design and implementation of a multi-agent system that supports prediction of the three-dimensional structure of an unknown protein from its sequence of amino acids. The problems involved include how many agents should be coordinated to support prediction of protein structures, and how to devise agents from multiple resources. To address these problems, we propose a plan-based coordination mechanism for our multi-agent system, MAPS, in which the control agent coordinates the other agents based on a specific multi-agent plan that specifies possible sequences of interactions among agents. This plan-based coordination mechanism has greatly increased both the coherence and the flexibility of our system.

Hoon Jin, In-Cheol Kim

DEVS Modeling and Simulation

Using Cell-DEVS for Modeling Complex Cell Spaces

Cell-DEVS is an extension to the DEVS formalism that allows the definition of cellular models. CD++ is a modeling and simulation tool that implements DEVS and Cell-DEVS formalisms. Here, we show the use of these techniques through different application examples. Complex applications can be implemented in a simple fashion, and they can be executed effectively. We present example models of wave propagation, a predator following prey while avoiding natural obstacles, an evacuation process, and a flock of birds.

Javier Ameghino, Gabriel Wainer

State Minimization of SP-DEVS

A minimization method for DEVS in terms of behavioral equivalence would be very useful for the analysis of huge and complex DEVS models. This paper shows a polynomial-time state minimization method for a class of DEVS with finite states, called schedule-preserving DEVS (SP-DEVS). We define the behavioral equivalence of SP-DEVS and propose two algorithms, a compression operation and a clustering operation, which are used in the minimization method.

Moon Ho Hwang, Feng Lin

DEVS Formalism: A Hierarchical Generation Scheme

A system-reproduction model for growing system structures can be used to design modeling formalisms for variable system architectures with historical characteristics. We introduce a DEVS (Discrete Event System Specification)-based extended formalism in which the system structure gradually grows through self-reproduction of system components. As the extended atomic model of a system component creates virtual-child atomic DEVS models, a coupled model can be derived by coupling the parent atomic model and its virtual-child atomic models. When a system component model reproduces a component, the child component model can inherit its parent's characteristics, including its determined role or behavior, and can also include different structural characteristics. A virtual-child model that has its parent's characteristics can in turn reproduce its own virtual-child model, which may show attributes similar to those of the grandparent model. With self-reproducible DEVS (SR-DEVS) modeling, we provide modeling specifications for variable network architecture systems.

Sangjoon Park, Kwanjoong Kim

Modeling and Simulation Methodologies II

Does Rational Decision Making Always Lead to High Social Welfare? Dynamic Modeling of Rough Reasoning

The purpose of this paper is two-fold. The first is to propose a dynamic model describing decision making under rough reasoning. The second is to show that the involvement of some irrational decision makers in society may lead to higher social welfare, by analyzing the centipede game in the framework of the model. In perfect information games, though reasonable equilibria can in theory be calculated precisely by backward induction, they are difficult to realize in practice. To capture such features, we first develop a dynamic model that explicitly assumes the players may make mistakes due to rough reasoning, and then apply it to the centipede game. Our findings include a case in which a moderately rational society, neither random nor completely rational, maximizes the frequency of cooperative behavior. This result suggests that a society involving some rough-reasoning decision makers may reach socially more desirable welfare than a completely rational society.

Naoki Konno, Kyoichi Kijima
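The centipede game analyzed above illustrates the gap between backward induction and cooperation. Below is a small sketch of the backward-induction calculation with an illustrative growing-pot payoff table; it shows the benchmark the paper's rough-reasoning model departs from, not the paper's dynamic model itself:

```python
def centipede_backward_induction(payoffs):
    """payoffs[k] = (p1, p2) if the mover stops at node k (players alternate,
    player 0 moves at even nodes); payoffs[-1] applies if nobody ever stops.
    Returns (first node where a rational mover stops, resulting payoffs)."""
    n = len(payoffs) - 1            # nodes 0..n-1; payoffs[n] is the pass-through end
    value = payoffs[n]              # continuation value past the last node
    stop_at = None
    for k in range(n - 1, -1, -1):  # fold the game tree from the back
        mover = k % 2
        if payoffs[k][mover] >= value[mover]:
            value, stop_at = payoffs[k], k   # stopping now is at least as good
    return stop_at, value

# Illustrative growing-pot payoffs: passing grows the total, but each mover
# gains by stopping one node before the opponent would.
pay = [(1, 0), (0, 2), (3, 1), (2, 4), (6, 3), (5, 8), (8, 8)]
node, outcome = centipede_backward_induction(pay)
```

Backward induction unravels to stopping at the very first node, even though mutual cooperation to the end would pay both players far more; that is the welfare gap the paper's rough-reasoning agents can close.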

Large-Scale Systems Design: A Revolutionary New Approach in Software Hardware Co-design

The need for a revolutionary new approach to software-hardware co-design stems from the unique demands that will be imposed by complex systems in the coming age of networked computational systems (NCS). In a radical departure from tradition, tomorrow's systems will include analog hardware, synchronous and asynchronous discrete hardware, software, and inherently asynchronous networks, all governed by asynchronous control and coordination algorithms. Three key issues will guide the development of this approach. First, conceptually, it is difficult to distinguish hardware from software. Although intuitively semiconductor ICs refer to hardware while software is synonymous with programs, clearly any piece of hardware may be replaced by a program while any software code may be realized in hardware. The truth is that hardware and software are symbiotic, i.e., one without the other is useless, and the difference between them is that hardware is fast but inflexible while software is flexible but slow. Second, a primary cause of system unreliability lies at the boundary of hardware and software. Traditionally, software engineers focus on programming while hardware engineers design and develop the hardware. Both types of engineers work off a set of assumptions that presumably define the interface between hardware and software. In reality, these assumptions are generally ad hoc and rarely understood in depth by either type of engineer. As a result, during the life of a system, when the original hardware units are upgraded or replaced for any reason, or additional software is incorporated to provide new functions, systems often exhibit serious behavior problems that are difficult to understand and repair.
For example, in the telecommunications community there is serious concern over the occurrence of inconsistencies and failures in the context of "feature interactions" and the current inability to understand and reason about these events. While private telephone numbers are successfully blocked from appearing on destination caller ID screens under normal operation, as they should be, these private numbers are often unwittingly revealed during toll-free calls. It is hypothesized that many of these problems stem from the continuing use of legacy code from previous decades, where timer values were determined for older technologies and have never been updated for today's much faster electronics. In TCP/IP networking technology, the values of many of the timer settings and buffer sizes are handed down from the past, and the lack of a scientific methodology makes it difficult to determine their precise values for current technology. The mismatches at the hardware-software interface represent vulnerabilities that tempt perpetrators to launch system attacks. Third, while most traditional systems employ synchronous hardware and centralized software, complex systems in the NCS age must exploit asynchronous hardware and distributed software executing asynchronously on geographically dispersed hardware to meet performance, security, safety, reliability, and other requirements. In addition, while many complex systems of the future, including those in automobiles and space satellites, will incorporate both analog and discrete hardware subsystems, others will deploy networks in which interconnections may be dynamic and a select set of entities mobile.

Sumit Ghosh

Timed I/O Test Sequences for Discrete Event Model Verification

Model verification examines the correctness of a model implementation with respect to a model specification. The implementation, derived from the model specification, prepares a simulation model for execution or evaluation by a computer program. Viewing model verification as a program test, this paper proposes a method for generating test sequences that completely cover all possible behavior of the specification at the I/O level. The Timed State Reachability Graph (TSRG) is proposed as a means of model specification. Graph-theoretical analysis of the TSRG generates a test set of timed I/O event sequences which guarantees 100% test coverage of an implementation under test.

Ki Jung Hong, Tag Gon Kim
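The idea of deriving test sequences that cover a specification's behavior can be sketched as transition (edge) coverage of a state graph: drive the model to each transition's source state, then fire that transition. The toy graph and event labels below are placeholders, and this simplified sketch ignores TSRG's timing annotations:

```python
from collections import deque

def path_to_edge(graph, start, edge):
    """Shortest event path from `start` that ends by taking `edge` = (u, ev, v)."""
    u, ev, _ = edge
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, path = queue.popleft()
        if state == u:
            return path + [ev]               # reached the source: fire the edge
        for e, t in graph.get(state, []):
            if t not in seen:
                seen.add(t)
                queue.append((t, path + [e]))
    return None                              # edge unreachable from start

def cover_all_transitions(graph, start):
    """One test sequence per transition; together they exercise every
    transition of the graph at least once."""
    edges = [(s, ev, t) for s, outs in graph.items() for ev, t in outs]
    return [path_to_edge(graph, start, e) for e in edges]

# Toy model: states A, B, C with placeholder input/output event labels
g = {"A": [("?start/!ack", "B")],
     "B": [("?data/!ok", "B"), ("?stop/!done", "C")],
     "C": []}
tests = cover_all_transitions(g, "A")
```

A TSRG-based generator would additionally attach admissible time intervals to each event so that the sequences test timing behavior, not just I/O order.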

Parallel and Distributed Modeling and Simulation I

A Formal Description Specification for Multi-resolution Modeling (MRM) Based on DEVS Formalism

Multi-Resolution Modeling (MRM) is a relatively new research area. With the development of distributed interactive simulation, and especially with the emergence of the HLA, multi-resolution modeling has become one of the key technologies for advanced modeling and simulation. There is little research on the theory of multi-resolution modeling, especially on the formal description of MRM. In this paper, we present a new concept for the description of multi-resolution modeling, the multi-resolution model family (MF), defined as the set of different-resolution models of the same entity. The description of an MF includes two parts: the models at different resolutions and their relations. Based on this concept and the DEVS formalism, we present a new multi-resolution model system specification, named MRMS (Multi-Resolution Model system Specification). We also present and prove some important properties of MRMS, especially the closure of MRMS under the coupling operation. MRMS provides a foundation and a powerful description tool for MRM research; using this description, we can further study the theory and implementation of MRM.

Liu Baohong, Huang Kedi

Research and Implementation of the Context-Aware Middleware Based on Neural Network

Smart homes integrated with sensors, actuators, wireless networks and context-aware middleware will soon become part of our daily life. This paper describes a context-aware middleware providing automatic home services based on a user’s preferences inside a smart home. The middleware utilizes six basic data items for learning and predicting the user’s preferences for home appliances: pulse, body temperature, facial expression, room temperature, time, and location. These six data items constitute the context model and are used by the context manager module. The user profile manager maintains history information for the home appliances chosen by the user. A user-pattern learning and prediction module based on a neural network predicts the proper home service for the user. The test results show that the pattern of an individual’s preferences can be effectively evaluated and predicted by adopting the proposed context model.

Jong-Hwa Choi, Soon-yong Choi, Dongkyoo Shin, Dongil Shin

An Efficient Real-Time Middleware Scheduling Algorithm for Periodic Real-Time Tasks

For real-time applications, the underlying operating system (OS) should support timely management of real-time tasks. However, most current operating systems do not provide timely management facilities in an efficient way. There are two approaches to supporting timely management facilities for real-time applications: (1) modifying the OS kernel, or (2) providing a middleware layer without modifying the OS. We adopted the middleware approach, based on the TMO (Time-triggered Message-triggered Object) model, a well-known real-time object model. The middleware, named TMOSM (TMO Support Middleware), has been implemented on various OSes such as Linux and Windows XP/NT/98. In this paper, we mainly consider TMOSM implemented on Linux (TMOSM/Linux). Although the real-time scheduling algorithm used in the current TMOSM/Linux produces an efficient real-time schedule, it can be improved for periodic real-time tasks by considering several factors. In this paper, we discuss those factors and propose an improved real-time scheduling algorithm for periodic real-time tasks. The proposed algorithm can improve system performance by making the structure of the real-time middleware simpler.

Ho-Joon Park, Chang-Hoon Lee

Mapping Cooperating GRID Applications by Affinity for Resource Characteristics

The Computational Grid, a distributed and heterogeneous collection of computers on the Internet, has been considered a promising platform for the deployment of various high-performance computing applications. One of the crucial issues in the Grid is how to discover, select, and map available Grid resources to meet the needs of a given application. The general problem of statically mapping tasks to nodes has been shown to be NP-complete. In this paper, we propose a mapping algorithm for cooperating Grid applications based on their affinity for resource characteristics. The proposed algorithm utilizes the general affinity of Grid applications for certain resource characteristics such as CPU speed, network bandwidth, and input/output handling capability. To show the effectiveness of the proposed mapping algorithm, we compare its performance with some previous mapping algorithms by simulation. The simulation results show that the algorithm effectively utilizes the affinity of Grid applications and achieves good performance.

Ki-Hyung Kim, Sang-Ryoul Han

Mobile Computer Network

Modeling of Policy-Based Network with SVDB

There are many security vulnerabilities in computer systems. Systems can easily be attacked by outsiders, or abused by insiders who misuse their rights, attack security mechanisms to disguise themselves as other users, or circumvent security controls. Today’s networks consist of a large number of routers and servers running a variety of applications. A policy-based network provides a means by which the management process can be simplified and largely automated. This article describes the modeling and simulation of a security system based on a policy-based network. We present how policy rules are derived from the vulnerabilities stored in the SVDB (Simulation based Vulnerability Data Base), and how the policy rules are transformed into the PCIM (Policy Core Information Model). Each simulation model of the network security environment is hierarchically designed using the DEVS (Discrete EVent system Specification) formalism.

Won Young Lee, Hee Suk Seo, Tae Ho Cho

Timestamp Based Concurrency Control in Broadcast Disks Environment

Broadcast disks are suited to disseminating information to a large number of clients in mobile computing environments. In this paper, we propose a timestamp-based concurrency control (TCC) algorithm to preserve the consistency of read-only client transactions when the values of broadcast data items are updated at the server. The TCC algorithm is novel in the sense that it reduces the abort ratio of client transactions with minimal control information broadcast from the server. This is achieved by allocating the timestamp of a broadcast data item adaptively, so that the client can allow more serializable executions. Using a simulation model of a mobile computing environment, we show that the TCC algorithm exhibits substantial performance improvement over previous algorithms.

Sungjun Lim, Haengrae Cho

Active Information Based RRK Routing for Mobile Ad Hoc Network

In a mobile ad hoc network, unlike in wired networks, a path must be configured in advance of data transmission along a routing path. Frequent movement of mobile nodes, however, makes it difficult to maintain the configured path and requires re-configuration of the path very often. This may also lead to serious problems such as deterioration of QoS in mobile ad-hoc networks. In this paper, we propose a Reactive Routing Keyword (RRK) routing procedure to solve these problems. First, we note that RRK routing makes it possible to assign multiple routing paths to the destination node. We apply this feature to active networks and SNMP-information-based routing by storing unique keywords in the caches of the mobile nodes corresponding to the present and candidate routes during path configuration. It is shown that the deterioration of QoS observed in the DSR protocol is greatly mitigated by the proposed routing technique.

Soo-Hyun Park, Soo-Young Shin, Gyoo Gun Lim

Web-Based Simulation, Natural System

Applying Web Services and Design Patterns to Modeling and Simulating Real-World Systems

Simulation models can be created completely or partly from web services, 1) to reduce development cost, and 2) to allow heterogeneous applications to be integrated more rapidly and easily. In this paper, we present software design patterns useful for modeling and simulation. We illustrate how model federates can be built from web services and design patterns. We also show how the use of software design patterns can greatly help users build scalable, reliable, and robust simulation models with detailed performance analysis.

Heejung Chang, Kangsun Lee

Ontology Based Integration of Web Databases by Utilizing Web Interfaces

The amount of information available in web environments, especially in web databases, has grown rapidly, and many questions can be answered better using integrated information than using a single web database. The need to integrate web databases has therefore grown steadily. For multiple web databases to interoperate effectively, an integrated system must know where to find the information relevant to a user’s query and which entities in the searched web databases match the semantics of the query. To solve these problems, this paper presents an ontology-based integration method for multiple web databases. The proposed approach uses the web interfaces through which a user generally acquires the desired information from the web databases.

Jeong-Oog Lee, Myeong-Cheol Ko, Hyun-Kyu Kang

A Web Services-Based Distributed Simulation Architecture for Hierarchical DEVS Models

The Discrete Event Systems Specification (DEVS) formalism specifies a discrete event system in a hierarchical, modular form. This paper presents a web-services-based distributed simulation architecture for DEVS models, named DEVSCluster-WS. DEVSCluster-WS is an enhanced version of DEVSCluster that employs web services technology, thereby retaining the advantages of non-hierarchical distributed simulation over previous hierarchical distributed simulations. It describes models in WSDL and utilizes SOAP and XML for inter-node communication. Due to the standardized nature of web services technology, DEVSCluster-WS can be embedded in the Internet without adhering to specific vendors or languages. To show the effectiveness of DEVSCluster-WS, we realize it in Visual C++ with the SOAPToolkit and conduct a benchmark simulation of a large-scale logistics system, comparing its performance with DEVSCluster-MPI, the MPI-based implementation of DEVSCluster. The performance results show that the proposed architecture works correctly and achieves tolerable performance.

Ki-Hyung Kim, Won-Seok Kang

Modeling and Simulation Environments

Automated Cyber-attack Scenario Generation Using the Symbolic Simulation

The major objective of this paper is to propose an automated cyber-attack scenario generation methodology based on a symbolic simulation environment. Information assurance aims to ensure the reliability and availability of information by preventing attacks. Cyber-attack simulation is one of the notable methods for analyzing vulnerabilities in the information assurance field, and it requires a variety of attack scenarios. To this end, we have adopted symbolic simulation, which extends conventional numeric simulation. This study can 1) generate conventional cyber-attack scenarios, 2) generate cyber-attack scenarios that are still unknown, and 3) be applied to establish appropriate defense strategies by analyzing the generated cyber-attacks. Simulation tests performed on a sample network system illustrate our techniques.

Jong-Keun Lee, Min-Woo Lee, Jang-Se Lee, Sung-Do Chi, Syng-Yup Ohn

A Discrete Event Simulation Study for Incoming Call Centers of a Telecommunication Service Company

The call center has become an important contact point and an integral part of the majority of corporations. Managing a call center is a diverse challenge due to many complex factors. Improving the performance of call centers is critical and valuable for providing better service. In this study we applied forecasting techniques to estimate incoming calls for a couple of call centers of a mobile telecommunication company. We also developed a simulation model to enhance the performance of the call centers. The simulation study shows a reduction in management costs and improved customer satisfaction with the call centers.

Yun Bae Kim, Heesang Lee, Hoo-Gon Choi

Requirements Analysis and a Design of Computational Environment for HSE (Human-Sensibility Ergonomics) Simulator

Human-Sensibility Ergonomics (HSE) seeks to apprehend human sensitivity features by measuring human senses and developing index tables related to psychology and physiology. One of the main purposes of HSE is developing human-centered goods, environments, and relevant technologies for an improved quality of life. To achieve this goal, a test bed or simulator is a useful tool for controlling and monitoring a physical environment at will. This paper deals with the requirements, design concepts, and specifications of the computing environment for the HSE simulator, which is part of the HSE Technology Development Program sponsored by the Korean Ministry of Science and Technology. The integrated computing system is composed of real-time and non-real-time environments. The non-real-time development environment comprises several PCs running Windows NT, with graphical user interfaces coded in Microsoft’s Visual C++. Each PC independently controls and monitors a thermal, light, audio, or video environment. The software and databases developed in the non-real-time environment are ported directly to the real-time environment through a local-area network. The real-time computing system, based on the cPCI bus, then controls the integrated HSE environment and collects the necessary information. The cPCI computing system is composed of a Pentium CPU board and dedicated I/O boards, whose quantities are determined with expandability in mind. The integrated computing environment of the HSE simulator guarantees the real-time capability, stability, and expandability of the hardware, and maximizes the portability, compatibility, and maintainability of its software.

Sugjoon Yoon, Jaechun No, Jon Ahn

AI and Simulation

Using a Clustering Genetic Algorithm to Support Customer Segmentation for Personalized Recommender Systems

This study proposes a novel clustering algorithm based on genetic algorithms (GAs) to segment the online shopping market effectively. GAs are generally believed to be effective on NP-complete global optimization problems, providing good sub-optimal solutions in reasonable time. Thus, we believe that a GA-based clustering technique provides a way of finding the relevant clusters. This paper applies GA-based K-means clustering to a real-world online shopping market segmentation case for personalized recommender systems. We compare the results of GA-based K-means with those of the traditional K-means algorithm and self-organizing maps. The results show that GA-based K-means clustering may improve segmentation performance in comparison with other typical clustering algorithms.
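The abstract does not spell out how the GA drives K-means; as a rough illustrative sketch (the chromosome encoding as centroid lists, the operators, and all parameters below are assumptions, not the authors' method), a GA can search over candidate centroid sets, using the negative within-cluster sum of squares as fitness:

```python
import random

def kmeans_assign(points, centroids):
    # Assign each point to its nearest centroid (squared Euclidean distance).
    return [min(range(len(centroids)),
                key=lambda c: sum((pi - ci) ** 2
                                  for pi, ci in zip(p, centroids[c])))
            for p in points]

def fitness(points, centroids):
    # Negative within-cluster sum of squares: higher is better.
    labels = kmeans_assign(points, centroids)
    return -sum(sum((pi - ci) ** 2 for pi, ci in zip(p, centroids[c]))
                for p, c in zip(points, labels))

def ga_kmeans(points, k, pop_size=20, generations=50, seed=0):
    rng = random.Random(seed)
    dim = len(points[0])
    # Each chromosome is a list of k centroids sampled from the data.
    pop = [[list(rng.choice(points)) for _ in range(k)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(points, c), reverse=True)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, k) if k > 1 else 1
            child = [list(c) for c in (a[:cut] + b[cut:])]  # one-point crossover
            if rng.random() < 0.3:               # mutation: jitter one coordinate
                c, d = rng.randrange(k), rng.randrange(dim)
                child[c][d] += rng.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    best = max(pop, key=lambda c: fitness(points, c))
    return best, kmeans_assign(points, best)
```

On two well-separated groups of points, the evolved centroids recover the intended segmentation, which is the behavior the study compares against plain K-means and SOMs.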

Kyoung-jae Kim, Hyunchul Ahn

System Properties of Action Theories

Logic-based theories of action and automata-based systems theories share concerns about state dynamics that are however not reflected by shared insights. As an example of how to remedy this we examine a simple variety of situation calculus theories from the viewpoint of system-theoretic properties to reveal relationships between them. These provide insights into relationships between logic-based solution policies, and are suggestive of similar relationships for more complex versions.

Norman Foo, Pavlos Peppas

Identification of Gene Interaction Networks Based on Evolutionary Computation

This paper investigates the application of a genetic algorithm and evolutionary programming to the identification of gene interaction networks from gene expression data. To this end, we employ recurrent neural networks to model gene interaction networks and use an artificial gene expression data set from the literature to validate the proposed approach. We find that the proposed approach, using the genetic algorithm and evolutionary programming, yields better parameter estimates than the previous approach. We also find that a priori knowledge, such as zero relations between genes, can further help the identification process whenever it is available.
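As a hedged illustration of the kind of recurrent-neural-network gene model the identification targets (the discrete-time sigmoid form and the example weights below are assumptions, not the authors' exact formulation), each gene's expression level at the next step is a squashed weighted sum of all current levels; the GA/EP would then search for the weight matrix that best reproduces observed trajectories:

```python
import math

def simulate_grn(weights, bias, x0, steps):
    """Discrete-time RNN model of a gene network:
    x_i(t+1) = sigmoid(sum_j weights[i][j] * x_j(t) + bias[i]),
    where w[i][j] > 0 means gene j activates gene i, w[i][j] < 0 inhibition."""
    x = list(x0)
    trajectory = [list(x)]
    for _ in range(steps):
        x = [1.0 / (1.0 + math.exp(-(sum(w * xj for w, xj in zip(row, x)) + b)))
             for row, b in zip(weights, bias)]
        trajectory.append(list(x))
    return trajectory

def fit_error(weights, bias, observed):
    # Sum of squared errors between simulation and data: the quantity a
    # GA or EP individual would be scored on during identification.
    sim = simulate_grn(weights, bias, observed[0], len(observed) - 1)
    return sum((s - o) ** 2 for st, ot in zip(sim, observed)
               for s, o in zip(st, ot))
```

A "zero relation" prior simply pins selected `weights[i][j]` to 0, shrinking the search space, which matches the paper's observation that such knowledge helps.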

Sung Hoon Jung, Kwang-Hyun Cho

Component-Based Modeling

Modeling Software Component Criticality Using a Machine Learning Approach

During software development, early identification of critical components is of much practical significance, since it facilitates timely allocation of adequate resources to these components and thus enhances the quality of the delivered system. The purpose of this paper is to develop a classification model for evaluating the criticality of software components based on their software characteristics. In particular, we employ the radial basis function machine learning approach for model development, where our new algebraic algorithm is used to determine the model parameters. For experiments, we used the USA-NASA metrics database, which contains information about measurable features of software systems at the component level. Using our principled modeling methodology, we obtained parsimonious classification models with impressive performance that involve only design metrics available at an early stage of software development. Further, the classification modeling approach is non-iterative, avoiding the usual trial-and-error model development process.

Miyoung Shin, Amrit L. Goel

Component Architecture Redesigning Approach Using Component Metrics

Components are reusable software building blocks that can be quickly and easily assembled into new systems. Many people consider reuse the primary objective of components, and the best reuse is reuse of the design rather than the implementation. It is therefore necessary to study component metrics that can be applied at the component analysis and design stage. In this paper, we propose a component architecture redesigning approach using component metrics. The proposed metrics reflect the keynotes of component technology and are based on similarity information about the behavior patterns of the operations that provide a component’s service. The redesigning approach uses the clustering principle to make each component design an independent functional unit with high reusability and cohesion, and low complexity and coupling.

Byungsun Ko, Jainyun Park

A Workflow Variability Design Technique for Dynamic Component Integration

Software development by component integration is the mainstream approach for time-to-market, and is the solution for overcoming the short lifecycle of software. Effective techniques for component integration have therefore been studied, but no systematic and practical technique has yet been proposed. The main issues for component integration are a specification for integration and a component architecture for operating the specification. In this paper, we propose a workflow variability design technique for component integration. The technique focuses on designing a connection contract based on the component architecture. The connection contract is designed to use the provided interface of a component, and the architecture can assemble and customize components dynamically through the connection contract.

Chul Jin Kim, Eun Sook Cho

Watermarking, Semantic

Measuring Semantic Similarity Based on Weighting Attributes of Edge Counting

Semantic similarity measurement can be applied in many different fields, and there is a variety of ways to measure it. As a foundational work on semantic similarity, we explore the edge counting method for measuring semantic similarity, considering the weighting attributes that affect an edge’s strength. We consider the attributes of depth scaling and semantic relation type extensively. Further, we show how the existing edge counting method can be improved by considering virtual connections. Finally, we compare the performance of the proposed method against a benchmark set of human similarity judgments. The results of the proposed measure are encouraging compared with other combined approaches.
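Depth-scaled edge counting can be illustrated with a small sketch (the toy taxonomy, the constants, and the exponential/tanh combination below are assumptions in the spirit of such measures, not the authors' exact formula): similarity decays with the path length between two concepts and grows with the depth of their lowest common subsumer, so that edges deep in the taxonomy count as stronger:

```python
import math

def ancestors(node, parent):
    # Chain from a node up to the taxonomy root; `parent` maps child -> parent.
    chain = [node]
    while node in parent:
        node = parent[node]
        chain.append(node)
    return chain

def path_and_depth(a, b, parent):
    # Shortest path length between a and b through their lowest common
    # subsumer, plus that subsumer's depth below the root.
    aa, bb = ancestors(a, parent), ancestors(b, parent)
    up_a = {n: i for i, n in enumerate(aa)}
    for j, n in enumerate(bb):
        if n in up_a:
            return up_a[n] + j, len(ancestors(n, parent)) - 1
    raise ValueError("no common subsumer")

def similarity(a, b, parent, alpha=0.2, beta=0.6):
    # Path length l shrinks similarity; subsumer depth h scales it up.
    l, h = path_and_depth(a, b, parent)
    return math.exp(-alpha * l) * math.tanh(beta * h)
```

With `parent = {"mammal": "animal", "bird": "animal", "dog": "mammal", "cat": "mammal"}`, `dog` is more similar to `cat` (nearby, deep subsumer) than to `bird` (longer path, shallow subsumer), matching the intuition edge counting is meant to capture.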

JuHum Kwon, Chang-Joo Moon, Soo-Hyun Park, Doo-Kwon Baik

3D Watermarking Shape Recognition System Using Normal Vector Distribution Modelling

We developed a shape recognition system with 3D watermarking based on normal vector distribution. The 3D shape recognition system consists of a laser beam generator, a linear CCD imaging system, and digital signal processing hardware and software. The 3D watermark is embedded in the 3D mesh model using the EGI distribution of each patch. The proposed algorithm divides a 3D mesh model into four patches to achieve robustness against partial geometric deformation, and uses the EGI distributions as a consistent factor robust against topological deformation. To withstand both geometric and topological deformation, the same watermark bits are embedded in each subdivided patch by changing the mesh normal vectors. Moreover, the proposed algorithm needs neither the original mesh model nor a resampling process to extract the watermark. Experimental results verify that the proposed algorithm is imperceptible and robust against geometric and topological attacks.

Ki-Ryong Kwon, Seong-Geun Kwon, Suk-Hwan Lee

DWT-Based Image Watermarking for Copyright Protection

A discrete wavelet transform (DWT)-based image watermarking algorithm is proposed in which the original image is not required for watermark extraction. Watermarks are embedded into the DC area while preserving good fidelity, which is achieved by inserting the watermark into subimages obtained through subsampling. Experimental results demonstrate that the proposed watermarking is robust to various attacks.

Ho Seok Moon, Myung Ho Sohn, Dong Sik Jang

Cropping, Rotation and Scaling Invariant LBX Interleaved Voice-in-Image Watermarking

The rapid development of digital media and communication networks has urgently highlighted the need for data certification technology to protect IPR (Intellectual Property Rights). This paper proposes a new watermarking method that embeds the owner’s voice signal using our LBX (Linear Bit-eXpansion) interleaving. Using a voice signal as the watermark makes the method quite useful for claiming ownership, and LBX interleaving makes it possible to restore a voice signal that has been modified or partially removed by image-removal attacks. The three basic stages of this watermarking are: 1) encode the analogue owner’s voice signal by PCM to create a new digital voice watermark; 2) interleave the voice watermark by LBX; and 3) embed the interleaved voice watermark in the low frequency band of the DHWT (Discrete Haar Wavelet Transform) of the blue and red channels of the color image. The resulting model can therefore maximize robustness against attacks that remove part of the image, such as cropping, rotation, and scaling, because the de-interleaver can correct the modified watermark information.

Sung Shik Koh, Chung Hwa Kim

Parallel and Distributed Modeling and Simulation II

Data Aggregation for Wireless Sensor Networks Using Self-organizing Map

Sensor networks have recently emerged as a ubiquitous computing platform. However, the energy constraints and limited computing resources of the sensor nodes present major challenges in gathering data. In this work, we propose a self-organizing method for aggregating data in ad-hoc wireless sensor networks. We present a new network architecture, CODA (Cluster-based self-Organizing Data Aggregation), based on the Kohonen Self-Organizing Map, to aggregate sensor data within a cluster. Before deploying the network, we train the nodes to classify the sensor data. This increases the quality of the data and reduces data traffic, thereby conserving energy. Our simulation results show that CODA achieves higher data accuracy than traditional database-system aggregation. Finally, we present a real-world platform, TIP, on which we will implement the idea.
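The abstract leaves the CODA internals out; as a minimal, hypothetical sketch of the underlying classify-then-report idea (the 1-D map, scalar readings, unit count, and learning schedules are all assumptions, not CODA itself), a small Kohonen SOM can be trained offline so that each deployed node transmits only a compact class index instead of its raw reading:

```python
import random

def train_som(data, n_units=3, epochs=30, seed=1):
    rng = random.Random(seed)
    # 1-D map of scalar code vectors, initialized evenly over the data range.
    lo, hi = min(data), max(data)
    w = [lo + (hi - lo) * i / (n_units - 1) for i in range(n_units)]
    for epoch in range(epochs):
        lr = 0.3 * (1.0 - epoch / epochs)            # decaying learning rate
        radius = 1 if epoch < epochs // 3 else 0     # shrinking neighbourhood
        for x in rng.sample(data, len(data)):        # shuffled presentation
            bmu = min(range(n_units), key=lambda i: abs(x - w[i]))
            for i in range(n_units):
                if abs(i - bmu) <= radius:           # update BMU and neighbours
                    w[i] += lr * (x - w[i])
    return w

def classify(x, w):
    # A node reports only this small unit index instead of the raw reading.
    return min(range(len(w)), key=lambda i: abs(x - w[i]))
```

Reporting one small index per reading (or one aggregate index per cluster head) is what reduces in-network traffic and hence energy in the scheme the abstract describes.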

SangHak Lee, TaeChoong Chung

Feasibility and Performance Study of a Shared Disks Cluster for Real-Time Processing

A great deal of research indicates that the shared disks (SD) cluster is well suited to high performance transaction processing, but the combination of the SD cluster with real-time processing has not been investigated. By adopting cluster technology, real-time services become highly available and can exploit inter-node parallelism. In this paper, we investigate the feasibility of real-time processing in the SD cluster. Specifically, we evaluate the cross effect of real-time transaction processing algorithms and SD cluster algorithms with a simulation model of an SD-based real-time database system (SD-RTDBS).

Sangho Lee, Kyungoh Ohn, Haengrae Cho

A Web Cluster Simulator for Performance Analysis of the ALBM Cluster System

A distributed server cluster system is a cost-effective solution for providing scalable and reliable Internet services. In order to achieve high quality of service, it is necessary to tune the system by varying the configurable parameters and employed algorithms that significantly affect system performance. For this purpose, we develop a simulator for performance analysis of a traffic-distribution cluster system called the ALBM (Adaptive Load Balancing and Management) cluster. In this paper, we introduce the architecture of the proposed simulator. Major design consideration is given to flexible structures that can easily be expanded to add new features, such as new workloads and scheduling algorithms. With this simulator, we perform two simple cases of performance analysis: one to find appropriate overload and underload thresholds, and the other to find a suitable scheduling algorithm for a given workload.

Eunmi Choi, Dugki Min

Dynamic Load Balancing Scheme Based on Resource Reservation for Migration of Agent in the Pure P2P Network Environment

Mobile agents are processes which can be autonomously delegated or transferred among the hosts in a network in order to perform computations on behalf of the user and cooperate with other agents. Mobile agents are currently used in various fields, such as electronic commerce, mobile communication, parallel processing, information search, and recovery. In a pure P2P network environment, if mobile agents that require computing resources rashly migrate to other peers without considering the peers’ resource capacities, performance may be degraded due to a lack of resources. To solve this problem, we propose a resource-reservation-based load balancing scheme using an RMA (Resource Management Agent), which monitors the workload information of the peers and decides which agents to migrate and to which destination peers. During the mobile agent migration procedure, if the resource of a specific peer is already reserved, our resource reservation scheme prevents other mobile agents from allocating that resource.

Gu Su Kim, Kyoung-in Kim, Young Ik Eom

Visualization, Graphics and Animation I

Application of Feedforward Neural Network for the Deblocking of Low Bit Rate Coded Images

In this paper, we propose a novel post-filtering algorithm to reduce the blocking artifacts in block-based coded images using block classification and a feedforward neural network. The algorithm exploits the nonlinearity of the neural network learning algorithm to reduce the blocking artifacts more accurately. First, each block is classified into one of four classes (smooth, horizontal edge, vertical edge, and complex) based on the characteristics of its discrete cosine transform (DCT) coefficients. Then, according to the class information of the neighboring block, an adaptive feedforward neural network is applied to the horizontal and vertical block boundaries; that is, for each class a different multi-layer perceptron (MLP) is used to remove the blocking artifacts. Experimental results show that the proposed algorithm produces better results than conventional algorithms from both subjective and objective viewpoints.
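The classification step can be illustrated concretely (the energy-threshold rule and the threshold value below are assumptions; the paper's actual criterion on the DCT coefficients may differ): the AC coefficients of the first row of an 8x8 DCT respond to horizontal frequency content (vertical edges) and those of the first column to vertical frequency content (horizontal edges), so comparing the two energies against a threshold separates the four classes:

```python
import math

def dct2(block):
    # Naive orthonormal 2-D DCT-II of an NxN block.
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[c(u) * c(v) * sum(block[x][y]
                 * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                 * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                 for x in range(n) for y in range(n))
             for v in range(n)] for u in range(n)]

def classify_block(block, t=1.0):
    d = dct2(block)
    n = len(block)
    row = sum(abs(d[0][v]) for v in range(1, n))  # horizontal-frequency energy
    col = sum(abs(d[u][0]) for u in range(1, n))  # vertical-frequency energy
    if row < t and col < t:
        return "smooth"
    if row >= t and col < t:
        return "vertical"    # intensity varies across columns
    if col >= t and row < t:
        return "horizontal"  # intensity varies across rows
    return "complex"
```

The class label then selects which of the four trained MLPs filters the block boundary, per the scheme the abstract describes.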

Kee-Koo Kwon, Man-Seok Yang, Jin-Suk Ma, Sung-Ho Im, Dong-Sun Lim

A Dynamic Bandwidth Allocation Algorithm with Supporting QoS for EPON

An Ethernet PON (Passive Optical Network) is an economical and efficient access network that has received significant research attention in recent years. The MAC (Media Access Control) protocol of the PON, the next-generation access network, is based primarily on TDMA (Time Division Multiple Access). In this paper, we address the problem of dynamic bandwidth allocation in QoS-based Ethernet PONs. We augment the bandwidth allocation algorithms to support QoS in a differentiated services framework. Our differentiated bandwidth guarantee allocation (DBGA) algorithm allocates bandwidth effectively and fairly among end-users. Moreover, we show that performing weighted bandwidth allocation for high-priority packets results in better performance in terms of average and maximum packet delay, as well as network throughput, compared with some other dynamic allocation algorithms. We use simulation to study the performance and validate the effectiveness of the proposed algorithm.
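The abstract describes the weighted, differentiated allocation only at a high level; one plausible grant computation can be sketched as follows (the class names `"ef"`/`"be"`, integer slot units, and the priority-ordered policy are assumptions, not the DBGA specification): the OLT serves traffic classes in descending weight order, and within a class splits the remaining cycle bandwidth proportionally to each ONU's reported queue:

```python
def dbga_grant(requests, weights, cycle_bw):
    """requests: per-ONU dict of traffic class -> requested slots.
    weights: per-class weight (higher priority -> larger weight).
    cycle_bw: total slots available in one polling cycle."""
    grants = [dict.fromkeys(req, 0) for req in requests]
    remaining = cycle_bw
    # Serve classes in priority order so high-priority demand is met first.
    for c in sorted(weights, key=weights.get, reverse=True):
        demand = sum(req.get(c, 0) for req in requests)
        if demand == 0:
            continue
        available = min(demand, remaining)
        for g, req in zip(grants, requests):
            if c in req:
                # Proportional share of this class's budget, in whole slots.
                g[c] = req[c] * available // demand
        remaining -= sum(g.get(c, 0) for g in grants)
    return grants
```

Here the high-priority class is granted in full whenever the cycle allows, while best-effort queues absorb the shortfall, which is the delay/throughput trade-off the simulation study quantifies.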

Min-Suk Jung, Jong-hoon Eom, Sang-Ryul Ryu, Sung-Ho Kim

A Layered Scripting Language Technique for Avatar Behavior Representation and Control

The paper proposes a layered scripting language technique for the representation and control of avatar behavior, enabling simpler avatar control in various domain environments. We suggest a three-layer architecture consisting of task-level behavior, high-level motion, and primitive motion scripting languages. These layers bridge the gap between the application domain and the implementation environment, so that an end user can control the avatar through an easy, simple task-level scripting language without dealing with low-level animation, and the same script can be applied to various implementations regardless of application domain type. Our goal is to support flexible and extensible representation and control of avatar behavior through a layered approach that separates application domains from implementation tools.

Jae-Kyung Kim, Won-Sung Sohn, Beom-Joon Cho, Soon-Bum Lim, Yoon-Chul Choy

An Integrated Environment Blending Dynamic and Geometry Models

Modeling techniques tend to be found in isolated communities: geometry models in CAD and Computer Graphics and dynamic models in Computer Simulation. When models are included within the same digital environment, the ways of connecting them together seamlessly and visually are not well known even though elements from each model have many commonalities. We attempt to address this deficiency by studying specific ways in which models can be interconnected within the same 3D space through effective ontology construction and human interaction techniques. Our work to date has resulted in an environment built using the open source 3D Blender package, where we include a Python-based interface, as well as scene, geometry, and dynamics models.

Minho Park, Paul Fishwick

Computer and Network Security II

Linux-Based System Modelling for Cyber-attack Simulation

The major objective of this paper is to describe the modeling of a Linux-based system for the simulation of cyber attacks. To do this, we have analyzed the Linux system from a security viewpoint and propose a Linux-based system model using the DEVS modeling and simulation environment. Unlike conventional research, we are able to i) reproduce the detailed behavior of a cyber-attack, ii) analyze the concrete changes in system resources caused by a cyber-attack, and iii) expect this work to be a cornerstone for more practical applications of security simulation (generating cyber-attack scenarios, analyzing vulnerabilities, examining countermeasures, etc.). Several simulation tests performed on a sample network system illustrate our techniques.

Jang-Se Lee, Jung-Rae Jung, Jong-Sou Park, Sung-Do Chi

A Rule Based Approach to Network Fault and Security Diagnosis with Agent Collaboration

This paper introduces a rule-based reasoning (RBR) expert system for network fault and security diagnosis, together with a mechanism for its optimization. The system uses an agent collaboration mechanism, in which it gathers network environment data from distributed agents. The collaboration mechanism enables accurate diagnosis by making inferences based on the results of various tests and measurements. We additionally consider optimization of the system: in comprehensive systems such as network fault diagnosis systems, or systems reasoning over decisive loops, reasoning time deserves much more study. We therefore consider reasoning time and rule probability to optimize the system. For this purpose, a rule reasoning time estimation algorithm is used, together with a simulation in which a comparison with a previous rule-based fault diagnosis system is possible.

Siheung Kim, Seong jin Ahn, Jinwok Chung, Ilsung Hwang, Sunghe Kim, Minki No, Seungchung Sin

Transient Time Analysis of Network Security Survivability Using DEVS

We propose the use of the discrete event system specification (DEVS) in the transient-time analysis of network security survivability. By carrying out the analysis with a time element, it is possible to predict the exact behaviors of an attacked system with respect to time. In this work, we create a conceptual model with DEVS and simulate it based on time and state-variable changes as events occur. Our purpose is to illustrate the features of unknown attacks, in which a variety of discrete events involving various attacks occur at diverse times. Response methods can then be performed more precisely, which is one way to increase the survivability level of the target environment. The approach can also be used to predict the survivability attributes of complex systems while they are under development, preventing costly vulnerabilities before the system is built.

Jong Sou Park, Khin Mi Mi Aung

A Harmful Content Protection in Peer-to-Peer Networks

A variety of activities, such as commerce, education, and voting, take place in cyberspace these days. As the use of the Internet gradually increases, many troubling side effects have appeared, and it is time to address them. In this paper, we propose a method for blocking adult video and text in peer-to-peer networks, and we evaluate text categorization algorithms for peer-to-peer networks. We also propose a new image categorization algorithm for detecting adult images and compare its performance with existing commercial software.

Taekyong Nam, Ho Gyun Lee, Chi Yoon Jeong, Chimoon Han

Business Modeling

Security Agent Model Using Interactive Authentication Database

An authentication agent enables an authorized user to gain authority in the Internet or in distributed computing systems. Distinguishing authorized clients from unauthorized ones is one of the most important problems facing application server systems: to protect the resources of web servers and other computer systems, client authentication must be performed in the Internet or in distributed client-server systems. Generally, a user gains authority with a user ID and password, but relying on a password alone is not always secure, given the variety of attacks mounted by adversaries. In this paper, we propose an authentication agent system model that uses an interactive authentication database. The proposed agent system provides secure client authentication by adding an interactive authentication process to current systems based on user ID and password. Before a user requests a transaction from a distributed application server, the user must send an authentication key acquired from the authentication database; the agent system requests this key from the user in order to identify authorized users. The system thus provides a high level of security through two passwords: the user's own password and the authentication key. The second password is obtained from the authentication database on every request transaction, without user input, because it is stored on the client side when the user first gains authority. For more secure authentication, the agent system can modify the authentication database. Using the interactive database, the proposed agent system can detect intrusion during an unauthorized client's transaction: when attackers strike the network or computer systems, the stored authentication password reveals the intrusion immediately.

Jae-Woo Lee

Discrete-Event Semantics for Tools for Business Process Modeling in Web-Service Era

A business process is itself artificial, and is a discrete-event system. This paper uses the business transaction system, a multicomponent DEVS proposed by Sato and Praehofer, as a device for designing business processes in which web-service-like software components are part of the processes.

The static model of a business transaction system, called the activity interaction diagram (AID, for short), plays a fundamental role in developing a unified framework for the Unified Modeling Language (UML), the architecture of integrated information systems (ARIS) with its event-driven process chains (EPC), and the data flow diagram (DFD). The framework provides us with both the meaning of these tools and the possibility of automatically transforming models described in any of them. Furthermore, it serves as a guideline for a modeling method that yields validated business processes.

Ryo Sato

An Architecture Modelling of a Workflow Management System

This paper presents the design and implementation of a workflow management system in which a work process can be determined at run time. Our workflow management system consists of a build-time part and a run-time part. The build-time part employs a process definition model that can support various types of work processes. A process definition is described in XML and converted to objects at run time. To extend the functionality of activities, such as condition checking and invoking applications, we allow real Java code to be inserted into the XML process definition. The core of the run-time part is a workflow engine that schedules and executes the tasks of a work process according to a given process definition. Activities and transitions are designed as objects with various execution modes, including simple manual and automatic modes.
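The build-time step, an XML process definition converted to objects at run time, can be sketched as follows. The XML schema, attribute names, and classes below are invented for illustration and are not the system's actual format.

```python
# Sketch: parse a (hypothetical) XML process definition into activity and
# transition objects, as a workflow engine's build-time part might do.

import xml.etree.ElementTree as ET

PROCESS_XML = """
<process name="order-handling">
  <activity id="a1" mode="manual"    name="receive-order"/>
  <activity id="a2" mode="automatic" name="check-stock"/>
  <transition from="a1" to="a2" condition="order.valid"/>
</process>
"""

class Activity:
    def __init__(self, act_id, name, mode):
        self.id, self.name, self.mode = act_id, name, mode

class Transition:
    def __init__(self, src, dst, condition):
        self.src, self.dst, self.condition = src, dst, condition

def load_process(xml_text):
    root = ET.fromstring(xml_text)
    acts = {a.get("id"): Activity(a.get("id"), a.get("name"), a.get("mode"))
            for a in root.findall("activity")}
    trans = [Transition(t.get("from"), t.get("to"), t.get("condition"))
             for t in root.findall("transition")]
    return acts, trans

activities, transitions = load_process(PROCESS_XML)
print(activities["a2"].mode)                       # automatic
print(transitions[0].src, "->", transitions[0].dst)  # a1 -> a2
```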

Dugki Min, Eunmi Choi

Client Authentication Model Using Duplicated Authentication Server Systems

In the Internet and in distributed systems, we constantly access application servers to obtain information, carry out electronic business processing, and so on. Despite these advantages of information technology, there are also many security problems: unauthorized users attack networks and computer systems to steal information or destroy resources. In this paper, we propose a client authentication model that uses two authentication server systems, i.e., duplicated authentication. Before a client requests information processing from an application web server, the user acquires a session password from the two authentication servers. The proposed model achieves a high level of security through two authentication procedures: the user's password and the authentication password. The second password, issued by the two authentication servers, is used in every request transaction without user input because it is stored in the client's disk cache when a session is first opened. For more secure authentication, the session between client and server can be closed if no request transaction is created during a given time interval; the user must then acquire the authentication password again by logging on to the authentication servers before requesting information processing. This duplicated password system protects systems during a user's transactions, and the two client authentication passwords allow intrusion to be detected during an authorized client's transaction: when attackers strike the network or computer systems, the stored client authentication password reveals the intrusion immediately.
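The cached session password with an idle timeout can be sketched as below. The timeout value, class, and method names are assumptions for illustration, not the paper's implementation.

```python
# Sketch: the client caches the second password obtained from the two
# authentication servers and reuses it per request; an idle session expires
# and forces a fresh logon. Timeout and names are invented.

SESSION_TIMEOUT = 300.0   # assumed idle limit, in seconds

class SessionCache:
    def __init__(self, auth_password, now):
        self.auth_password = auth_password
        self.last_used = now

    def password_for_request(self, now):
        """Return the cached authentication password, or None if the
        session has been idle longer than SESSION_TIMEOUT."""
        if now - self.last_used > SESSION_TIMEOUT:
            return None           # client must log on to both servers again
        self.last_used = now
        return self.auth_password

cache = SessionCache("k3y-from-two-servers", now=0.0)
print(cache.password_for_request(now=100.0))   # cached key is reused
print(cache.password_for_request(now=500.0))   # 400 s idle -> expired, None
```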

Jae-Woo Lee

Visualization, Graphics and Animation II

Dynamic Visualization of Signal Transduction Pathways from Database Information

The automatic generation of signal transduction pathways is challenging because it often yields complicated, non-planar diagrams with a large number of intersections. Most signal transduction pathways available in public databases are static images and thus cannot be refined or changed to reflect updated data. We have developed an algorithm for visualizing signal transduction pathways dynamically as three-dimensional layered digraphs. Experimental results show that the algorithm generates clear and aesthetically pleasing representations of large-scale signal-transduction pathways.
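One standard ingredient of layered-digraph drawing, assigning each node to a layer by its longest-path distance from a source, can be sketched as follows. The pathway nodes and edges below are invented, and this is not necessarily the authors' exact algorithm.

```python
# Sketch: longest-path layering of an acyclic signal-transduction graph,
# computed in topological order. Pathway data is invented for illustration.

from collections import defaultdict

edges = [("receptor", "kinase1"), ("receptor", "kinase2"),
         ("kinase1", "tf"), ("kinase2", "tf"), ("tf", "gene")]

def assign_layers(edges):
    succ = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
        nodes.update((u, v))
    layer = {n: 0 for n in nodes}
    queue = [n for n in nodes if indeg[n] == 0]   # sources start at layer 0
    while queue:
        u = queue.pop()
        for v in succ[u]:
            layer[v] = max(layer[v], layer[u] + 1)  # longest path so far
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return layer

print(assign_layers(edges))
```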

Donghoon Lee, Byoung-Hyun Ju, Kyungsook Han

Integrated Term Weighting, Visualization, and User Interface Development for Bioinformation Retrieval

This project implements an integrated biological information website that classifies technical documents, learns about users' interests, and offers intuitive interactive visualization for navigating vast information spaces. The effective use of modern software engineering principles, system environments, and development approaches is demonstrated: straightforward yet powerful document characterization strategies are illustrated, helpful visualization for effective knowledge transfer is shown, and current user interface methodologies are applied. A specific success of note is the collaboration of disparately skilled specialists to rapidly deliver a flexible integrated prototype that meets user acceptance and performance goals. The domain chosen for the demonstration is breast cancer, using a corpus of abstracts from publications obtained online from Medline. The terms in the abstracts are extracted by word stemming and a stop list, and are encoded in vectors. A TF-IDF technique is implemented to calculate similarity scores between a set of documents and a query; polysemy and synonyms are explicitly addressed. Groups of related and useful documents are identified using interactive visual displays, such as a spiral graph that represents the overall similarity of documents. K-means clustering of the similarities among a document set is used to display a 3-D relationship map. User identities are established and updated by observing the patterns of terms used in queries and from login site locations. Explicit consideration of changing user category profiles, site stakeholders, information modeling, and networked technologies is pointed out.
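The TF-IDF scoring step can be sketched on a toy corpus. The documents and query below are invented, and a real system would also apply the stemming and stop list described above.

```python
# Sketch: encode documents and a query as TF-IDF term vectors and rank the
# documents by cosine similarity to the query. Corpus data is invented.

import math
from collections import Counter

docs = [
    "breast cancer gene expression study",
    "gene mutation analysis in tumors",
    "user interface design for retrieval",
]
query = "breast cancer gene"

def tfidf_vectors(texts):
    tokenized = [t.split() for t in texts]
    n = len(tokenized)
    # document frequency of each term (query counted as a document here)
    df = Counter(term for toks in tokenized for term in set(toks))
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = tfidf_vectors(docs + [query])
doc_vecs, q_vec = vecs[:-1], vecs[-1]
scores = [cosine(q_vec, d) for d in doc_vecs]
print(scores.index(max(scores)))   # 0: the breast-cancer abstract wins
```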

Min Hong, Anis Karimpour-Fard, Steve Russell, Lawrence Hunter

CONDOCS: A Concept-Based Document Categorization System Using Concept-Probability Vector with Thesaurus

Traditional approaches to document categorization use term-based classification techniques. Because of the enormous number of terms, these techniques are not effective for applications that need fast response or have little space. This paper presents an effective concept-based document categorization system that can efficiently classify Korean documents through a thesaurus tool. The thesaurus tool is an information extractor that acquires the meanings of document terms from the thesaurus; it supports effective document categorization with the acquired meanings. The system uses a concept-probability vector to represent the meanings of the terms. Because the category of a document depends more on meanings than on terms, the system can classify documents without performance degradation even though the vector is small. Using the small concept-probability vector saves both time and space during categorization. The experimental results suggest that the presented system with the thesaurus tool classifies documents effectively, and show that performance is not degraded even though the system uses the contracted vector.
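The concept-probability idea can be sketched as follows. The thesaurus entries, concepts, and category profiles below are invented for illustration, not the system's actual resources.

```python
# Sketch: map terms to thesaurus concepts, represent a document by the
# probability mass each concept receives (a much smaller vector than a
# term vector), and classify by similarity to category concept profiles.

from collections import Counter

THESAURUS = {            # term -> concept (invented thesaurus)
    "goal": "sports", "match": "sports", "team": "sports",
    "stock": "finance", "market": "finance", "bank": "finance",
}

def concept_vector(terms):
    concepts = [THESAURUS[t] for t in terms if t in THESAURUS]
    counts = Counter(concepts)
    total = sum(counts.values()) or 1
    return {c: n / total for c, n in counts.items()}

CATEGORIES = {           # invented category concept profiles
    "sports_news":  {"sports": 1.0},
    "finance_news": {"finance": 1.0},
}

def classify(terms):
    v = concept_vector(terms)
    # dot-product similarity against each category's concept profile
    return max(CATEGORIES,
               key=lambda c: sum(v.get(k, 0.0) * w
                                 for k, w in CATEGORIES[c].items()))

doc = ["team", "match", "stock", "goal"]
print(concept_vector(doc))   # {'sports': 0.75, 'finance': 0.25}
print(classify(doc))         # sports_news
```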

Hyun-Kyu Kang, Jeong-Oog Lee, Heung Seok Jeon, Myeong-Cheol Ko, Doo Hyun Kim, Ryum-Duck Oh, Wonseog Kang

DEVS Modeling and Simulation

Using DEVS for Modeling and Simulation of Human Behaviour

Throughout the last decade, research in the cognitive sciences has shown that emotions play a prominent role in human behaviour.

The military is increasingly interested in modelling human behaviour for simulation and training purposes.

The aim of our work is to take models from physiology and psychology and give them an operational semantics so that human behaviour can be simulated. We propose a DEVS model of stress states as well as a model of physical tiredness. These models interact with the behavioural model within an architecture that we also present.

Mamadou Seck, Claudia Frydman, Norbert Giambiasi

Simulation Semantics for Min-Max DEVS Models

The representation of timing, a key element in modeling hardware behavior, is realized in hardware description languages, including ADLIB-SABLE, Verilog, and VHDL, through delay constructs. In the real world, precise values for delays are very difficult, if not impossible, to obtain with certainty; the reasons include variations in the manufacturing process, temperature, voltage, and other environmental parameters. Consequently, simulations that employ precise delay values are susceptible to inaccurate results. This paper proposes an extension to classical DEVS by introducing min-max delays. In the augmented formalism, termed Min-Max DEVS, the state of a hardware model may, in some time interval, become unknown and is represented by a special symbol. The occurrence of this symbol implies greater accuracy of the results, not lack of information. Min-Max DEVS offers a unique advantage: the execution of a single simulation pass utilizing min-max delays is equivalent to multiple simulation passes, each corresponding to a set of precise delay values selected from the interval. This, in turn, poses a key challenge: the efficient execution of the Min-Max DEVS simulator.
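The min-max delay idea can be illustrated with a small sketch (not the paper's formalism): during the uncertainty window between the minimum and maximum delay, the output value is reported as unknown. Here "phi" stands in for the special symbol; the function and value names are invented.

```python
# Sketch: a gate whose delay lies somewhere in [min_delay, max_delay].
# After an input change, the output is unknown during the uncertainty
# window and settles only once the maximum delay has elapsed.

def output_intervals(change_time, min_delay, max_delay):
    """Given an input change at change_time, return (start, end, value)
    intervals for the output of a gate with an uncertain delay."""
    return [
        (change_time + min_delay, change_time + max_delay, "phi"),  # unknown
        (change_time + max_delay, float("inf"), "new_value"),       # settled
    ]

# Input toggles at t=10; the gate delay lies somewhere in [2, 5], so the
# output is unknown on [12, 15) and settled from t=15 onward.
for start, end, value in output_intervals(10.0, 2.0, 5.0):
    print(start, end, value)
```

A single pass over such intervals covers every precise delay in [2, 5] at once, which is the equivalence the abstract claims for Min-Max DEVS.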

Maâmar El-Amine Hamri, Norbert Giambiasi, Claudia Frydman

