
About this book

This five-volume set was compiled following the 2006 International Conference on Computational Science and its Applications, ICCSA 2006, held in Glasgow, UK, during May 8–11, 2006. It represents the outstanding collection of almost 664 refereed papers selected from over 2,450 submissions to ICCSA 2006. Computational science has firmly established itself as a vital part of many scientific investigations, affecting researchers and practitioners in areas ranging from applications such as aerospace and automotive, to emerging technologies such as bioinformatics and nanotechnologies, to core disciplines such as mathematics, physics, and chemistry. Due to the sheer size of many challenges in computational science, the use of supercomputing, parallel processing, and sophisticated algorithms is inevitable and becomes a part of fundamental theoretical research as well as endeavors in emerging fields. Together, these far-reaching scientific areas contributed to shaping this conference in the realms of state-of-the-art computational science research and applications, encompassing the facilitating theoretical foundations and the innovative applications of such results in other areas.

Table of contents

Frontmatter

Workshop on Approaches or Methods of Security Engineering (AMSE 2006, Sess. A)

A Security Requirement Management Database Based on ISO/IEC 15408

As information systems grow in scale and diversity, their security requirements become more and more complicated. It is desirable to apply database technologies to information security engineering in order to manage security requirements during the design and development of such systems. This paper proposes a security requirement management database based on the international standard ISO/IEC 15408, which defines the security functional requirements that should be satisfied by various information systems. The database can aid the design and development of information systems that require high security by making it easy to refer to the relevant security requirement data.

Shoichi Morimoto, Daisuke Horie, Jingde Cheng

Development of Committee Neural Network for Computer Access Security System

A computer access security system, a reliable way of preventing unauthorized people from accessing, changing, deleting, or stealing information, needs to be developed and implemented. In the present study, a neural-network-based system is proposed for computer access security, addressing both preventive security and detection of violations. Two types of data, time intervals between successive keystrokes during password entry through a keyboard and voice patterns spoken via a microphone, are considered to deal with a situation of multiple users, where each user has a certain password of different length. For each type of data, several multi-layered neural networks are designed and evaluated in terms of recognition accuracy. A committee neural network is formed consisting of six multi-layered neural networks. The committee decision is based on majority voting of the member networks. The committee neural network performed better than the neural networks trained separately.
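
The committee decision described here is plain majority voting over the member networks' outputs. Below is a minimal sketch, assuming each member network has already produced a class label; the six-network setup and user labels are illustrative, not the authors' implementation:

```python
from collections import Counter

def committee_predict(member_outputs):
    """Majority vote over the labels predicted by the member networks.

    member_outputs: one predicted label per member network.
    Returns the label chosen by the most members; a tie resolves to the
    first label counted, which a real system might instead reject.
    """
    label, _count = Counter(member_outputs).most_common(1)[0]
    return label

# Hypothetical example: six member networks vote on which user typed a password.
print(committee_predict(["user3", "user3", "user1", "user3", "user2", "user3"]))
# -> user3
```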

A. Sermet Anagun

C-TOBI-Based Pitch Accent Prediction Using Maximum-Entropy Model

We model Chinese pitch accent prediction as a classification problem with six C-ToBI pitch accent types, and apply conditional Maximum Entropy (ME) classification to this problem. We acquire multiple levels of linguistic knowledge from natural language processing to build well-integrated features for the ME framework. Five kinds of features were used to represent various linguistic constraints: phonetic features, POS tag features, phrase break features, position features, and length features.
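
As a sketch of conditional ME classification over heterogeneous features of this kind, the snippet below uses scikit-learn's multinomial logistic regression, the standard maximum-entropy formulation. The feature names, values, and accent labels are invented for illustration and are not the paper's actual feature set:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training samples mixing phonetic, POS, break, position,
# and length features, each mapped to a stand-in pitch accent label.
train_X = [
    {"phone": "a", "pos": "NN", "break_after": "B2", "pos_in_phrase": 1, "len": 2},
    {"phone": "i", "pos": "VV", "break_after": "B0", "pos_in_phrase": 3, "len": 1},
    {"phone": "u", "pos": "NN", "break_after": "B3", "pos_in_phrase": 2, "len": 3},
]
train_y = ["H", "L", "LH"]  # stand-in accent type labels

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_X, train_y)
print(model.predict([{"phone": "a", "pos": "NN", "break_after": "B2",
                      "pos_in_phrase": 1, "len": 2}]))
```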

Byeongchang Kim, Gary Geunbae Lee

Design and Fabrication of Security and Home Automation System

A home automation system was designed and fabricated for controlling home appliances, gas detection, and home security. The fabricated system can detect an intruder using an infrared sensor and monitor the room or office in real time via a web camera. A password is needed to enter the house or office, and the home appliances can also be controlled remotely over the network. The fabricated system is small and can easily be extended with additional functions.

Eung Soo Kim, Min Sung Kim

PGNIDS(Pattern-Graph Based Network Intrusion Detection System) Design

PGNIDS(Pattern-Graph based Network Intrusion Detection System) generates audit data that can be used to estimate intrusions from the packets collected on the network. An existing IDS(Intrusion Detection System), which estimates an intrusion by reading all the incoming packets on the network, takes more time than the proposed PGNIDS does. The proposed PGNIDS classifies the audit data into alerts and logs through the ADGM(Audit Data Generation Module) and stores them in a database, and it estimates intrusions by using a pattern graph built from the IDPM(Intrusion Detection Pattern Module) classification and the event type. Therefore, it takes less time to collect and analyze packets than the existing IDS, and it reacts to abnormal intrusions in real time. In addition, it can detect devious intrusions by generating the pattern graph.

Byung-kwan Lee, Seung-hae Yang, Dong-Hyuck Kwon, Dai-Youn Kim

Experiments and Hardware Countermeasures on Power Analysis Attacks

Security is a concern in the design of smartcards. When a cryptographic algorithm runs on a smartcard, much side-channel information related to the secret key can leak. Power analysis attacks are a very strong form of cryptanalysis that monitors and analyzes power consumption traces. In this paper, we experiment on the Exclusive-OR operation. We also analyze the state of the art in hardware countermeasures and report experiments on Hamming weights in power attacks. The results can be useful for evaluating a cryptosystem with respect to hardware security technology.

ManKi Ahn, HoonJae Lee

Information System Modeling for Analysis of Propagation Effects and Levels of Damage

The number of newly developed information systems has grown considerably, as have their areas of application, and the concomitant threats of intrusion into these systems over the Internet have increased, too. To reduce the possibilities of such threats, studies on security risk analysis in the field of information security technology have been actively conducted. However, it is very difficult to analyze actual causes of damage or to establish safeguards when intrusions on systems take place within a structure of different assets and complicated networks. Therefore, it is essential that comprehensive preventive measures against intrusions are established in advance through security risk analysis. Vulnerabilities and threats are increasing continuously, while safeguards against these risks are generally only realized some time after damage through an intrusion has occurred. Therefore, it is vital that the propagation effects and levels of damage are analyzed using real-time comprehensive methods in order to predict damage in advance and minimize its extent. For this reason we propose a modeling technique for information systems that makes use of SPICE and Petri-Net, and methods for analyzing the propagation effects and levels of damage based on the epidemic model.

InJung Kim, YoonJung Chung, YoungGyo Lee, Eul Gyu Im, Dongho Won

A Belt-Zone Method for Decreasing Control Messages in Ad Hoc Networks

A MANET(Mobile Ad Hoc Network) is composed of mobile nodes connected by mutual wireless links. In AODV(Ad hoc On-demand Distance Vector) routing, all nodes have to receive route request messages and rebroadcast them for others, which causes a great deal of network traffic overhead. In this study, we propose a belt-zone selection method for decreasing the number of RREQ messages. Not all nodes that receive the RREQ message rebroadcast it; only the nodes within a selected zone rebroadcast it to set up a routing path. The belt zone, a logical concentric area, is determined by the signal strength from the RREQ sender. We also derive an efficient belt-zone selection sequence by simulation. In high-density networks, an intermediate area among the several logical concentric areas around the sender should be selected as the primary area, whereas in low-density networks the area far from the sender is the better choice for the primary belt zone. By applying the proposed belt-zone mechanism to AODV networks, many control messages can be eliminated under various load situations.
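
A minimal sketch of the rebroadcast decision: a node forwards an RREQ only when the received signal strength places it inside the currently preferred belt zone. The RSSI bands and thresholds below are illustrative assumptions, not values from the paper:

```python
def in_belt_zone(rssi_dbm, zone):
    """True if the received signal strength falls inside concentric belt
    zone 'zone', approximated here by RSSI bands (stronger = closer).
    Zone 0: near ring, 1: intermediate ring, 2: far ring."""
    bands = {0: (-60, 0), 1: (-75, -60), 2: (-90, -75)}  # (low, high) in dBm
    low, high = bands[zone]
    return low <= rssi_dbm < high

def should_rebroadcast(rssi_dbm, high_density):
    # Per the abstract: prefer the intermediate ring in dense networks,
    # the far ring in sparse ones.
    primary = 1 if high_density else 2
    return in_belt_zone(rssi_dbm, primary)

print(should_rebroadcast(-70, high_density=True))   # True: intermediate ring
print(should_rebroadcast(-70, high_density=False))  # False: not in far ring
```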

Youngrag Kim, JaeYoun Jung, Seunghwan Lee, Chonggun Kim

A VLSM Address Management Method for Variable IP Subnetting

IPv6 is being examined as the next IP address standard, but IPv4 will have to be used for a while because of the tremendous cost and effort of converting to IPv6. One of the serious problems of the IPv4 addressing structure is the shortage of IP addresses, which is aggravated by the many addresses left unused during IP distribution and subnet design. We propose an effective subnet IP address calculation method based on VLSM. Together with the proposed subnet address management method, a web-based subnet address management system is introduced, which offers convenience in VLSM-based subnetting. The proposed VLSM calculation method enables simple and effective IP management.
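
As a sketch of VLSM subnet calculation, the snippet below greedily carves subnets for the largest host requirements first, so every block stays aligned and little space is stranded. It uses Python's standard ipaddress module and does not reproduce the paper's web-based management system:

```python
import ipaddress

def vlsm_allocate(base_cidr, host_counts):
    """Greedy VLSM subnetting sketch: serve the largest requirements first."""
    base = ipaddress.ip_network(base_cidr)
    cursor = int(base.network_address)
    plan = []
    for hosts in sorted(host_counts, reverse=True):
        # Smallest prefix whose block holds hosts + network + broadcast addresses.
        prefix = 32 - (hosts + 1).bit_length()
        net = ipaddress.ip_network((cursor, prefix))
        plan.append((hosts, net))
        cursor += 1 << (32 - prefix)
    if cursor > int(base.broadcast_address) + 1:
        raise ValueError("requirements exceed the base block")
    return plan

for hosts, net in vlsm_allocate("192.168.0.0/24", [100, 50, 20, 2]):
    print(hosts, net)
# 100 192.168.0.0/25, 50 192.168.0.128/26, 20 192.168.0.192/27, 2 192.168.0.224/30
```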

SeongKwon Cheon, DongXue Jin, ChongGun Kim

SDSEM: Software Development Success Evolution Model

Each company or organization should develop its own process model, or tailor existing models, to suit its unique environment, such as its product or technology domain, the scale of its business or organization, and its cultural environment. In this paper, we introduce a case in which organizational and technical capability was reinforced based on our own process capability improvement model, named SDSEM (S/W Competence Reinforcement Model), to strengthen S/W development in a corporation that manufactures a variety of consumer electronics products embedding controller S/W as their brains and whose large-scale development organization spans multiple development sites.

By introducing SDSEM to business units and applying it there, we evaluated it as a very practical, though still limited, model with respect to our goal.

Haeng-Kon Kim, Sang-Yong Byun

A Robust Routing Protocol by a Substitute Local Path in Ad Hoc Networks

Ad hoc wireless networks consist of mobile nodes in an area without any centralized access point or existing infrastructure. The network topology changes frequently due to node migration, signal interference, and power outages. One class of ad hoc network routing protocols is on-demand routing, which establishes a route to a destination node only when it is required by a source node. The maintenance overhead is low, but a new route must be reestablished when the routing path breaks down. Several recent papers have studied ad hoc routing protocols that avoid disconnection of the existing route: each active node on the routing path detects the danger of a link breakage to a downstream node and tries to reestablish a new route. When a node detects a link problem, it tries to find a substitute local path before the route breaks down; if this local reestablishment fails, the route breaks down. In this paper, a robust routing protocol is proposed to increase the success rate of local route reestablishment and improve communication delay. Computer simulation results show that the proposed routing protocol increases system performance.

Mary Wu, SangJoon Jung, Seunghwan Lee, Chonggun Kim

Power Efficient Wireless LAN Using 16-State Trellis-Coded Modulation for Infrared Communications

Optical wireless communication employing 16-state trellis-coded multiple-subcarrier modulation with a fixed bias is introduced, where 4-dimensional vectors having the largest Euclidean distance for 32 signal points are used. By combining coding with the 4-dimensional modulation, the proposed system improves power and bandwidth efficiency significantly. The performance evaluation shows that the normalized power and bandwidth requirements are reduced by up to 6.1 dB and 5.8 dB, respectively, compared with block codes using QPSK. The proposed system needs much less power and bandwidth than counterparts transmitting the same bit rates over an optical wireless connection.

Hae Geun Kim

The Design and Implementation of Real-Time Environment Monitoring Systems Based on Wireless Sensor Networks

This research focuses on the implementation of a real-time environment monitoring system based on wireless sensor networks. The purpose of our research is to construct the system in a real-time environment using environment monitoring and ubiquitous computing technologies. We also present a monitoring system that provides a faster way to prevent disasters through automatic machine control in urgent situations. To this end, we constructed simulation nodes with wireless sensor network devices and implemented a real-time monitoring system.

Kyung-Hoon Jung, Seok-Cheol Lee, Hyun-Suk Hwang, Chang-Soo Kim

Ontology-Based Information Search in the Real World Using Web Services

The ontology is an essential component for realizing the Semantic Web, which is described as the next generation of the Web. A semantic information search based on an ontology can provide inferred and associated information between data. To develop an ontology application in the real world, we design the architecture of a search system based on the ontology using Web services. Our system consists of ontology modules, search procedure modules, RDQL generator modules, and client modules for user interfaces. We also construct a hotel ontology integrated with related terms and implement a search example with the defined ontology.

Hyun-Suk Hwang, Kyoo-Seok Park, Chang-Soo Kim

An Active Node Set Maintenance Scheme for Distributed Sensor Networks

In this paper, we propose an energy-efficient coverage maintenance scheme for prolonging the lifetime of sensor networks. Researchers are actively exploring advanced power conservation approaches for wireless sensor networks, and probabilistic approaches among them are preferred due to their simplicity. However, these probabilistic approaches have limited usefulness because they cannot ensure full area coverage. In the proposed scheme, each node computes its probability from the density of its own coverage and adapts it to the report history. The performance of the proposed scheme is investigated via computer simulations. Simulation results show that the proposed scheme is very simple yet efficient in saving energy.
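
A minimal sketch of the density-driven idea: the more neighbors already cover a node's sensing area, the lower its probability of staying awake. The probability formula below is an illustrative assumption, not the paper's derivation:

```python
import random

def activation_probability(num_covering_neighbors, target_redundancy=1.0):
    """Illustrative rule: activation probability falls as local coverage
    density rises, aiming at roughly 'target_redundancy' awake coverage."""
    return min(1.0, target_redundancy / (1 + num_covering_neighbors))

def decide_active(num_covering_neighbors):
    # Each node draws independently each round; no coordination needed.
    return random.random() < activation_probability(num_covering_neighbors)

# A node whose area is covered by 4 neighbors stays awake ~20% of rounds.
print(activation_probability(4))
```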

Tae-Young Byun, Minsu Kim, Sungho Hwang, Sung-Eok Jeon

Intelligent Information Search Mechanism Using Filtering and NFC Based on Multi-agents in the Distributed Environment

Intelligent search agents are popularly used for searching relevant information on the Internet, and there are many tools intended to satisfy the needs of users. Since there is no sufficient cooperation among the agents and they are independent of each other, efficient information search in a distributed environment is difficult. A typical search agent is therefore hard to use and can return irrelevant information. To solve these problems, we use the CORBA architecture to create an agency in the broker agent and provide more reliable information to users. The proposed intelligent information search system also uses NFC and filtering techniques through multi-agents for fast and reliable search of information.

Subong Yi, Bobby D. Gerardo, Young-Seok Lee, Jaewan Lee

Network Anomaly Behavior Detection Using an Adaptive Multiplex Detector

Due to the diversified threats to resources and information in computer network systems, research on biological immune systems has become one approach to network security. Inspired by the adaptive immune principles of artificial immune systems, we propose an anomaly detection algorithm using a multiplex detector. In this algorithm, the multiplex detector is created by applying negative selection, positive selection, and clonal selection to detect anomalous behaviors in the network. The multiplex detector also provides an effective and dynamic detection method. The detectors are classified into K-detectors, memory detectors, B-detectors, and T-detectors to achieve multi-level detection. We apply this algorithm to intrusion detection and confirm that it performs well.
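
Of the three selection mechanisms named, negative selection is the easiest to show compactly: random candidate detectors are kept only if they match no "self" (normal) pattern. The bit-string representation and Hamming-distance matching rule below are conventional illustrative choices, not the paper's multiplex detector:

```python
import random

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def negative_selection(self_set, n_detectors, length=16, radius=3):
    """Keep only random detectors that do NOT match any self pattern
    within the given Hamming radius; matching ones would flag normal
    behavior as anomalous."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = tuple(random.randint(0, 1) for _ in range(length))
        if all(hamming(cand, s) > radius for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(sample, detectors, radius=3):
    # A sample is anomalous if any detector matches it.
    return any(hamming(sample, d) <= radius for d in detectors)

normal = [tuple(random.randint(0, 1) for _ in range(16)) for _ in range(20)]
dets = negative_selection(normal, n_detectors=50)
print(is_anomalous(normal[0], dets))  # False: detectors never match self
```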

Misun Kim, Minsoo Kim, JaeHyun Seo

Applying Product Line to the Embedded Systems

For software-intensive systems, a reuse-driven product line approach can potentially reduce time-to-market and improve product quality while reducing uncertainty in cost and schedule estimates. Product lines raise reuse to the level of design frameworks, not simply code or component reuse. They capture commonality and adaptability, through domain and variability analyses, so that new products can be created easily by instantiating prefabricated components, adapting their design parameters, and leveraging established testing suites. In this paper, we examine software technology and infrastructure (process) supporting product lines, applied directly to embedded systems. We also present evaluation criteria for the development of a product line and give an overview of the current state of practice in the embedded software area. A product line architecture that balances sub-domains and their most important properties is an investment that must be looked after. However, the sub-domains need the flexibility to use, change, and manage their own technologies and to evolve separately, but in a controlled way.

Haeng-Kon Kim

Enhanced Fuzzy Single Layer Learning Algorithm Using Automatic Tuning of Threshold

In this paper, we propose an enhanced fuzzy single-layer learning algorithm using dynamic adjustment of the threshold. For performance evaluation, the proposed method was applied to the XOR problem, which is used as a benchmark in the field of pattern recognition, and to the recognition of digital images in a practical image processing application. The experiments show that, although the method does not always guarantee convergence, it improves learning time and achieves a high convergence rate.

Kwang-Baek Kim, Byung-Kwan Lee, Soon-Ho Kim

Optimization of Location Management in the Distributed Location-Based Services Using Collaborative Agents

Determining mobility patterns and predicting future mobile locations are key issues in optimizing the location management of location-based services. These patterns are also useful for finding trends, allocating resources, and associating other information with the mobile user. In this paper, location management for location-based services using collaborative agents is presented. We propose an agent which learns from the user's mobility patterns and predicts the future location of the mobile object to optimize the search method of location management. We use association rule mining, and based on these rules the agent predicts the future mobile location. These agents also coordinate with each other by sharing knowledge of the possible locations of the mobile object across the distributed location-based services. The resulting technique optimizes the search method of location management.

Romeo Mark A. Mateo, Jaewan Lee, Hyunho Yang

Design of H.264/AVC-Based Software Decoder for Mobile Phone

Investigations of video service technology for mobile phones have recently been performed actively. Thanks to the speedy development of mobile phones and wireless network platforms, efforts to bring all the technologies commercialized in the wired-Internet framework into the mobile environment have also been researched. So far, video service technology on mobile phones has been implemented and operated in hardware.

However, the current hardware-based service has the disadvantage of not coping flexibly with changes in the control structure of new video codec algorithms and video data communication. Accordingly, it is necessary to move the decoder for video services from hardware to software, which requires developing a new video player. In addition, such a decoder can respond immediately to further technology developments, including video data transmission and traffic control, without consuming additional resources, in comparison with a hardware decoder chip. It can also greatly reduce the manufacturing cost of incorporating an additional chip into mobile phones, and it saves resources through reuse. This paper designs an H.264/AVC software-based video decoder using component-based design on the WIPI platform.

Hyung-Su Jeon, Hye-Min Noh, Cheol-Jung Yoo, Ok-Bae Chang

Transforming a Legacy System into Components

Most legacy systems are under pressure to respond continuously to changing requirements, but it is almost impossible to cope with these requests effectively, because many legacy systems suffer from a lack of standardization and openness, difficulty of change, and the absence of a distributed architecture. In particular, as a legacy system deteriorates from an architectural point of view over the years, we must continually maintain it at high cost in order to apply new technologies and extend business requirements. To transform a legacy system into a component system, we need systematic methodologies and concrete guidelines. Through these, we can share information at different levels of abstraction, ranging from code to software architecture, and construct a component system with a better component-based architecture.

To achieve these goals, we have built the L2CBD (Legacy to Component Based Development) methodology, which provides a reengineering process including concrete procedures, work products, guidelines, and considerations. By adopting L2CBD, we can transform legacy systems into new component systems with improved software architecture.

Haeng-Kon Kim, Youn-Ky Chung

Pseudorandom Number Generator Using Optimal Normal Basis

This paper proposes a simple pseudorandom number generator (PRNG) using an optimal normal basis. It is well known that squaring and multiplication in a finite field with an optimal normal basis are very fast, and that the basis can be transformed to a canonical form. The suggested PRNG algorithm combines typical multiplications with exclusive-or bit operations, both of which can be implemented easily. The algorithm passes all tests of the Diehard and ENT suites for long sequences. It can be applied in various applications such as financial cryptography.
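
The recipe here is finite-field multiplication combined with XOR. The toy sketch below works in GF(2^163) with the standard pentanomial reduction in a polynomial basis, used purely as a stand-in for the optimal normal basis of the paper; the iteration and output folding are illustrative assumptions, and a vetted CSPRNG should be used in practice:

```python
# GF(2^163), reduction polynomial x^163 + x^7 + x^6 + x^3 + 1 (the standard
# irreducible pentanomial for this field size).
M = 163
R = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf_mul(a, b):
    """Carry-less multiply with interleaved reduction modulo R."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> M:          # reduce as soon as bit 163 appears
            a ^= R
        b >>= 1
    return r

def prng(seed, g, n_words):
    """Toy PRNG sketch: iterate state <- state*g in GF(2^163), then
    XOR-fold the state into 32-bit output words. Not the paper's exact
    algorithm; illustrative only."""
    state = seed
    out = []
    for _ in range(n_words):
        state = gf_mul(state, g)
        out.append((state ^ (state >> 82)) & 0xFFFFFFFF)
    return out

print(prng(seed=0x1D1ED5, g=0x5EED ^ (1 << 100), n_words=4))
```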

Injoo Jang, Hyeong Seon Yoo

Efficient Nonce-Based Authentication Scheme Using Token-Update

In this paper, an efficient nonce-based token-update scheme is proposed. The scheme enhances Lee's scheme, which cannot defend against replay and impersonation attacks. An analysis and comparison with Lee's and other schemes demonstrate that our scheme avoids replay and impersonation attacks, provides mutual authentication, and incurs a lower computation cost than the original scheme.
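
For orientation, a generic nonce-based mutual-authentication round with a per-session token update looks like the sketch below. This is a textbook pattern using Python's hmac module, not the specific scheme proposed or analyzed in the paper:

```python
import hmac, hashlib, os

def mac(key, *parts):
    h = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        h.update(p)
    return h.digest()

# Shared state: a per-session token both sides update after each run,
# so a captured transcript cannot be replayed.
key = os.urandom(32)
token = os.urandom(16)

# 1. Server -> client: fresh nonce (the replay protection).
nonce = os.urandom(16)
# 2. Client -> server: proof binding the key, current token, and nonce.
proof = mac(key, token, nonce)
# 3. Server verifies, then both sides derive the next token (token update).
assert hmac.compare_digest(proof, mac(key, token, nonce))
token = mac(key, b"update", token, nonce)[:16]
print("authenticated; next token:", token.hex())
```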

Wenbo Shi, Hyeong Seon Yoo

An Efficient Management of Network Traffic Performance Using Framework-Based Performance Management Tool

As network-related technology develops, the number of Internet users and their usage are increasing explosively. Following this trend, network traffic on campuses is increasing as university networks add more nodes and various networking services, yet the quality of service for users has degraded. Accordingly, core problems have emerged that complicate network management, network design and expansion, and cost policy. Coping with these problems effectively requires many technicians, tools, and a large budget, but it is not feasible for small and mid-sized colleges to spend so much on professional consulting. Reducing the cost and investment of creating an optimized environment calls for analysis of tool replacement, changes to the network structure, and performance analysis for network capacity planning. For this reason, in this paper, framework-based performance management tools are used for all steps related to the analysis of network management. As the major research method, current data in detailed categories are collected, processed, and analyzed to provide meaningful solutions to the problems. As a result, we can manage the network, servers, and applications more systematically and react efficiently to errors and performance degradation that affect networking tasks. Moreover, with scientific and organized analyses, overall efficiency is improved by optimizing the cost of operating the entire system.

Seong-Man Choi, Cheol-Jung Yoo, Ok-Bae Chang

A Prediction Method of Network Traffic Using Time Series Models

This paper describes a method to derive an appropriate prediction model for network traffic and verify its trustworthiness. The proposal covers not only an analysis of network packets but also a method for predicting the number of packets. We use time series prediction models and evaluate whether each model can predict network traffic accurately. In order to predict network packets at a certain time, the AR, MA, ARMA, and ARIMA models are applied. Our purpose is to find the model that best expresses the nature of future traffic. We evaluate whether the models satisfy the stationarity assumption for network traffic, which is checked using the ACF(Auto Correlation Function) and PACF(Partial Auto Correlation Function) at a suitable significance level. The results show that, when network traffic is classified on a daily basis, the AR model is a good method for predicting the number of network packets accurately. The proposed prediction method can be used in a routing protocol as a decision factor for managing traffic data dynamically in a network.
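
A minimal sketch of this workflow using statsmodels: check stationarity with an ADF test, inspect the ACF/PACF to pick an order, fit an AR model, and forecast. The synthetic packet series and chosen order are illustrative assumptions:

```python
import numpy as np
from statsmodels.tsa.stattools import acf, pacf, adfuller
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic AR(1)-like "packets per interval" series as stand-in data.
packets = np.empty(500)
packets[0] = 100.0
for t in range(1, 500):
    packets[t] = 20 + 0.8 * packets[t - 1] + rng.normal(0, 5)

print("ADF p-value:", adfuller(packets)[1])   # small p-value -> stationary
print("ACF :", acf(packets, nlags=3))         # decays geometrically
print("PACF:", pacf(packets, nlags=3))        # cuts off after lag 1 -> AR(1)

fit = ARIMA(packets, order=(1, 0, 0)).fit()   # AR model, as the abstract favors
print(fit.forecast(steps=5))                  # predicted packet counts
```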

Sangjoon Jung, Chonggun Kim, Younky Chung

An Obstacle Avoidance Method for Chaotic Robots Using Angular Degree Limitations

This paper presents a method for avoiding obstacles that have unstable limit cycles in a chaos trajectory surface, using angular degree limits. It is assumed that all obstacles in the chaos trajectory surface obey a Van der Pol equation with an unstable limit cycle. When a chaos robot following a Lorenz, Hamilton, or hyper-chaos equation trajectory meets an obstacle and exceeds the defined angular degree limits, the obstacle repulses the robot. Computer simulations of the Lorenz equation and the Hamilton and hyper-chaos equation trajectories, with one or more Van der Pol equations as the obstacle(s), are performed, and the proposed method is verified through simulation of the chaotic trajectories in an arbitrary plane: the robot avoids the obstacle when it is found, and the target is either met or closely approached.

Youngchul Bae, MalRey Lee, Thomas M. Gatton

Intersection Simulation System Based on Traffic Flow Control Framework

This paper proposes an Intersection Simulation System which can dynamically manage real-time traffic flow for each section of the intersections by referring to a traffic information database. The system consists of three hierarchical layers. The lower layer is the physical layer, where traffic information is acquired on an actual road. The middle layer, the core of this system, contains the Traffic Flow Control Framework, designed by extending the distributed object group framework that we developed previously. This layer supports the grouping of intersections, the collection of real-time traffic flow information, and remote monitoring and control using the traffic information from the lower layer. The upper layer holds the intersection simulator applications that control traffic flow by grouping intersections. The key idea of our study is the grouping concept, which manages the physical world in a defined logical space. The intersection simulation system considers each intersection on the road as an application group and can dynamically apply traffic flow control models according to the current traffic status. For constructing this system, we defined the system architecture and the interaction of components in the traffic flow control framework, and designed the application simulator and the user interface for monitoring and controlling traffic flow.

Chang-Sun Shin, Dong-In Ahn, Hyun Yoe, Su-Chong Joo

A HIICA(Highly-Improved Intra CA) Design for M-Commerce

The HIICA(Highly-improved Intra CA) proposed in this paper is an internal CA that is greatly improved by using OCSP(Online Certificate Status Protocol) and LDAP(Lightweight Directory Access Protocol). It provides more security and confidentiality than an external CA. As HIICA is provided with a certification policy from a PCA(Policy Certificate Authority) and sets up a special environment, including a subscriber registration process, it reduces additional expense and strengthens security in the certificate request process through the HIICA agent. A secure electronic payment system is therefore optimized through the HIICA design, which removes the complex processes and insecure elements of existing CAs.

Byung-kwan Lee, Chang-min Kim, Dae-won Shin, Seung-hae Yang

Highly Reliable Synchronous Stream Cipher System for Link Encryption

A highly reliable synchronous stream cipher system with absolute synchronization is proposed. The proposed system includes an improved initial synchronization with highly reliable keystream synchronization in a noisy channel (about BER=0.1). Furthermore, encryption algorithms with a LILI-II, a Dragon and a Parallel LM for data confidentiality, and a zero-suppression algorithm for system stabilization are all designed and analyzed.

HoonJae Lee

Recognition of Concrete Surface Cracks Using ART2-Based Radial Basis Function Neural Network

In this paper, we propose image processing techniques for extracting the cracks from a concrete surface crack image, and an ART2-based radial basis function neural network for recognizing the directions of the extracted cracks. The image processing techniques used are the morphological closing operation, Sobel masking to extract the edges of the cracks, and iterated binarization to acquire a binarized image from the crack image. The cracks are extracted from the concrete surface image after applying noise reduction twice to the binarized image. We propose a method for automatically recognizing the directions of the cracks (horizontal, vertical, -45 degrees, and 45 degrees) with the ART2-based RBF(Radial Basis Function) neural network. The proposed network applies ART2 to the learning between the input layer and the middle layer, and the Delta learning method to the learning between the middle layer and the output layer. Experiments using real concrete crack images showed that the cracks were effectively extracted and that the proposed ART2-based RBF neural network recognized the directions of the extracted cracks effectively.
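
The preprocessing steps named here (edge extraction and iterated binarization) can be sketched with NumPy/SciPy as below. The Ridler-Calvard-style threshold iteration is our assumption for what "iterated binarization" means, and the ART2-based RBF classifier itself is not reproduced:

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(img):
    # Gradient magnitude from horizontal and vertical Sobel responses.
    gx = ndimage.sobel(img, axis=1, output=float)
    gy = ndimage.sobel(img, axis=0, output=float)
    return np.hypot(gx, gy)

def iterated_threshold(img, eps=0.5):
    """Iteratively split pixels about t and move t to the midpoint of the
    two class means until it stabilizes (Ridler-Calvard iteration)."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return img > t_new
        t = t_new

img = np.random.default_rng(1).normal(128, 30, (64, 64))  # stand-in image
mask = iterated_threshold(sobel_magnitude(img))
print(mask.mean())  # fraction of pixels flagged as crack edges
```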

Kwang-Baek Kim, Hwang-Kyu Yang, Sang-Ho Ahn

Hybrid Image Mosaic Construction Using the Hierarchical Method

This paper proposes a tree-based hierarchical image mosaicing system using camera and object parameters for efficient video database construction. Gray-level histogram difference and average intensity difference are proposed for scene change detection in the input video. Camera parameters are measured using the least sum of squared differences and an affine model, and a difference image is used as the similarity measure of two input images. Dynamic objects are searched for through macro block setting and extracted using region splitting and 4-split detection. A dynamic trajectory evaluation function is used to represent dynamic objects, and blurring is performed for the construction of a soft and slow mosaic image.

Oh-Hyung Kang, Ji-Hyun Lee, Yang-Won Rhee

Workshop on Applied Cryptography and Information Security (ACIS 2006)

Public Key Encryption with Keyword Search Based on K-Resilient IBE

An encrypted email is sent from Bob to Alice. A gateway wants to check whether a certain keyword exists in the email for some reason (e.g., routing), but Alice does not want the email to be decrypted by anyone except herself, including the gateway. This is a scenario where public key encryption with keyword search (PEKS) is needed. In this paper we construct a new scheme, KR-PEKS: K-Resilient Public Key Encryption with Keyword Search. The new scheme is secure under a chosen keyword attack without the random oracle. The ability to construct a Public Key Encryption with Keyword Search from an Identity Based Encryption is used in the construction of KR-PEKS. The security of the new scheme is proved by showing that the underlying IBE has a notion of key privacy. The scheme is then modified in two different ways: the first modification enables multiple-keyword search, and the other removes the need for secure channels.

Dalia Khader

A Generic Construction of Secure Signatures Without Random Oracles

We show how to construct an existentially unforgeable secure signature scheme from any scheme that satisfies only a weak notion of security in the standard model. The construction combines a weakly secure signature with a one-time signature. Key generation for the resulting fully secure signature is the same as for the weak signature, so the length of the public key in our fully secure signature is independent of that of the one-time signature. Our conversion from a weakly secure signature scheme to an existentially unforgeable one is simple, efficient, and provably secure in the standard model (that is, the security of the resulting scheme does not rely on the random oracle model). Our results yield a new construction of existentially unforgeable signatures in the standard model. Furthermore, we show two efficient instantiations without random oracles, converted from two previous weakly secure signature schemes.

Jin Li, Yuen-Yan Chan, Yanming Wang

A Separation Between Selective and Full-Identity Security Notions for Identity-Based Encryption

Identity-based encryption has attracted a lot of attention since the publication of the scheme by Boneh and Franklin. In this work we compare the two adversarial models previously considered in the literature, namely the full and selective-identity models. Remarkably, we show that the strongest security level with respect to selective-identity attacks (i.e. chosen-ciphertext security) fails to imply the weakest full-identity security level (i.e. one-wayness). In addition, an analogous result for the related primitive of tag-based encryption is presented.

David Galindo

Traceable Signature: Better Efficiency and Beyond

In recent years one of the most active research areas in applied cryptography is the study of techniques for creating a group signature, a cryptographic primitive that can be used to implement anonymous authentication. Some variants of group signature, such as traceable signature, and authentication with variable anonymity in a trusted computing platform, have also been proposed. In this paper we propose a traceable signature scheme with variable anonymity. Our scheme supports two important properties for a practical anonymous authentication system, i.e., corrupted group member detection and fair tracing, which have unfortunately been neglected in most group signature schemes in the literature. We prove the new scheme is secure in the random oracle model, under the strong RSA assumption and the decisional Diffie-Hellman assumption.

He Ge, Stephen R. Tate

On the TYS Signature Scheme

This paper analyzes the security of a very efficient signature scheme proposed by C.H. Tan, X. Yi and C.K. Siew: the TYS signature scheme. We show that this scheme is universally forgeable; more specifically, we show that anyone is able to produce a valid TYS signature on a chosen message from an arbitrary valid message/signature pair. We also suggest modifications to the TYS signature scheme and relate the resulting scheme to the Camenisch-Lysyanskaya signature scheme.

Marc Joye, Hung-Mei Lin

Efficient Partially Blind Signatures with Provable Security

Blind signatures play a central role in applications such as e-cash and e-voting systems. The notion of partially blind signature is a more applicable variant in which part of the message contains common information pre-agreed by the signer and the signature requester in unblinded form. In this paper, we propose two efficient partially blind signatures with provable security in the random oracle model. The former is based on witness-indistinguishable (WI) signatures. Compared with the state-of-the-art construction due to Abe and Fujisaki [1], our scheme is 25% more efficient while enjoying the same level of security. The latter is a partially blind Schnorr signature that does not rely on witness indistinguishability. It enjoys the same level of security and efficiency as the underlying blind signature.

Qianhong Wu, Willy Susilo, Yi Mu, Fanguo Zhang

A Framework for Robust Group Key Agreement

Considering a protocol of Tseng, we show that a group key agreement protocol that resists attacks by malicious insiders in the authenticated broadcast model loses this security when it is transferred into an unauthenticated point-to-point network with the protocol compiler introduced by Katz and Yung. We develop a protocol framework that allows passively secure protocols to be transformed into protocols secure against malicious insiders and active adversaries in an unauthenticated point-to-point network and that, in contrast to existing protocol compilers, does not increase the number of rounds. Our protocol notably uses the session identifier to achieve this security. By applying the framework to the Burmester-Desmedt protocol we obtain a new 2-round protocol that is provably secure against active adversaries and malicious participants.

Jens-Matthias Bohli

BGN Authentication and Its Extension to Convey Message Commitments

We instantiate the cryptosystem proposed by Boneh, Goh, and Nissim in TCC'05 [5] into an entity authentication scheme, in which an entity is authenticated by an interactive zero-knowledge proof on its private key. Completeness and soundness of our scheme are supported by the indistinguishability of BGN ciphertexts of sums and products, which relies essentially on the semantic security of the BGN cryptosystem. We further extend our scheme so that the authentication conveys Pedersen commitments on a message, while the BGN authentication serves the 'prove you know how to open' functionality for the commitment. Our message commitment scheme is both statistically hiding and computationally binding provided the subgroup decision problem is hard.

Yuen-Yan Chan, Jin Li

New Security Problem in RFID Systems “Tag Killing”

Radio frequency identification systems based on low-cost computing devices are the new plaything that every company would like to adopt. The biggest challenge for RFID technology is to provide benefits without threatening the privacy of consumers. Using cryptographic primitives to thwart RFID security problems is an approach that has been explored for several years. In this paper, we introduce a new security problem, called "Tag Killing", which aims to wipe out the functioning of the system, as in denial-of-service attacks. We analyze several well-known RFID protocols that are considered good solutions under the "Tag Killing" adversary model, and we show that most of them have weaknesses and are vulnerable to it.

Dong-Guk Han, Tsuyoshi Takagi, Ho Won Kim, Kyo Il Chung

A Model for Security Vulnerability Pattern

Static analysis technology is used to find programming errors before run time. Unlike dynamic analysis, which examines the application state while it is being executed, static analysis does not require the application to be executed. In this paper, we classify security vulnerability patterns in source code and design a model to express various vulnerability patterns by making use of pushdown automata. On the basis of the model, it is possible to find a security vulnerability using Abstract Syntax Tree (AST) based pattern matching at the parsing level.
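
As a toy illustration of AST-based pattern matching at the parsing level, the snippet below walks a parse tree and flags calls matching a small vulnerability pattern set. It uses Python's ast module purely as a convenient stand-in; the paper's pushdown-automaton model is not reproduced:

```python
import ast

RISKY_CALLS = {"eval", "exec", "os.system"}  # illustrative pattern set

def qualified_name(node):
    # Recover "a.b" style names from Name/Attribute nodes.
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = qualified_name(node.value)
        return f"{base}.{node.attr}" if base else None
    return None

def find_risky_calls(source):
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            name = qualified_name(node.func)
            if name in RISKY_CALLS:
                hits.append((node.lineno, name))
    return hits

print(find_risky_calls("import os\nos.system('rm -rf ' + path)\n"))
# -> [(2, 'os.system')]
```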

Hyungwoo Kang, Kibom Kim, Soonjwa Hong, Dong Hoon Lee

A New Timestamping Scheme Based on Skip Lists

Timestamping is a cryptographic technique providing a proof of existence of a message/document at a given time. Several timestamping schemes have already been proposed [1-10]. In this paper, we first define a new timestamping scheme based on skip lists [11]. Then, we show that our scheme offers nice properties and optimal performance.

Kaouthar Blibech, Alban Gabillon

A Semi-fragile Watermarking Scheme Based on SVD and VQ Techniques

Semi-fragile watermarking is effective for image authentication, an important issue in safeguarding the integrity of digital images: it can detect tampered regions while tolerating general image processing operations. In this paper, a novel semi-fragile watermarking technique based on SVD and VQ is proposed. The singular values are robust against image processing operations, and the coefficients of the matrices U and V represent image features. VQ is used to reduce the number of image features, and the VQ index is recorded as the authentication criterion. The host image is transformed by SVD and image features are extracted from the SVD coefficients. VQ is then applied to these features, and the resulting indices are embedded into the significant singular values. Experimental results show that the proposed scheme can locate tampered regions while allowing for JPEG lossy compression.
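
A minimal NumPy sketch of the two ingredients named here, an SVD of a host block and a VQ index lookup, with the index bits embedded into significant singular values. The even/odd quantization embedding rule and the random codebook are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
block = rng.integers(0, 256, (8, 8)).astype(float)   # stand-in image block
U, S, Vt = np.linalg.svd(block)

# VQ: nearest codeword to a feature drawn from the U and V coefficients.
feature = np.concatenate([U[:, 0], Vt[0, :]])
codebook = rng.normal(0, 0.35, (16, feature.size))    # illustrative codebook
index = int(np.argmin(((codebook - feature) ** 2).sum(axis=1)))

# Embed the 4 index bits into the 4 largest singular values: quantize each
# value onto an even/odd lattice step according to its bit.
q = 4.0  # quantization step; larger = more robust, more distortion
for i in range(4):
    bit = (index >> i) & 1
    S[i] = (2 * np.floor(S[i] / (2 * q)) + bit) * q + q / 2

marked = U @ np.diag(S) @ Vt
# Re-read from the modified values; a real detector would re-run the SVD
# on the received block before extracting.
extracted = sum((int(np.floor(s / q)) & 1) << i for i, s in enumerate(S[:4]))
print(index, extracted)  # the embedded and re-read index agree
```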

Hsien-Chu Wu, Chuan-Po Yeh, Chwei-Shyong Tsai

New Constructions of Universal Hash Functions Based on Function Sums

In this paper, we propose a generalization of the SQUARE hash function family to the function sum hash, which is based on functions with low maximal differential over arbitrary Abelian groups. These new variants allow the designer to construct SQUARE-like hash functions on different platforms for efficient and secure message authentication. A variant using functions with low algebraic degree over a finite field is also proposed which enables the user to use a shorter key. For more versatility, we also propose a trade-off between the hash key length and security bound. Finally, we show that we can use an SPN structure in the function sum hash to construct a provably secure MAC with performance which is several times faster than the traditional CBC-MAC. Moreover, there are implementation advantages like parallelizability to increase the speed further and re-use of cipher components which help save on implementation resources.

Khoongming Khoo, Swee-Huay Heng

Analysis of Fast Blockcipher-Based Hash Functions

An important property of a hash function is its performance. We study fast iterated hash functions based on block ciphers. These hash functions and their compression functions are analyzed in the standard black-box model. We show an upper bound on the rate of any collision-resistant hash function. In addition, we improve the known bound on the rate of collision-resistant compression functions.

Martin Stanek

Application of LFSRs for Parallel Sequence Generation in Cryptologic Algorithms

We consider the problem of efficiently generating sequences in hardware for use in certain cryptographic algorithms. The conventional method of doing this is to use a counter. We show that sequences generated by linear feedback shift registers (LFSRs) can be tailored to suit the appropriate algorithms. For hardware implementation, this reduces both time and chip area. As a result, we are able to suggest improvements to the design of the DES Cracker built by the Electronic Frontier Foundation in 1998; provide an efficient strategy for generating start points in time-memory trade-off attacks; and present an improved parallel hardware implementation of a variant of the counter mode of operation of a block cipher.
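
A minimal sketch of the underlying idea: a maximal-length LFSR steps through every nonzero state of its register, so it can stand in for a counter wherever the enumeration order does not matter, at far lower hardware cost:

```python
def lfsr_sequence(taps, state, n):
    """Fibonacci LFSR over GF(2). 'taps' are 0-indexed feedback bit
    positions; 'state' is a nonzero seed of width max(taps)+1. With a
    primitive feedback polynomial, the state cycles through all
    2^m - 1 nonzero values, which is what lets an LFSR replace a
    counter for cheaply enumerating distinct start points."""
    width = max(taps) + 1
    for _ in range(n):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

# Taps [3, 2] give a maximal-length 4-bit LFSR: all 15 nonzero states.
print(list(lfsr_sequence(taps=[3, 2], state=1, n=15)))
```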

Sourav Mukhopadhyay, Palash Sarkar

Provable Security for an RC6-like Structure and a MISTY-FO-like Structure Against Differential Cryptanalysis

In this paper we introduce two new block cipher structures, named the RC6-like structure and the MISTY-FO-like structure, and show that these structures are provably resistant against differential attack. The main results of this paper are that the 5-round differential probabilities of these structures are upper-bounded by p^4 + 2p^5 and p^4, respectively, if the maximum differential probability of a round function is p. We also discuss provable security of the RC6-like structure against LC. Our results are attained under the assumption that all components in our proposed structures are bijective.
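
To make the bound concrete, here is a quick numeric reading under an assumed round-function quality (p = 2^-6 is an illustrative figure, not from the paper):

```python
# If every round function has maximum differential probability p = 2**-6,
# the 5-round differential probability is capped as computed below.
p = 2.0 ** -6
rc6_like_bound = p**4 + 2 * p**5     # RC6-like structure: about 6.15e-08
misty_fo_like_bound = p**4           # MISTY-FO-like structure: about 5.96e-08
print(f"{rc6_like_bound:.3e}", f"{misty_fo_like_bound:.3e}")
```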

Changhoon Lee, Jongsung Kim, Jaechul Sung, Seokhie Hong, Sangjin Lee

Design and Implementation of an FPGA-Based 1.452-Gbps Non-pipelined AES Architecture

This work reports a non-pipelined AES (Advanced Encryption Standard) FPGA (Field Programmable Gate Array) architecture with low resource requirements. The architecture is designed to work in CBC (Cipher Block Chaining) mode and achieves a throughput of 1.45 Gbps. This implementation is a module of a configuration library for a Cryptographic Reconfigurable Platform (CRP).

Ignacio Algredo-Badillo, Claudia Feregrino-Uribe, René Cumplido

Workshop on Internet Communications Security (WICS 2006)

Security Weaknesses in Two Proxy Signature Schemes

Allowing a proxy signer to generate a signature on behalf of an original signer, a proxy signature should satisfy the property of strong unforgeability: no one except the designated proxy signer can create a valid proxy signature on behalf of the original signer. Since proxy signatures, as well as their derivatives, can be used in many real-world applications, such as secure mobile agents and e-commerce systems, they have received extensive research attention recently. In this paper, we show that the proxy signature scheme [14] from ISPA'04 suffers from the original signer's forgery attack once the original signer obtains a valid proxy signature on a message, and that a similar attack arises in the proxy signature scheme [1] from AWCC'04 if the verifier does not check the originality of the proxy signer's proxy public key before verifying a proxy signature. Therefore, to some degree, neither of these two schemes meets the property of strong unforgeability.

Jiqiang Lu

A Proposal of Extension of FMS-Based Mechanism to Find Attack Paths

With the increase of Internet service providers (companies) serving the rapidly growing number of Internet users in recent years, malicious attackers have been growing too. Such attacks can significantly impair a corporate image through damages such as degraded and unstable service quality, which can lead to fatal flaws. Among malicious attacks, DoS (Denial-of-Service) is the most damaging and most frequently reported form of Internet attack. Because DoS attacks employ IP spoofing to disguise the IP address and hide the identity of the attacker's location, the correct address of the attacker cannot be traced using only the source IP addresses of packets received from damaged systems. Effective measures against DoS attacks have not yet been developed, and even where a practical defense is mounted, it is possible to undergo repeated attacks by the same attackers. From this point of view, in order to provide an effective countermeasure, this study proposes a mechanism to find the attack source by tracing the attack path using marking algorithms and then finding the MAC address of the attack source. In addition, this study proposes a technique to improve the packet arrival rate of the marking algorithm and presents a more effective, better-performing measure for finding attackers by enabling prompter tracing of the attack location.

Byung-Ryong Kim, Ki-Chang Kim

Comparative Analysis of IPv6 VPN Transition in NEMO Environments

IPv6 deployments use different NGtrans mechanisms, depending on the network situation. Therefore, a mobile network should be able to initiate secure connectivity with the corresponding home network. The correlation between the transition mechanisms used in the existing network and those supported by the mobile network is an important factor in secure connectivity with a corresponding home network. In this paper, VPN scenarios applicable to mobile networks during the transition to IPv6 are analyzed. In addition, performance costs are evaluated in order to establish connectivity according to the VPN models at each transition phase. Furthermore, this paper analyzes factors affecting performance through numeric analysis of the NEMO VPN model under an IPv6 transition environment. As shown in our study, a NEMO VPN creating hierarchical or sequential tunnels should be proposed only after careful evaluation of not only security vulnerabilities but also performance requirements.

Hyung-Jin Lim, Dong-Young Lee, Tai-Myoung Chung

A Short-Lived Key Selection Approach to Authenticate Data Origin of Multimedia Stream

This paper presents an efficient approach to providing a data origin authentication service for multimedia stream applications. The proposed approach aims at both fast signing/verifying transactions and low transmission overhead. In particular, performance is a key issue for increasingly widespread wireless communication, which suffers many limitations, including scarce network resources, low computing power, and limited node energy. To meet such requirements, we take advantage of short-lived keys that allow an authentication system to overcome the performance degradation caused by highly expensive cryptographic primitives such as digital signatures. The major concern of this paper, therefore, is to derive an appropriate length for the key and hash value without compromising the security of the authentication system.

Namhi Kang, Younghan Kim

Weakest Link Attack on Single Sign-On and Its Case in SAML V2.0 Web SSO

In many of the single sign-on (SSO) specifications that support multi-tiered authentication, it is not mandatory to include the authentication context in a signed response. This can be exploited by adversaries to launch a new kind of attack specific to SSO systems. In this paper, we propose the Weakest Link Attack, a kind of parallel session attack feasible in the above settings. Our attack enables adversaries to succeed at all levels of authentication associated with the victim user by breaking only the weakest one. We present a detailed case study of our attack on web SSO as specified in Security Assertion Markup Language (SAML) V2.0, an OASIS standard released in March 2005. We also suggest the corresponding repair at the end of the paper.

Yuen-Yan Chan

An Inter-domain Key Agreement Protocol Using Weak Passwords

There have been many protocols proposed over the years for password authenticated key exchange in the three-party scenario, in which two clients attempt to establish a secret key interacting with one same authentication server. However, little has been done for password authenticated key exchange in the more general and realistic four-party setting, where two clients trying to establish a secret key are registered with different authentication servers. In fact, the recent protocol by Yeh and Sun seems to be the only password authenticated key exchange protocol in the four-party setting. But, the Yeh-Sun protocol adopts the so called “hybrid model”, in which each client needs not only to remember a password shared with the server but also to store and manage the server’s public key. In some sense, this hybrid approach obviates the reason for considering password authenticated protocols in the first place; it is difficult for humans to securely manage long cryptographic keys. In this paper, we propose a new protocol designed carefully for four-party password authenticated key exchange that requires each client only to remember a password shared with its authentication server. To the best of our knowledge, our new protocol is the first password-only authenticated key exchange protocol in the four-party setting.

Youngsook Lee, Junghyun Nam, Dongho Won

A Practical Solution for Distribution Rights Protection in Multicast Environments

One of the main problems that remain to be solved in pay-per-view Internet services is copyright protection. As in almost every scenario, any copyright protection scheme has to deal with two main aspects: protecting the true content authors from those who may dishonestly claim ownership of intellectual property rights, and preventing piracy by detecting the authorized (but dishonest) users responsible for illegal redistribution of copies. The former aspect can be solved with watermarking techniques, while for the latter, fingerprinting mechanisms are the most appropriate. In Internet services such as Web-TV or near video-on-demand, where multicast is used, watermarking can be directly applied. On the other hand, multicast fingerprinting has seldom been studied, because delivering differently marked content to different receivers seems to contradict multicast basics. In this paper we present a solution to prevent unauthorized redistribution of content in multicast scenarios. The system is based on a trusted soft-engine embedded in the receiver and co-managed by the content distributor. The trusted soft-engine is responsible for the client-side multicast key management functions. It will allow the decryption and display of the actual data only if it has previously inserted a fingerprinting mark carrying the identity of the decoder. Upon finding a pirate copy of any multicast-delivered content, this mark can be used to unambiguously reveal the identity of the receiver that decoded the content from which the pirate copies were made.

Josep Pegueroles, Marcel Fernández, Francisco Rico-Novella, Miguel Soriano

Audit-Based Access Control in Nomadic Wireless Environments

Wireless networks have been rapidly growing in popularity, both in consumer and commercial arenas, but their increasing pervasiveness and widespread coverage raise serious security concerns. Client devices can potentially migrate, usually passing through very light access control policies, between numerous diverse wireless environments, bringing with them software vulnerabilities and possibly malicious code. To cope with this new security threat we propose a new active third-party authentication, authorization, and audit/examination strategy in which, once a device enters an environment, it is subjected to security analysis by the infrastructure, and if it is found to be dangerously insecure, it is immediately taken off the network and denied further access until its vulnerabilities have been fixed. Encouraging results have been achieved using a proof-of-concept model based on current technology and standard open source networking tools.

Francesco Palmieri, Ugo Fiore

Workshop on Optimization: Theories and Applications (OTA 2006)

Cost-Time Trade Off Models Application to Crashing Flow Shop Scheduling Problems

Many traditional cost– time trades off models are computationally expensive to use due to the complexity of algorithms especially for large scale problems. We present a new approach to adapt linear programming to solve cost time trade off problems. The proposed approach uses two different modeling flowshop scheduling into a leveled project management network.

The first model minimizes the makespan subject to a budget limitation, and the second model minimizes the total cost to determine the optimum makespan over the production planning horizon.

Morteza Bagherpour, Siamak Noori, S. Jafar Sadjadi

The ASALB Problem with Processing Alternatives Involving Different Tasks: Definition, Formalization and Resolution

The Alternative Subgraphs Assembly Line Balancing Problem (ASALBP) considers assembly alternatives that determine task processing times and/or precedence relations among the tasks. Capacho and Pastor [3] formalized this problem and developed a mathematical programming model (MILP) in which the assembly alternatives are determined by combining all available processing alternatives of each existing sub-assembly. In this paper an extended definition of the ASALBP is presented in which assembly sub-processes involving different tasks are also considered. Additionally, a mathematical programming model is proposed to formalize and solve the extended version of the ASALBP, which also improves the performance of the former MILP model. Some computational results are included.

Liliana Capacho, Rafael Pastor

Satisfying Constraints for Locating Export Containers in Port Container Terminals

To allocate spaces to outbound containers, the constraint satisfaction technique was applied. Space allocation means pre-assigning spaces for arriving ships so that loading operations can be performed efficiently. The constraints, which are used to maximize the efficiency of yard trucks and transfer cranes, were collected from a real container terminal and formally expressed as constraints. Numerical experiments were conducted to evaluate the performance of the developed algorithm.

Kap Hwan Kim, Jong-Sool Lee

A Price Discrimination Modeling Using Geometric Programming

This paper presents a price discrimination model which determines the product's selling price and marketing expenditure in two markets. We assume production to be a function of price and marketing cost in the two markets. The cost of production is also assumed to be a function of production in both markets. We propose a Multi-Objective Decision Making (MODM) method called the Lexicographic Approach (LA) in order to find an efficient solution between the two objectives for each market. Geometric Programming (GP) is used to solve the resulting model. In our GP implementation, we use a transformed dual problem in order to reduce the model to the optimization of a nonlinear concave function subject to some linear constraints, and we solve the resulting model using a simple grid line search.

Seyed J. Sadjadi, M. Ziaee

Hybrid Evolutionary Algorithms for the Rectilinear Steiner Tree Problem Using Fitness Estimation

The rectilinear Steiner tree problem (RSTP) is to find a minimum-length rectilinear interconnection of a set of terminals in the plane. A key performance measure of an algorithm for the RSTP is the reduction rate, given by the difference between the objective value of the RSTP solution and that of the minimum spanning tree without Steiner points. We introduce four evolutionary algorithms based upon fitness estimation and a hybrid operator. Experimental results show that the quality of the solution is improved by the hybrid operator and the calculation time is reduced by the fitness estimation. The best evolutionary algorithm outperforms previously proposed heuristics; its solution reaches 99.4% of the optimal solution.
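As a concrete reading of this performance measure (the normalization by the spanning-tree length is our assumption; the abstract only names the difference between the two objective values), the reduction rate can be written as:

```latex
% Reduction rate of a rectilinear Steiner tree (RST) relative to the
% minimum spanning tree (MST) without Steiner points.  Normalizing by
% the MST length is our assumption, not stated in the abstract.
\[
  \rho = \frac{L(\mathrm{MST}) - L(\mathrm{RST})}{L(\mathrm{MST})}
\]
% where L(.) denotes the total rectilinear length of a tree; a larger
% rho means the Steiner points removed more of the interconnect length.
```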

Byounghak Yang

Data Reduction for Instance-Based Learning Using Entropy-Based Partitioning

Instance-based learning methods such as the nearest neighbor classifier have proven to perform well in pattern classification in several fields. Despite their high classification accuracy, they suffer from high storage requirements, computational cost, and sensitivity to noise. In this paper, we present a data reduction method for instance-based learning, based on entropy-based partitioning and representative instances. Experimental results show that the new algorithm achieves a high data reduction rate as well as high classification accuracy.

Seung-Hyun Son, Jae-Yearn Kim

Coordinated Inventory Models with Compensation Policy in a Three Level Supply Chain

In this paper, we develop inventory models for the three-level supply chain (one supplier, one warehouse, and one retailer) and consider the problem of determining the optimal integer multiple n of the time interval, the time interval between successive setups and orders, in the coordinated inventory model. We consider three types of individual models (independent model, retailer's point of view model, and supplier's point of view model). The focus of this model is minimization of the coordinated total relevant cost; we then apply the compensation policy for the benefits and losses to our coordinated inventory model. The optimal solution procedure for the developed model is derived and the effects of the compensation policy on the optimal results are studied with the help of numerical examples.

Jeong Hun Lee, Il Kyeong Moon

Using Constraint Satisfaction Approach to Solve the Capacity Allocation Problem for Photolithography Area

This paper addresses the capacity allocation problem for the photolithography area (CAPPA) under an advanced technology environment. The CAPPA problem has two characteristics: process window and machine dedication. Process window means that a wafer needs to be processed on machines that can satisfy its process capability (process specification). Machine dedication means that after the first critical layer of a wafer lot is processed on a certain machine, subsequent critical layers of this lot must be processed on the same machine to ensure good quality of the final products. A production plan constructed without considering the above two characteristics is difficult to execute and to achieve its production targets. Thus, we model the CAPPA problem as a constraint satisfaction problem (CSP), which uses an efficient search algorithm to obtain a feasible solution. Additionally, we propose an upper bound of load-unbalance estimation to reduce the search space of the CSP when searching for an optimal solution. Experimental results show that the proposed model is useful in solving the CAPPA problem in an efficient way.

Shu-Hsing Chung, Chun-Ying Huang, Amy Hsin-I Lee

Scheduling an R&D Project with Quality-Dependent Time Slots

In this paper we introduce the concept of quality-dependent time slots into the project scheduling literature. Quality-dependent time slots refer to pre-defined time windows where certain activities can be executed under ideal circumstances (optimal level of quality). Outside these time windows, there is a loss of quality due to detrimental effects. The purpose is to select a quality-dependent time slot for each activity, resulting in a minimal loss of quality. The contribution of this paper is threefold. First, we show that an R&D project from the bio-technology sector can be transformed into a resource-constrained project scheduling problem (RCPSP). Secondly, we propose an exact search procedure for scheduling this project with the aforementioned quality restrictions. Finally, we test the performance of our procedure on a randomly generated problem set.

Mario Vanhoucke

The Bottleneck Tree Alignment Problems

Given a set W of k sequences (strings) and a tree structure T with k leaves, each of which is labeled with a unique sequence in W, a tree alignment is to label a sequence to each internal node of T. The weight of an edge of the tree alignment is the distance, such as Hamming distance, Levenshtein (edit) distance or reversal distance, between the two sequences labeled to the two ends of the edge. The bottleneck tree alignment problem is to find a tree alignment such that the weight of the largest edge is minimized. A lifted tree alignment is a tree alignment where each internal node v is labeled with one of the sequences labeled to the children of v. The bottleneck lifted tree alignment problem is to find a lifted tree alignment such that the weight of the largest edge is minimized. In this paper, we show that the bottleneck tree alignment problem is NP-complete even when the tree structure is a binary tree and the weight function is metric. For special cases, we present an exact algorithm to solve the bottleneck lifted tree alignment problem in polynomial time. If the weight function is ultrametric, we show that any lifted tree alignment is an optimal bottleneck tree alignment.

Yen Hung Chen, Chuan Yi Tang

Performance Study of a Genetic Algorithm for Sequencing in Mixed Model Non-permutation Flowshops Using Constrained Buffers

This paper presents the performance study of a Genetic Algorithm applied to a mixed model non-permutation flowshop production line. Resequencing is permitted where stations have access to intermittent or centralized resequencing buffers. The access to the buffers is restricted by the number of available buffer places and the physical size of the products. Characteristics such as the difference between the intermittent and the centralized case, the number of buffer places and the distribution of the buffer places are analyzed. Improvements that come with the introduction of constrained resequencing buffers are highlighted.

Gerrit Färber, Anna M. Coves Moreno

Optimizing Relative Weights of Alternatives with Fuzzy Comparative Judgment

This paper presents an optimal weighting approach for maximizing the overall preference value of decision alternatives based on a given set of weights and performance ratings. In policy analysis settings, relative weights for policy alternatives are subjectively assessed by a group of experts or stakeholders via surveys using comparative judgment. A hierarchical pairwise comparison process is developed to help make comparative judgments among a large number of alternatives with fuzzy ratio values. Performance ratings for policy alternatives are obtained from objective measurement or subjective judgement. The preference value of an expert on a policy alternative is obtained by multiplying the weight of the alternative by its performance rating. Two optimization models are developed to determine the optimal weights that maximize the overall preference value of all experts or stakeholders. An empirical study of evaluating Taiwan's air cargo development strategies is conducted to illustrate the approach.

Chung-Hsing Yeh, Yu-Hern Chang

Model and Solution for the Multilevel Production-Inventory System Before Ironmaking in Shanghai Baoshan Iron and Steel Complex

This research deals with the production-inventory problem originating from the ironmaking production system in Shanghai Baoshan Iron and Steel Complex (Baosteel). To solve this multilevel, multi-item, multi-period, capacitated lot-sizing problem, a deterministic mixed integer programming (MIP) model based on the minimization of production and inventory costs is formulated to determine the production and inventory quantities of all materials in each time period under material-balance and capacity constraints. Due to the large number of variables and constraints in this model, a decomposition Lagrangian relaxation (LR) approach is developed. Illustrative numerical examples based upon actual production data from Baosteel are given to demonstrate the effectiveness of the proposed LR approach.

Guoli Liu, Lixin Tang

A Coordination Algorithm for Deciding Order-Up-To Level of a Serial Supply Chain in an Uncertain Environment

This study proposes a method for coordinating the order-up-to level inventory decisions of isolated stations in a serial supply chain to cooperatively compensate for the loss in supply chain performance due to unreliable raw material supply. Doing so improves the customer refill rate and lowers the production cost in an uncertain environment in which the chain is highly decentralized, both material supply and customer demand are unreliable, and delays occur in the information flow and the material flow. The proposed coordination method is then compared to single-station and multiple-station order-up-to policies. Simulated experiments reveal that our proposed method outperforms the others and performs robustly in a variety of demand and supply situations.

Kung-Jeng Wang, Wen-Hai Chih, Ken Hwang

Optimization of Performance of Genetic Algorithm for 0-1 Knapsack Problems Using Taguchi Method

In this paper, a genetic algorithm (GA) is developed for solving 0-1 knapsack problems (KPs) and the performance of the GA is optimized using the Taguchi method (TM). In addition to population size, crossover rate, and mutation rate, three types of crossover operators and three types of reproduction operators are taken into account for solving different 0-1 KPs, each configured differently in terms of problem size and the correlation between the weights and profits of items. Three sizes and three types of instances are generated for 0-1 KPs, and the optimal values of the genetic operators for the different types of instances are investigated using TM. We discuss not only how to determine the significantly effective parameters of the GA developed for 0-1 KPs using TM, but also trace how the optimum values of the parameters vary with the structure of the problem.
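To make the experimental setup concrete, the following is a minimal sketch of a GA for the 0-1 knapsack problem. The death-penalty fitness, one-point crossover, binary tournament selection, and all parameter defaults are our illustrative assumptions, not the operator sets or Taguchi-tuned values used in the paper.

```python
import random

def ga_knapsack(weights, profits, capacity,
                pop_size=50, cx_rate=0.8, mut_rate=0.02, generations=200):
    """Minimal GA for the 0-1 knapsack problem (illustrative only)."""
    n = len(weights)

    def fitness(ind):
        w = sum(wi for wi, g in zip(weights, ind) if g)
        p = sum(pi for pi, g in zip(profits, ind) if g)
        return p if w <= capacity else 0  # death penalty for infeasibility (our choice)

    def select(pop):
        a, b = random.sample(pop, 2)      # binary tournament selection
        return a if fitness(a) >= fitness(b) else b

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(pop), select(pop)
            if random.random() < cx_rate:  # one-point crossover
                cut = random.randrange(1, n)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):         # bit-flip mutation
                nxt.append([1 - g if random.random() < mut_rate else g
                            for g in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# hypothetical usage:
best = ga_knapsack([2, 3, 4, 5], [3, 4, 5, 6], capacity=8)
```

A TM study of the kind described above would then treat pop_size, cx_rate, mut_rate and the operator choices as factors in an orthogonal-array experiment.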

A. S. Anagun, T. Sarac

Truck Dock Assignment Problem with Time Windows and Capacity Constraint in Transshipment Network Through Crossdocks

In this paper, we consider the over-constrained truck dock assignment problem with time windows and a capacity constraint in a transshipment network through crossdocks, where the number of trucks exceeds the number of docks available, the capacity of the crossdock is limited, and the objective is to minimize the total shipping distance. The problem is first formulated as an Integer Programming (IP) model, and then we propose a Tabu Search (TS) and a Genetic Algorithm (GA) that utilize the IP constraints. Computational results are provided, showing that the heuristics perform better than the CPLEX solver on both small-scale and large-scale test sets. Therefore, we conclude that the heuristic search approaches are efficient for the truck dock assignment problem.

Areas: heuristics, industrial applications of AI.

Andrew Lim, Hong Ma, Zhaowei Miao

An Entropy Based Group Setup Strategy for PCB Assembly

The group setup strategy exploits PCB similarity in forming families of boards to minimize the makespan, which is composed of two attributes, the setup time and the placement time. The component similarity of boards in families reduces the setup time between families; meanwhile, the geometric similarity reduces the placement time of boards within families. Current group setup strategies consider the component similarity and the geometric similarity by giving them equal weights or by considering each similarity sequentially. In this paper, we propose an improved group setup strategy which combines component similarity and geometric similarity simultaneously. The entropy method is used to determine the weight of each similarity by capturing the importance of each similarity in different production environments. Test results show that the entropy-based group setup strategy outperforms existing group setup strategies.

In-Jae Jeong

Cross-Facility Production and Transportation Planning Problem with Perishable Inventory

This study addresses a production and distribution planning problem in a dynamic, two-stage supply chain. This supply chain consists of a number of facilities and retailers. The model considers that the final product is perishable and therefore has a limited shelf life. We formulate this problem as a network flow problem with a fixed-charge cost function, which is NP-hard. A primal-dual heuristic is developed that provides lower and upper bounds. The models proposed can be used for operational decisions.

Sandra Duni Ekşioğlu, Mingzhou Jin

A Unified Framework for the Analysis of M/G/1 Queue Controlled by Workload

In this paper, we develop a unified framework for the analysis of the waiting time, the sojourn time and the queue length of the M/G/1 queue in which the server is controlled by workload. We apply our framework to the D-policy M/G/1 queue and confirm the results that already exist in the literature. We also use our approach to derive, for the first time, the sojourn time distribution of an arbitrary customer in the M/G/1 queue under the D-policy. The methodologies developed in this paper can be applied to a wide range of M/G/1 queueing systems in which the server is controlled by workload.

Ho Woo Lee, Se Won Lee, Won Ju Seo, Sahng Hoon Cheon, Jongwoo Jeon

Tabu Search Heuristics for Parallel Machine Scheduling with Sequence-Dependent Setup and Ready Times

We consider the problem of scheduling a set of independent jobs on parallel machines for the objective of minimizing total tardiness. Each job may have sequence-dependent setup and distinct ready times, i.e., the time at which the job is available for processing. Due to the complexity of the problem, tabu search heuristics are suggested that incorporate new methods to generate the neighborhood solutions. Three initial solution methods are also suggested. Computational experiments are done on a number of randomly generated test problems, and the results show that the tabu search heuristics suggested in this paper outperform the existing one.

Sang-Il Kim, Hyun-Seon Choi, Dong-Ho Lee

The Maximum Integer Multiterminal Flow Problem

Given an edge-capacitated graph and k terminal vertices, the maximum integer multiterminal flow problem (MaxIMTF) is to route the maximum number of flow units between the terminals. For directed graphs, we introduce a new parameter k_L ≤ k and prove that MaxIMTF is NP-hard when k = k_L = 2 and when k_L = 1 and k = 3, and polynomial-time solvable when k_L = 0 and when k_L = 1 and k = 2. We also give a 2 log₂(k_L + 2)-approximation algorithm for the general case. For undirected graphs, we give a family of valid inequalities for MaxIMTF that has several interesting consequences, and show a correspondence with valid inequalities known for MaxIMTF and for the associated minimum multiterminal cut problem.

Cédric Bentz

Routing with Early Ordering for Just-In-Time Manufacturing Systems

Parts required in Just-In-Time manufacturing systems are usually picked up from suppliers on a daily basis, and the routes are determined based on average demand. Because of high demand variance, static routes result in low truck utilization and occasional overflow. Dynamic routing with limited early ordering can significantly reduce transportation costs. An integrated mixed integer programming model is presented to capture transportation cost, early-ordering inventory cost and stop cost with the concept of a rolling horizon. A four-stage heuristic algorithm is developed to solve a real-life problem. The stages of the algorithm are: determining the number of trucks required, grouping, early ordering, and routing. Significant cost savings are estimated based on real data.

Mingzhou Jin, Kai Liu, Burak Eksioglu

A Variant of the Constant Step Rule for Approximate Subgradient Methods over Nonlinear Networks

The efficiency of network flow techniques can be exploited in the solution of nonlinearly constrained network flow problems (NCNFP) by means of approximate subgradient methods (ASM). We propose to solve the dual problem with an ASM that uses a variant of the well-known constant step rule of Shor. In this work, the convergence of this method is analyzed and its efficiency is compared with that of other approximate subgradient methods on NCNFP.

Eugenio Mijangos

On the Optimal Buffer Allocation of an FMS with Finite In-Process Buffers

This paper considers a buffer allocation problem for a flexible manufacturing system composed of several parallel workstations, each with both limited input and output buffers, where machine blocking is allowed and two automated guided vehicles are used for input and output material handling. Some interesting properties are derived that are useful for characterizing the optimal allocation of buffers for the given FMS model. Using these properties, a solution algorithm is developed to solve the optimal buffer allocation problem, and a variety of differently-sized decision parameters are numerically tested to show the efficiency of the algorithm.

Soo-Tae Kwon

Optimization Problems in the Simulation of Multifactor Portfolio Credit Risk

We consider some optimization problems arising in an efficient simulation method for the measurement of the tail of portfolio credit risk. When we apply an importance sampling (IS) technique, it is necessary to characterize the important regions. In this paper, we consider the computation of directions for the IS, which becomes hard in the multifactor case. We show that this problem is NP-hard. To overcome this difficulty, we transform the original problem into subset sum and quadratic optimization problems. We provide numerical support that these reformulations are computationally tractable.

Wanmo Kang, Kyungsik Lee

Two-Server Network Disconnection Problem

Consider a set of users and servers connected by a network. Each server provides a unique service which is of certain benefit to each user. Now comes an attacker, who wishes to destroy a set of edges of the network in the fashion that maximizes his net gain, namely, the total disconnected benefit of users minus the total edge-destruction cost. We first discuss that the problem is polynomially solvable in the single-server case. In the multiple-server case, we will show, the problem is, however, NP-hard. In particular, when there are only two servers, the network disconnection problem becomes intractable. Then a 3/2-approximation algorithm is developed for the two-server case.

Byung-Cheon Choi, Sung-Pil Hong

One-Sided Monge TSP Is NP-Hard

The Travelling Salesman Problem (TSP) is a classical NP-hard optimisation problem. There exist, however, special cases of the TSP that can be solved in polynomial time. Many of the well-known TSP special cases have been characterized by imposing special four-point conditions on the underlying distance matrix. Probably the most famous of these special cases is the TSP on a Monge matrix, which is known to be polynomially solvable (as are some other generally NP-hard problems restricted to this class of matrices). By relaxing the four-point conditions corresponding to Monge matrices in different ways, one can define other interesting special cases of the TSP, some of which turn out to be polynomially solvable, and some NP-hard. However, the complexity status of one such relaxation, which we call the one-sided Monge TSP (also known as the TSP on a relaxed Supnick matrix), has remained unresolved. In this note, we show that this version of the TSP is NP-hard. This completes the full classification of all possible four-point conditions for the symmetric TSP.

Vladimir Deineko, Alexander Tiskin

On Direct Methods for Lexicographic Min-Max Optimization

The approach called Lexicographic Min-Max (LMM) optimization depends on searching for solutions minimal according to the lex-max order on a multidimensional outcome space. LMM is a refinement of standard Min-Max optimization, but in the former, in addition to the largest outcome, we also minimize the second largest outcome (provided that the largest one remains as small as possible), then the third largest (provided that the two largest remain as small as possible), and so on. The necessity of point-wise ordering of outcomes within the lexicographic optimization scheme makes the LMM problem hard to implement. For convex problems it is possible to use iterative algorithms solving a sequence of properly defined Min-Max problems by eliminating some blocked outcomes. In general, however, no blocked outcome may exist, which precludes iterative Min-Max processing. In this paper we analyze two alternative optimization models that allow forming lexicographic sequential procedures for various nonconvex (possibly discrete) LMM problems. Both approaches are based on sequential optimization of directly defined artificial criteria. The criteria can be introduced into the original model with some auxiliary variables and linear inequalities; thus, the methods are easily implementable.
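For readers unfamiliar with the ordering, a small worked example (ours, not drawn from the paper): lexicographic min-max compares outcome vectors after sorting them in non-increasing order.

```latex
% Let \theta(y) denote the vector y with its components sorted in
% non-increasing order.  Outcome vector y'' is lexicographically
% min-max preferred to y' iff
\[
  \theta(y'') <_{\mathrm{lex}} \theta(y').
\]
% Example (ours): y' = (3, 5, 2) and y'' = (5, 1, 1) give
% \theta(y') = (5, 3, 2) and \theta(y'') = (5, 1, 1), so y'' is
% preferred: the largest outcomes tie at 5, and the second largest
% outcome is smaller for y'' (1 < 3).
```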

Włodzimierz Ogryczak, Tomasz Śliwiński

Multivariate Convex Approximation and Least-Norm Convex Data-Smoothing

The contents of this paper are two-fold. First, we present a method to approximate multivariate convex functions by piecewise linear upper and lower bounds. We consider a method that is based on function evaluations only. However, to use this method, the data have to be convex. Unfortunately, even if the underlying function is convex, this is not always the case due to (numerical) errors. Therefore, secondly, we present a multivariate data-smoothing method that smooths nonconvex data. We consider both the case in which we have only function evaluations and the case in which we also have derivative information. Furthermore, we show that our methods are polynomial-time methods. We illustrate this methodology by applying it to some examples.

Alex Y. D. Siem, Dick den Hertog, Aswin L. Hoffmann

Linear Convergence of Tatônnement in a Bertrand Oligopoly

We show the linear convergence of the tatônnement scheme in a Bertrand oligopoly price competition game using a possibly asymmetric attraction demand model with convex costs. To demonstrate this, we also show the existence of the equilibrium.

Guillermo Gallego, Woonghee Tim Huh, Wanmo Kang, Robert Phillips

Design for Using Purpose of Assembly-Group

In this paper, the disassemblability is determined by detailed weighting factors according to the using purpose of an assembly-group. Based on the disassembly mechanism and the characteristics of parts and assembly-groups, the disassembly functions are classified into three categories: accessibility, transmission of disassembly power, and disassembly structure. To determine the influencing parameters, some assembly-groups of an automobile are disassembled. The weighting values for the influencing factors are calculated by using AHP (Analytic Hierarchy Process). Using these weighting values, the point tables for the using purpose are determined. Finally, an optimal design guideline for the using purpose of an assembly-group can be derived.

Hak-Soo Mok, Chang-Hyo Han, Chan-Hyoung Lim, John-Hee Hong, Jong-Rae Cho

A Conditional Gaussian Martingale Algorithm for Global Optimization

A new stochastic algorithm for the determination of a global minimum of a real-valued continuous function defined on K, a compact set of ℝⁿ, having a unique global minimizer in K, is introduced and studied; a context discussion is presented, and implementations are used to compare the performance of the algorithm with other algorithms. The algorithm may be thought of as belonging to the random search class, but although we use Gaussian distributions, the mean is changed at each step to be the intermediate minimum found at the preceding step, and the standard deviations, on the diagonal of the covariance matrix, are halved from one step to the next. The convergence proof is simple, relying on the fact that the sequence of intermediate random minima is a uniformly integrable conditional Gaussian martingale.

Manuel L. Esquível

Finding the Number of Clusters Minimizing Energy Consumption of Wireless Sensor Networks

A wireless sensor network consisting of a large number of small low-power sensors can be an effective tool for the collection and integration of data in a variety of environments. Here, data exchange between the sensors and the base station needs to be designed to conserve the limited energy resources of the sensors. Grouping the sensors into clusters has been recognized as an efficient approach for saving the energy of the sensor nodes. In this paper we propose an analytical model based on a homogeneous spatial Poisson process, which yields the number of clusters in a sensor network that minimizes the total energy spent for data exchange. Computer simulation on sensor networks of various sizes with the LEACH algorithm reveals that the proposed model is very accurate. We also compare the proposed model with an earlier one, and it turns out that the proposed model is more accurate.

Hyunsoo Kim, Hee Yong Youn

A Two-Echelon Deteriorating Production-Inventory Newsboy Model with Imperfect Production Process

This paper discusses a two-echelon distribution-free deteriorating production-inventory newsboy model. Distinct from previous models, this study considers a deteriorating commodity with an imperfect production process. The model also considers JIT (Just-In-Time) deliveries and integrates the marketing and manufacturing channels. This model can be used to solve the production and replenishment problem for the manufacturer and the retailer. We derive the optimal production size, the wholesale price and the optimal number of deliveries. The model provides insight into defect-rate reduction. We find an upper bound for the optimal number of the material's JIT deliveries. The necessary and sufficient conditions are explored to gain managerial insights and economic implications.

Hui-Ming Wee, Chun-Jen Chung

Mathematical Modeling and Tabu Search Heuristic for the Traveling Tournament Problem

As professional sports have become big businesses all over the world, much research on sports scheduling problems has been carried out over the last two decades. The traveling tournament problem (TTP) is defined as minimizing the total traveling distance for all teams in the league. In this study, a mathematical model for the TTP is presented, formulated as an integer program (IP). In order to solve practical problems with a large number of teams, a tabu search heuristic is suggested. Also, the concepts of alternation and intimacy are introduced for effective neighborhood search. Experiments on several instances are conducted to evaluate performance. It is shown that the proposed heuristic performs well with computational efficiency.

Jin Ho Lee, Young Hoon Lee, Yun Ho Lee

An Integrated Production-Inventory Model for Deteriorating Items with Imperfect Quality and Shortage Backordering Considerations

In this study we present a production-inventory model for deteriorating items with vendor-buyer integration. A periodic delivery policy for a vendor and a production-inventory model with imperfect quality for a buyer are established. Such implicit assumptions (deteriorating items, imperfect quality) are reasonable in view of the fact that poor-quality items do exist during production. Defective items are picked out during the screening process. Shortages are completely backordered. The study shows that our model is a generalization of the models in the current literature. An algorithm and numerical analysis are given to illustrate the proposed solution procedure. Computational results indicate that our model leads to a more realistic result.

H. M. Wee, Jonas C. P. Yu, K. J. Wang

A Clustering Algorithm Using the Ordered Weight Sum of Self-Organizing Feature Maps

Clustering is the grouping of similar objects into clusters. There are many existing approaches using Self-Organizing Feature Maps (SOFMs), but they have problems with a small number of output-layer nodes and with initial weights. This paper suggests one-dimensional output-layer nodes in SOFMs, where the number of output-layer nodes is larger than the number of clusters to be found and the output-layer nodes are arranged in ascending order of the sum of their weights. We can then locate input data in the SOFM output nodes and classify the input data using the Euclidean distance. The suggested algorithm was tested on the well-known IRIS data and a machine-part incidence matrix. The results of this computational study demonstrate the superiority of the suggested algorithm.

Jong-Sub Lee, Maing-Kyu Kang

Global Optimization of the Scenario Generation and Portfolio Selection Problems

We consider the global optimization of two problems arising from financial applications. The first problem originates from the portfolio selection problem when high-order moments are taken into account. The second issue we address is the problem of scenario generation. Both problems are non-convex, large-scale, and highly relevant in financial engineering. For the two problems we consider, we apply a new stochastic global optimization algorithm that has been developed specifically for this class of problems. The algorithm is an extension to the constrained case of the so-called diffusion algorithm. We discuss how a financial planning model (of realistic size) can be solved to global optimality using a stochastic algorithm. Initial numerical results are given that show the feasibility of the proposed approach.

Panos Parpas, Berç Rustem

A Generalized Fuzzy Optimization Framework for R&D Project Selection Using Real Options Valuation

The global marketplace and intense competition in the business environment lead organizations to focus on selecting the best R&D project portfolio among available projects using their scarce resources in the most effective manner. This happens to be a sine qua non for high-technology firms to sharpen their competitive advantage and realize long-term survival with sustainable growth. To accomplish that, firms should take into account both the uncertainty inherent in R&D, using appropriate valuation techniques accounting for flexibility in making investment decisions, and all possible interactions between the candidate projects within an optimization framework. This paper provides a fuzzy optimization model for dealing with the complexities and uncertainties regarding the construction of an R&D project portfolio. Real options analysis, which accounts for managerial flexibility, is employed to correct the deficiency of traditional discounted cash flow valuation that excludes any form of flexibility. An example is provided to illustrate the proposed decision approach.

E. Ertugrul Karsak

Supply Chain Network Design and Transshipment Hub Location for Third Party Logistics Providers

Transshipment hubs are the places where cargo can be re-consolidated and transportation modes can be changed, and they are widely used in the logistics industry. In this article, we consider a supply chain network design problem for a third-party logistics provider in which we obtain the locations of transshipment hubs among promising hub locations, with the objective of minimizing the total system cost, which includes the transportation cost, the fixed cost and the processing cost. In the problem, the unit cost of transporting cargo between an origin-destination pair is non-linear and is a decreasing step function, and each unit of cargo passing through a transshipment hub is charged a fixed processing cost. The problem is formulated as a mixed integer programming problem. However, due to the complexity of the problem, a heuristic solution approach is proposed. The proposed solution is then implemented for a third-party logistics provider, and their experience shows that significant cost savings are obtained compared with the current practice.

Seungwoo Kwon, Kyungdo Park, Chulung Lee, Sung-Shick Kim, Hak-Jin Kim, Zhong Liang

A Group Search Optimizer for Neural Network Training

A novel optimization algorithm, the Group Search Optimizer (GSO) [1], has been successfully developed, inspired by animal behavioural ecology. The algorithm is based on a Producer-Scrounger model of animal behaviour, which assumes group members search either for 'finding' (producer) or for 'joining' (scrounger) opportunities. Animal scanning mechanisms (e.g., vision) are incorporated to develop the algorithm. In this paper, we apply the GSO to Artificial Neural Network (ANN) training to further investigate its applicability to real-world problems. The parameters of a 3-layer feed-forward ANN, including connection weights and biases, are tuned by the GSO algorithm. Two real-world classification problems have been employed as benchmark problems to assess the performance of the GSO-trained ANN (GSOANN). In comparison with other sophisticated machine learning techniques proposed for ANN training in recent years, including some ANN ensembles, GSOANN has better convergence and generalization performance on the two benchmark problems.

S. He, Q. H. Wu, J. R. Saunders

Application of Two-Stage Stochastic Linear Program for Portfolio Selection Problem

We consider a portfolio selection problem under the consideration of the dynamic closing time of the portfolio. The selection strategy is to take a long position on the stocks and a short position on an index future. We close our portfolio whenever our profit exceeds the predetermined target during the investment period; otherwise, we hold the portfolio until the maturity date of the future. Our purpose is to have a profitable portfolio with a steady return which is higher than the interest rate of savings and independent of the market. To deal with the stock selection problem and, at the same time, the uncertainty in the closing time due to the fluctuation of the market, we define the corresponding optimization problem as a two-stage stochastic linear program (two-stage SLP). Our models are tested on real-world data and the results are consistent with what we expected.

Kuo-Hwa Chang, Huifen Chen, Ching-Fen Lin

General Tracks

Hierarchical Clustering Algorithm Based on Mobility in Mobile Ad Hoc Networks

This paper proposes a hierarchical clustering method based on relative mobility patterns in a mobile ad hoc environment. The relative mobility pattern between two nodes is evaluated using received messages, and a cluster is created by grouping nodes with a relative mobility below a specific threshold. Also, we create a hierarchical clustering structure by allowing merges among clusters based on the mobility pattern. In this way, the proposed mechanism can increase the continuity of the total cluster structure. Since we allow the combination of clusters, we can reduce the number of clusters and the number of messages required for routing. To evaluate the performance of our mechanism, we compared it with the existing LCC and WCA schemes using GloMoSim. The simulation results show that our scheme provides higher stability and efficiency than the existing schemes.

Sulyun Sung, Yuhwa Seo, Yongtae Shin

An Alternative Approach to the Standard Enterprise Resource Planning Life Cycle: Enterprise Reference Metamodeling

The Enterprise Resource Planning (ERP) systems development is based on the initial definition of an enterprise reference model, whose richness and generality will basically determine the flexibility of the resulting software package to accommodate specific requirements. The desired accommodation is rarely accomplished without costly customization processes, which are frequently unaffordable for the small and medium enterprises. In this paper, an alternative ERP development approach, stemming from the implementation of an enterprise reference metamodel, is proposed. We present an analysis of the resulting alternative ERP life cycle, particularly focusing on the metamodel that constitutes the core of the proposal. The results obtained in a complete enterprise software development project, encompassing software package development, tailored implementation and some post-implementation customization, show that the proposed approach facilitates the identification and fulfillment of the customer’s specific requests at a reduced cost, even if they arise after the implementation.

Miguel Gutiérrez, Alfonso Durán, Pedro Cocho

Static Analysis Based Software Architecture Recovery

Recovering software architectures is a key step in reengineering legacy (procedural) programs into an object-oriented platform. Identifying, extracting and reengineering software architectures that implement abstractions within existing systems is a promising cost-effective way to create reusable assets and reengineer legacy systems. We introduce a new approach to recovering software architectures in legacy systems. The approach described in this paper concentrates especially on how to find software architectures and on how to establish the relationships of the identified software components. This paper summarizes our experiences with using computer-supported methods to facilitate the reuse of the software architectures of legacy systems by recovering the behavior of the systems using systematic methods, and illustrates their use in the context of the Janus system.

Jiang Guo, Yuehong Liao, Raj Pamula

A First Approach to a Data Quality Model for Web Portals

Technological advances and the use of the Internet have favoured the appearance of a great diversity of web applications, among them web portals. Through them, organizations develop their businesses in a highly competitive environment. A decisive factor for this competitiveness is the assurance of data quality. In recent years, several research works on web data quality have been developed. However, there is a lack of specific proposals for web portal data quality. Our aim is to develop a data quality model for web portals focused on three aspects: the data quality expectations of data consumers, the software functionality of web portals, and the web data quality attributes compiled from a literature review. In this paper, we present the first version of our model.

Angelica Caro, Coral Calero, Ismael Caballero, Mario Piattini

Design for Environment-Friendly Product

This paper presents an approach for systematically integrating design for X (DFX) into environmentally conscious product design. Evaluation methods and algorithms for design for assembly, design for disassembly, and design for recycling are proposed to estimate the assembly times and the disassembly times. From these results, the assemblability, disassemblability, and recyclability of parts can be evaluated. Finally, a new integrated DFX system is implemented to help product designers analyze environment-friendly products during the design process.

Hak-Soo Mok, Jong-Rae Cho, Kwang-Sup Moon

Performance of HECC Coprocessors Using Inversion-Free Formulae

The HyperElliptic Curve Cryptosystem (HECC) has been studied quite extensively in recent years. In the open literature one can find results on how to improve the group operations of HECC as well as implementations for various types of processors. There have also been some efforts to implement HECC on hardware devices, such as FPGAs. Only one of these works, however, deals with the inversion-free formulae to compute the group operations of HECC.

We present inversion-free group operations for the HEC y² + xy = x⁵ + f₁x + f₀, and we target characteristic-two fields. The reason is to allow a fair comparison with hardware architectures using the affine case presented in [BBWP04]. In the main part of the paper we use these results to investigate various hardware architectures for a HECC VLSI coprocessor. If area constraints are not considered, scalar multiplication can be performed in 19,769 clock cycles using three field multipliers (of type D = 32), one field adder and one field squarer, where D indicates the digit-size of the multiplier. However, the optimal solution in terms of latency and area uses two multipliers (of type D = 4), one addition and one squaring. The main finding of the present contribution is that coprocessors based on the inversion-free formulae should be preferred to those using group operations containing inversion. This holds despite the fact that one field inversion in the affine HECC group operation is traded for up to 24 field multiplications in the inversion-free case.

Thomas Wollinger, Guido Bertoni, Luca Breveglieri, Christof Paar

Metrics of Password Management Policy

The need to manage the computer security of an institution implies an evaluation phase, and the most common method to carry out this evaluation consists of using a set of metrics. Since any information system needs an authentication mechanism, the most widely used being those based on passwords, in this article we propose a set of metrics for password management policies based on the most important factors in this authentication mechanism. Together with the metrics, we propose a quality indicator derived from these metrics that gives a global view of the quality of the password management policy in use, and we present a complete example of the calculation of the proposed metrics. Finally, we indicate the future work to be performed to check the validity and usefulness of the proposed metrics.

Carlos Villarrubia, Eduardo Fernández-Medina, Mario Piattini

Using UML Packages for Designing Secure Data Warehouses

Due to the sensitive data contained in Data Warehouses (DWs), it is essential to specify security measures from the early stages of DW design and to enforce them. In this paper, we present a UML profile to represent multidimensional and security aspects of our conceptual modeling. Our approach proposes the use of UML packages in order to group classes together into higher-level units, creating different levels of abstraction and therefore simplifying the final model. Furthermore, we present an extension of the relational model to consider the security and audit measures represented in the conceptual modeling. To accomplish this, we build on the Relational Package of the Common Warehouse Metamodel (CWM) and extend it to properly represent all security and audit rules defined in the conceptual modeling of DWs. Finally, we show an example to illustrate the applicability of our proposal.

Rodolfo Villarroel, Emilio Soler, Eduardo Fernández-Medina, Juan Trujillo, Mario Piattini

Practical Attack on the Shrinking Generator

This work proposes an efficient attack on the Shrinking Generator based on its characterization by means of Linear Hybrid Cellular Automata. The algorithm uses the computation of the characteristic polynomials of specific sub-automata and the generation of the Galois field associated to one of the Linear Feedback Shift Register components of the generator. Both theoretical and empirical evidence for the effectiveness of the attack is given. The two main advantages of the described cryptanalysis are the determinism of the bit prediction and the possible application of the obtained results to different generators.

Pino Caballero-Gil, Amparo Fúster-Sabater

A Comparative Study of Proposals for Establishing Security Requirements for the Development of Secure Information Systems

Nowadays, security solutions focus mainly on providing security defences, instead of solving one of the main reasons for security problems: the need for appropriate Information Systems (IS) design. In this paper, a comparative analysis of eight relevant technical proposals that place great importance on the establishment of security requirements in the development of IS is carried out. These proposals provide significant contributions in aspects related to security and can serve as a basis for new methodologies or as extensions to existing ones. Nevertheless, they only partly satisfy the necessary criteria for the establishment of security requirements, with guarantees and integration in the development of IS. Thus, we conclude that they are not specific enough for dealing with security requirements in the first stages of software development in a systematic and intuitive way, though parts of the proposals, if taken as complementary measures, can be used in that manner.

Daniel Mellado, Eduardo Fernández-Medina, Mario Piattini

Stochastic Simulation Method for the Term Structure Models with Jump

The Monte Carlo method, a stochastic simulation method, is used by financial engineers to evaluate many financial derivatives. In many fields, Monte Carlo simulation is harder to implement and analyse than other numerical methods. In this paper, we derive term structure models with jump and perform Monte Carlo simulations for them. We also make a comparison between the term structure models of interest rates with jump and HJM models based on jump. Bond pricing with Monte Carlo simulation is investigated for the term structure models with jump.

Kisoeb Park, Moonseong Kim, Seki Kim

The Ellipsoidal l_p Norm Obnoxious Facility Location Problem

We consider locating an obnoxious facility. We use the weighted ellipsoidal l_p norm to accurately measure the distance. We derive the necessary and sufficient conditions for local optimality and transform the optimality conditions into a system of nonlinear equations. We use Newton's method with perturbed nonmonotone line search to solve the equations. Some numerical experiments are presented.

Yu Xia

On the Performance of Recovery Rate Modeling

To ensure accurate predictions of loss given default it is necessary to test the goodness-of-fit of the recovery rate data to the Beta distribution, assuming that its parameters are unknown. In the presence of unknown parameters, the Cramer-von Mises test statistic is neither asymptotically distribution free nor parameter free. In this paper, we propose to compute approximated critical values with a parametric bootstrap procedure. Some simulations show that the bootstrap procedure works well in practice.
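A minimal sketch of the parametric bootstrap described here, using SciPy's Beta fit and Cramér-von Mises statistic; the number of replications B, the fixed location/scale, and the 5% level are our illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy import stats

def cvm_beta_bootstrap(data, B=999, alpha=0.05, seed=0):
    """Cramer-von Mises GOF test for the Beta distribution with estimated
    parameters; the critical value is approximated by a parametric bootstrap."""
    rng = np.random.default_rng(seed)
    a, b, _, _ = stats.beta.fit(data, floc=0, fscale=1)       # MLE on [0, 1]
    t_obs = stats.cramervonmises(data, stats.beta(a, b).cdf).statistic
    t_boot = []
    for _ in range(B):
        sample = stats.beta(a, b).rvs(size=len(data), random_state=rng)
        a_s, b_s, _, _ = stats.beta.fit(sample, floc=0, fscale=1)  # re-estimate
        t_boot.append(stats.cramervonmises(
            sample, stats.beta(a_s, b_s).cdf).statistic)
    crit = np.quantile(t_boot, 1 - alpha)                     # bootstrap critical value
    return t_obs, crit, t_obs > crit                          # True -> reject Beta fit
```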

J. Samuel Baixauli, Susana Alvarez

Using Performance Profiles to Evaluate Preconditioners for Iterative Methods

We evaluate performance profiles as a method for comparing preconditioners for iterative solvers by using them to address three questions that have previously been asked about incomplete LU preconditioners. For example, we use performance profiles to quantify the observation that if a system can be solved by a preconditioned iterative solver, then that solver is likely to use less space, and not much more time, than a direct solver. In contrast, we also observe that performance profiles are difficult to use for choosing specific parameter values. We end by discussing the role performance profiles might eventually play in helping users choose a preconditioner.
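For reference, a minimal sketch of the standard Dolan-Moré performance profile underlying this kind of comparison; the timing matrix below is a hypothetical placeholder.

```python
import numpy as np

def performance_profiles(times, taus):
    """times: (n_problems, n_solvers) array of costs (e.g., seconds or memory);
    np.inf marks a failure.  Returns, per solver, the fraction of problems
    solved within a factor tau of the best solver, for each tau."""
    best = times.min(axis=1, keepdims=True)   # best cost on each problem
    ratios = times / best                     # performance ratios r_{p,s}
    return {s: [(ratios[:, s] <= t).mean() for t in taus]
            for s in range(times.shape[1])}

# hypothetical usage: rows are problems, columns are two preconditioners
times = np.array([[1.0, 2.5],
                  [3.0, 3.0],
                  [np.inf, 4.0]])             # preconditioner 1 fails on problem 3
profiles = performance_profiles(times, taus=[1, 2, 4, 8])
```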

Michael Lazzareschi, Tzu-Yi Chen

Multicast ω-Trees Based on Statistical Analysis

In this paper, we study the efficient multicast routing tree problem with QoS requirements. A new multicast weight parameter is proposed that efficiently combines two independent measures, the link cost and the delay. The weight ω ∈ [0,1] plays an important role in combining the two measures. If ω approaches 0, the tree delay decreases; on the contrary, if it is close to 1, the tree cost decreases. Therefore, once ω is decided, an efficient multicast tree can be found. A case study shows the various multicast trees obtained for each ω. When network users have various QoS requirements, the proposed multicast weight parameter is very informative for them.
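A minimal sketch of one plausible reading of the combination (the min-max normalization and the Dijkstra-based tree construction are our assumptions; the paper's exact construction may differ):

```python
import heapq

def combined_weight(cost, delay, w, c_max, d_max):
    """Convex combination of normalized link cost and delay.
    w close to 1 emphasizes (and so reduces) tree cost; w close to 0
    emphasizes tree delay, matching the behaviour described above."""
    return w * (cost / c_max) + (1.0 - w) * (delay / d_max)

def multicast_tree(graph, source, w):
    """graph: {u: [(v, cost, delay), ...]} (assumed connected, directed).
    Builds a shortest-path tree under the combined weight with Dijkstra;
    returns parent pointers defining the tree."""
    c_max = max(c for es in graph.values() for _, c, _ in es)
    d_max = max(d for es in graph.values() for _, _, d in es)
    dist, parent = {source: 0.0}, {source: None}
    heap = [(0.0, source)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, c, d in graph.get(u, []):
            nd = du + combined_weight(c, d, w, c_max, d_max)
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return parent
```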

Moonseong Kim, Young-Cheol Bang, Hyunseung Choo

The Gateways Location and Topology Assignment Problem in Hierarchical Wide Area Networks: Algorithms and Computational Results

This paper studies the problem of designing a two-level hierarchical-structure wide area network. The goal is to select the gateway locations, the 2nd-level network topology, channel capacities and flow routes in order to minimize the total average delay per packet in the hierarchical network subject to a budget constraint. The problem is NP-complete; thus, the branch and bound method is used to construct an exact algorithm, and a heuristic algorithm is also proposed. Some computational results are reported. Based on computational experiments, several properties of the considered problem, important from a practical point of view, are formulated.

Przemyslaw Ryba, Andrzej Kasprzak

Developing an Intelligent Supplier Chain System Collaborating with Customer Relationship Management

We propose an intelligent supply chain system collaborating with customer relationship management system in order to assess change in a supply partner’s capability over a period of time. The system embeds machine learning methods and is designed to evaluate a partner’s supply capability that can change over time and to satisfy different procurement conditions across time periods. We apply the system to the procurement and management of the agricultural industry.

Gye Hang Hong, Sung Ho Ha

The Three-Criteria Servers Replication and Topology Assignment Problem in Wide Area Networks

The design of wide area networks is usually an optimization process with an accurately selected optimization criterion. The most utilized criteria are the quality of service in the network (indicated by the average packet delay), the capacity cost (channel capacity leasing cost) and server costs (the cost of connecting servers or replicas at nodes). This paper studies the problem of designing wide area networks taking those three criteria into account. The goal is to select the server replica allocation at nodes, the network topology, channel capacities and flow routes in order to minimize a linear combination of the average delay per packet, the capacity cost and the server cost. The problem is NP-complete. An exact algorithm, based on the branch and bound method, is proposed. Some computational results are reported and several properties of the considered problem are formulated.

Marcin Markowski, Andrzej Kasprzak

An Efficient Multicast Tree with Delay and Delay Variation Constraints

With the rapid evolution of real-time multimedia applications such as audio/video conferencing, interactive distributed games and real-time remote control systems, a certain Quality of Service (QoS) needs to be guaranteed in the underlying networks. Multicast routing algorithms should support the required QoS. There are two important QoS parameters, bounded delay and delay variation, that need to be guaranteed in order to support real-time multimedia applications. Here we solve the Delay and delay Variation Bounded Multicast Tree (DVBMT) problem, which has been proved to be NP-complete. In this paper, we propose an efficient algorithm for DVBMT. The performance enhancement is up to about 21.7% in terms of delay variation as compared to the well-known algorithm KBC [9].

Moonseong Kim, Young-Cheol Bang, Jong S. Yang, Hyunseung Choo

Algorithms on Extended (δ, γ)-Matching

Approximate pattern matching plays an important role in various applications, such as bioinformatics, computer-aided music analysis and computer vision. We focus on (δ, γ)-matching. Given a text T of length n, a pattern P of length m, and two parameters δ and γ, the aim is to find all the substrings T[i, i+m−1] such that (a) ∀ 1 ≤ j ≤ m, |T[i+j−1] − P[j]| ≤ δ (δ-matching), and (b) ∑_{1 ≤ j ≤ m} |T[i+j−1] − P[j]| ≤ γ (γ-matching). In this paper we consider three variations of (δ, γ)-matching: amplified matching, transposition-invariant matching, and amplified transposition-invariant matching. For each problem we propose a simple and efficient algorithm which is easy to implement.
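A naive O(nm) checker that follows the definition directly (the paper's algorithms are more efficient; this brute-force version is only our illustration):

```python
def delta_gamma_matches(T, P, delta, gamma):
    """Return all 0-based positions i at which P (delta, gamma)-matches
    T[i : i+m]: every symbol differs by at most delta, and the sum of
    absolute differences is at most gamma.  T, P are integer sequences."""
    n, m = len(T), len(P)
    matches = []
    for i in range(n - m + 1):
        diffs = [abs(T[i + j] - P[j]) for j in range(m)]
        if max(diffs) <= delta and sum(diffs) <= gamma:  # conditions (a) and (b)
            matches.append(i)
    return matches

# usage on pitch sequences, as in computer-aided music analysis:
print(delta_gamma_matches([60, 62, 64, 65, 67], [61, 63, 65],
                          delta=1, gamma=3))  # -> [0, 1]
```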

Inbok Lee, Raphaël Clifford, Sung-Ryul Kim

SOM and Neural Gas as Graduated Nonconvexity Algorithms

Convergence of the Self-Organizing Map (SOM) and Neural Gas (NG) is usually contemplated from the point of view of stochastic gradient descent. This class of algorithms is characterized by a very slow convergence rate. However we have found empirically that One-Pass realizations of SOM and NG provide good results or even improve over the slower realizations, when the performance measure is the distortion. One-Pass realizations use each data sample item only once, imposing a very fast reduction of the learning parameters that does not conform to the convergence requirements of stochastic gradient descent. That empirical evidence leads us to propose that the appropriate setting for the convergence analysis of SOM, NG and similar competitive clustering algorithms is the field of Graduated Nonconvexity algorithms. We show they can easily be put in this framework.

Ana I. González, Alicia D’Anjou, M. Teresa García-Sebastian, Manuel Graña

Analysis of Multi-domain Complex Simulation Studies

Complex simulations are increasingly important in systems analysis and design. In some cases simulations can be exhaustively validated against experiment and taken to be implicitly accurate. However, in domains where only limited validation of the simulations can be performed, implications of simulation studies have historically been qualitative. Validation is notably difficult in cases where experiments are expensive or otherwise prohibitive, where experimental effects are difficult to measure, and where models are thought to have unaccounted systematic error. This paper describes an approach to integrate simulation experiments with empirical data that has been applied successfully in a number of domains. This methodology generates coherent estimates of confidence in model predictions, model parameters, and estimates, i.e. calibrations, for unobserved variables. Extensions are described to integrate the results of separate experiments into a single estimate for simulation parameters, which demonstrates a new approach to model-based data fusion.

James R. Gattiker, Earl Lawrence, David Higdon
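
The abstract does not state the model form; a common formulation for this style of simulation calibration (our assumption, in the Kennedy-O'Hagan spirit) decomposes each field observation into simulator output, model discrepancy, and noise:

    % y(x): field observation at inputs x; eta: simulator output;
    % theta: calibration parameters; delta: systematic model discrepancy.
    y(x) = \eta(x, \theta) + \delta(x) + \epsilon, \qquad \epsilon \sim N(0, \sigma^2)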

A Fast Method for Detecting Moving Vehicles Using Plane Constraint of Geometric Invariance

This paper presents a new method of detecting on-road highway vehicles for an active safety vehicle system. We combine a projective invariant technique with motion information to detect overtaking road vehicles. The vehicles are modeled as a set of planes, and the invariant technique extracts each plane using the fact that a geometric invariant value defined by five points on a plane is preserved under a projective transform. Harris corners, as salient image points, provide motion information through normalized cross-correlation centered at these points. A probabilistic criterion that requires no heuristic factor is defined to test the similarity of invariant values between sequential frames. Because the method is very fast, real-time processing is possible for vehicle detection. Experimental results using images of real road scenes are presented.

Dong-Joong Kang, Jong-Eun Ha, Tae-Jung Lho
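
The abstract does not give the invariant's exact form; one standard projective invariant of five coplanar points, a ratio of 3x3 determinants in which the transform's scale factors cancel (the index pattern below is our choice), can be sketched as:

    import numpy as np

    def five_point_invariant(pts):
        """Projective invariant of five coplanar image points.

        pts: 5x2 array. Under a projective transform each homogeneous
        point is scaled individually, but those scales (and det(H))
        cancel in this determinant ratio, so the value is preserved.
        """
        p = np.hstack([np.asarray(pts, float), np.ones((5, 1))])
        d = lambda i, j, k: np.linalg.det(np.stack([p[i], p[j], p[k]]))
        return (d(0, 1, 2) * d(0, 3, 4)) / (d(0, 1, 3) * d(0, 2, 4))

Comparing this value between frames for candidate corner quintuples is one way to test the "same plane" hypothesis the abstract describes.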

Robust Fault Matched Optical Flow Detection Using 2D Histogram

This paper proposes an algorithm that achieves robust outlier detection without fitting camera models, and it remains applicable even when the outlier rate is over 85%. If the outlier rate of optical flows is over 45%, discarding outliers with conventional algorithms in real-time applications is very difficult. The proposed algorithm overcomes this difficulty with a three-step procedure: 1) construct a two-dimensional histogram whose two axes are the lengths and directions of the optical flows; 2) sort the bins of the histogram by their optical-flow counts in descending order, and remove bins with fewer optical flows than a given threshold; 3) increase the resolution of the histogram if the number of optical flows grouped in a specific bin is over 20%, and decrease it if the number is less than 10%. This process is repeated until the number of optical flows falls into the 10%-20% range. The proposed algorithm works well on different kinds of images with many outliers. Experimental results are reported.

Jaechoon Chon, Hyongsuk Kim
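
A sketch of steps 1) and 2) as we read them (Python; the bin counts, threshold value, and the resolution-adaptation loop of step 3 are simplified assumptions):

    import numpy as np

    def dominant_flow_mask(flows, n_len_bins=8, n_dir_bins=16, min_count=10):
        """flows: Nx2 array of (dx, dy) optical-flow vectors.
        Returns a boolean mask of flows in well-populated bins."""
        length = np.hypot(flows[:, 0], flows[:, 1])
        direction = np.arctan2(flows[:, 1], flows[:, 0])   # in [-pi, pi]
        len_edges = np.linspace(0.0, length.max() + 1e-9, n_len_bins + 1)
        dir_edges = np.linspace(-np.pi, np.pi, n_dir_bins + 1)
        li = np.clip(np.digitize(length, len_edges) - 1, 0, n_len_bins - 1)
        di = np.clip(np.digitize(direction, dir_edges) - 1, 0, n_dir_bins - 1)
        counts = np.zeros((n_len_bins, n_dir_bins), dtype=int)
        np.add.at(counts, (li, di), 1)                     # 2-D histogram
        # inlier flows share a consistent length/direction and pile up in
        # a few bins; scattered outliers fall below the count threshold
        return counts[li, di] >= min_count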

Iris Recognition: Localization, Segmentation and Feature Extraction Based on Gabor Transform

Iris recognition is one of the best methods in the biometric field. It includes two main processes: “iris localization and segmentation” and “feature extraction and coding”. We introduce a new method based on the Gabor transform for localization and segmentation of the iris in an eye image and use it to implement an iris recognition system. By applying the Gabor transform to an eye image, constant templates are extracted corresponding to the borders of the pupil and iris. These features are robust and easy to use; the algorithm has no restrictions and no tuning parameters, and it is extremely robust to eyelid and eyelash occlusions. To evaluate the segmentation method, we also developed a gradient-based method; experimental results show that the proposed algorithm works better than the gradient-based algorithm. The results of our recognition system are also noticeable: the low FRR and FAR values corroborate the segmentation results. We also applied different Gabor wavelet filters for feature extraction, and the observations show that the threshold used to discriminate feature vectors depends strongly on the orientation, scale, and parameters of the corresponding Gabor wavelet transform.

Mohammadreza Noruzi, Mansour Vafadoost, M. Shahram Moin
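
For readers unfamiliar with the transform, a minimal Gabor filter-bank sketch in Python/OpenCV (our illustration of the general technique; the file name and parameter values are placeholders, and the paper's template-extraction details are not reproduced here):

    import cv2
    import numpy as np

    eye = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
    responses = []
    for theta in np.arange(0, np.pi, np.pi / 4):        # four orientations
        kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        responses.append(cv2.filter2D(eye, cv2.CV_32F, kernel))
    # Strong, circularly consistent responses mark the pupil/iris borders
    # that a boundary template can then search for.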

Optimal Edge Detection Using Perfect Sharpening of Ramp Edges

The strictly monotonic intensity profiles of ramp edges are mapped to a simple step function of intensity. Such perfect sharpening of ramp edges can be used to define a differential operator nonlocally, from which we construct an edge detection algorithm that determines the edges of images effectively, adapting to varying ramp widths.

Eun Mi Kim, Cherl Soo Pahk, Jong Gu Lee
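
A 1-D illustration of the sharpening idea (our sketch, not the authors' operator): each maximal strictly monotonic run is replaced by a step at its midpoint, so the derivative there becomes a single spike regardless of ramp width:

    import numpy as np

    def sharpen_ramps(profile):
        """Map each maximal strictly monotonic run to a step at its midpoint."""
        p = np.asarray(profile, dtype=float)
        out = p.copy()
        s = np.sign(np.diff(p))
        i = 0
        while i < len(s):
            if s[i] == 0:
                i += 1
                continue
            j = i
            while j + 1 < len(s) and s[j + 1] == s[i]:
                j += 1
            mid = (i + j + 2) // 2          # ramp spans samples i .. j+1
            out[i:mid] = p[i]
            out[mid:j + 2] = p[j + 1]
            i = j + 1
        return out

    print(sharpen_ramps([0, 0, 1, 2, 3, 4, 4]))  # -> [0. 0. 0. 4. 4. 4. 4.]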

Eye Tracking Using Neural Network and Mean-Shift

In this paper, an eye tracking method is presented that uses a neural network (NN) and the mean-shift algorithm to accurately detect and track a user’s eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. Thereafter, the eye regions are localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables our method to accurately detect users’ eyes even if they wear glasses. Once the eye regions are localized, they are continuously and correctly tracked by the mean-shift algorithm. To assess the validity of the proposed method, it is applied to an interface system driven by eye movement and tested with a group of 25 users playing an ‘aligns’ game. The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and provides user-friendly, convenient access to a computer in real time.

Eun Yi Kim, Sin Kuk Kang
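
A minimal mean-shift tracking loop in Python/OpenCV (our sketch; the NN localization step is replaced by an assumed initial eye window, and the hue-histogram target model is our choice, not necessarily the paper's):

    import cv2

    cap = cv2.VideoCapture(0)                     # hypothetical camera source
    ok, frame = cap.read()
    x, y, w, h = 200, 150, 40, 24                 # assumed initial eye window
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window = (x, y, w, h)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        back = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(back, window, criteria)  # climb to density peak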

The Optimal Feature Extraction Procedure for Statistical Pattern Recognition

The paper deals with the extraction of features for object recognition, adopting the Bayes probability of correct classification as the extraction criterion. The problem with full probabilistic information is discussed in detail, and a simple calculation example is given and solved. One chapter of the paper is devoted to the case in which the available information is contained in a so-called learning sequence (recognition with learning).

Marek Kurzynski, Edward Puchala
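
For context, the criterion named above has the standard form below (this general expression is our addition, not quoted from the paper):

    % K classes with priors p_j and class-conditional densities f_j(x);
    % the Bayes rule picks the a posteriori most probable class, so the
    % probability of correct classification is
    P_c = \int \max_{1 \le j \le K} p_j \, f_j(x) \, dx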

A New Approach for Human Identification Using Gait Recognition

Recognition of a person from gait is a biometric of increasing interest. This paper presents a new silhouette-representation approach to extracting gait patterns for human recognition. The silhouette shape of a moving object is first represented by four 1-D signals, basic image features called distance vectors, which measure the differences between the bounding box and the silhouette. Second, an eigenspace transformation based on Principal Component Analysis is applied to the time-varying distance vectors, and statistical-distance-based supervised pattern classification is then performed in the lower-dimensional eigenspace for recognition. A fusion step finally produces the decision. Experimental results on three databases show that the proposed method is an effective and efficient gait representation for human identification, and that the approach achieves highly competitive performance with respect to published gait recognition approaches.

Murat Ekinci
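
A sketch of the distance-vector idea as we read it (Python; the per-side gap definition and the fixed-length resampling are our assumptions):

    import numpy as np
    from sklearn.decomposition import PCA

    def distance_vectors(mask):
        """mask: 2-D boolean silhouette. Returns the four 1-D signals:
        per-row gaps from the left/right box edges and per-column gaps
        from the top/bottom box edges."""
        rows = np.where(mask.any(axis=1))[0]
        cols = np.where(mask.any(axis=0))[0]
        box = mask[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
        left = np.argmax(box, axis=1)          # first silhouette pixel per row
        right = np.argmax(box[:, ::-1], axis=1)
        top = np.argmax(box, axis=0)           # first silhouette pixel per column
        bottom = np.argmax(box[::-1, :], axis=0)
        return left, right, top, bottom

    def gait_features(frames, length=64, n_components=8):
        """Resample one signal per frame to a fixed length, then project
        the time-varying vectors with PCA (the eigenspace transform)."""
        grid = np.linspace(0.0, 1.0, length)
        sig = []
        for f in frames:
            left = distance_vectors(f)[0]
            sig.append(np.interp(grid, np.linspace(0.0, 1.0, len(left)), left))
        return PCA(n_components=n_components).fit_transform(np.array(sig))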

Erratum

A Security Requirement Management Database Based on ISO/IEC 15408

Shoichi Morimoto, Daisuke Horie, Jingde Cheng

Backmatter
