2004 | Book

Network and Parallel Computing

IFIP International Conference, NPC 2004, Wuhan, China, October 18-20, 2004. Proceedings

Editors: Hai Jin, Guang R. Gao, Zhiwei Xu, Hao Chen

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

These proceedings contain the papers presented at the 2004 IFIP International Conference on Network and Parallel Computing (NPC 2004), held in Wuhan, China, from October 18 to 20, 2004. The goal of the conference was to establish an international forum for engineers and scientists to present their ideas and experiences in network and parallel computing. A total of 338 submissions were received in response to the call for papers. These papers were from Australia, Brazil, Canada, China, Finland, France, Germany, Hong Kong, India, Iran, Italy, Japan, Korea, Luxembourg, Malaysia, Norway, Spain, Sweden, Taiwan, UK, and USA. Each submission was sent to at least three reviewers. Each paper was judged according to its originality, innovation, readability, and relevance to the expected audience. Based on the reviews received, a total of 69 papers were accepted for inclusion in the proceedings. Among the 69 papers, 46 were accepted as full papers and were presented at the conference. We also accepted 23 papers as short papers; each of these papers was given an opportunity to have a brief presentation at the conference, followed by discussions in a poster session. Thus, due to the limited scope and time of the conference and the high number of submissions received, only 20% of the total submissions were included in the final program.

Table of contents

Frontmatter

Secure Grid Computing with Trusted Resources and Internet Datamining

Internet-based Grid computing is emerging as one of the most promising technologies that may change the world. Dr. Hwang and his research team at the University of Southern California (USC) are working on self-defense tools to automatically protect Grid resources from cyber attacks and malicious intrusions. This project builds an automated intrusion response and trust management system to facilitate authentication, authorization, and security binding in using metacomputing Grids or peer-to-peer web services. The trusted GridSec infrastructure supports Internet traffic datamining, encrypted tunneling, optimized resource allocation, network flood control, anomaly detection, etc. The USC team is developing a NetShield library to protect Grid resources. This new security system adjusts itself dynamically with changing threat patterns and network traffic conditions. This project promotes the acceptance of Grid computing through international collaborations with research groups at INRIA, France, the Chinese Academy of Sciences, and Melbourne University. The fortified Grid infrastructure will benefit security-sensitive applications in digital government, electronic commerce, anti-terrorism activities, and cyberspace crime control. The broader impacts of this ITR project are far reaching in an era of growing demand for Internet, Web, and Grid services.

Kai Hwang
Towards Memory Oriented Scalable Computer Architecture and High Efficiency Petaflops Computing

The separation of processor logic and main memory is an artifact of the disparities of the original technologies from which each was fabricated more than fifty years ago, as captured by the "von Neumann architecture". Appropriately, this separation is designated "the von Neumann bottleneck". In recent years, the underlying technology constraint for the isolation of main memory from processing logic has been eliminated with the implementation of semiconductor fabrication foundries that permit the merger of both DRAM bit cells and CMOS logic on the same silicon dies. New classes of computer architecture are enabled by this opportunity, including: 1) system on a chip, where a conventional processor core with its layers of cache is connected to a block of DRAM on the same chip; 2) SMP on a chip, where multiple conventional processor cores are combined on the same chip through a coherent cache structure, usually sharing the L3 cache implemented in DRAM; and 3) processor in memory, where custom processing logic is positioned directly at the memory row buffer in a tightly integrated structure to exploit the short access latency and wide row of bits (typically 2K) for high memory bandwidth. This last, PIM, can take on remarkable physical structures and logical constructs and is the focus of the NASA Gilgamesh project to define and prototype a new class of PIM-based computer architecture that will enable a new scalable model of execution. The MIND processor architecture is the core of the Gilgamesh system; it incorporates a distributed shared memory management scheme including in-memory virtual-to-physical address translation, a lightweight parcels message-driven mechanism for invoking remote transaction processing, multithreaded single-cycle instruction issue for local resource management, graceful degradation for fault tolerance, and pinned threads for real-time response.
The MIND architecture for Gilgamesh is being developed in support of "sea of PIMs" systems for both ground-based Petaflops-scale computers and scalable space-borne computing for long-term autonomous missions. One of its specific applications is in the domain of symbolic computing for knowledge management, learning, reasoning, and planning in a goal-directed programming environment. This presentation will describe the MIND architecture being developed through the Gilgamesh project and its relation to the Cray Cascade Petaflops computer being developed for 2010 deployment under DARPA sponsorship.

Thomas Sterling
In-VIGO: Making the Grid Virtually Yours

José Fortes
Productivity in HPC Clusters

This presentation discusses HPC productivity in terms of: (1) effective architectures, (2) parallel programming models, and (3) applications development tools. The demands placed on HPC by owners and users of systems ranging from public research laboratories to private scientific and engineering companies enrich the topic with many competing technologies and approaches. Rather than expecting to eliminate each other in the short run, these HPC competitors should be learning from one another in order to stay in the race. Here we examine how these competing forces form the engine of improvement for overall HPC cost-effectiveness. First, what will the effective architectures be? Moore's law is likely to still hold at the processor level over the next few years. Those words are, of course, typical from a semiconductor manufacturer. More important for this conference, our roadmap projects that it will accelerate over the next couple of years due to Chip Multi Processors (CMPs). It has also been observed that cluster size has been growing at the same rate. Few people really know how successful the Grid and Utility Computing will be, but virtual organizations may add another level of parallelism to the problem-solving process. Second, on parallel programming models, hybrid parallelism, i.e. parallelism at multiple levels with multiple programming models, will be used in many applications. Hybrid parallelism may emerge because application speedup at each level can be multiplied by future architectures. But these applications can also adapt best to the wide variety of data and problems. Robustness of this type is needed to avoid the high software costs of converting or incrementally tuning existing programs. This leads to OpenMP, MPI, and Grid programming model investments. Third, application tools are needed for programmer productivity. Frankly, integrated programming environments have not made much headway in HPC.
Tools for debugging and performance analysis still define the basic needs. The term debugging is used advisedly, because there are limits to the scalability of debuggers in the amount of code and number of processors even today. How can we break through? Maybe through automated tools for finding bugs at the threading and process level? Performance analysis capability will similarly be exceeded by the growth of hardware parallelism, unless progress is made.

Bob Kuhn
Whole-Stack Analysis and Optimization of Commercial Workloads on Server Systems

The evolution of the Web as an enabling tool for e-business introduces a challenge to understanding the execution behavior of large-scale middleware systems, such as J2EE [2], and their commercial workloads. This paper presents a brief description of the whole-stack analysis and optimization system – being developed at IBM Research – for commercial workloads on Websphere Application Server (WAS) [5] – IBM’s implementation of J2EE – running on IBM’s pSeries [4] and zSeries [3] server systems.

C. R. Attanasio, Jong-Deok Choi, Niteesh Dubey, K. Ekanadham, Manish Gupta, Tatsushi Inagaki, Kazuaki Ishizaki, Joefon Jann, Robert D. Johnson, Toshio Nakatani, Il Park, Pratap Pattnaik, Mauricio Serrano, Stephen E. Smith, Ian Steiner, Yefim Shuf

Session 1: Grid Computing

Fuzzy Trust Integration for Security Enforcement in Grid Computing

How to build mutual trust among Grid resource sites is crucial to securing distributed Grid applications. We suggest enhancing the trust index of resource sites by upgrading their intrusion defense capabilities and checking the success rate of jobs running on the platforms. We propose a new fuzzy-logic trust model for securing Grid resources. Grid security is enforced through trust update, propagation, and integration across sites. Fuzzy trust integration reduces platform vulnerability and guides the defense deployment across Grid sites. We developed a SeGO scheduler for trusted Grid resource allocation. The SeGO scheduler optimizes the aggregate computing power with security assurance under fixed budget constraints. The effectiveness of the scheme was verified by simulation experiments. Our results show up to 90% enhancement in site security. Compared with no trust integration, our scheme leads to a 114% improvement in the Grid performance/cost ratio, and the job drop rate is reduced by 75%. The utilization of Grid resources increases to 92.6% as more jobs are submitted. These results demonstrate significant performance gains through optimized resource allocation and aggressive security reinforcement.

Shanshan Song, Kai Hwang, Mikin Macwan
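The abstract above does not reproduce the model's equations, but the trust update and weighted integration it describes can be sketched roughly as follows (the smoothing and weighting constants here are placeholders, not the paper's values):

```python
def update_trust(old_trust, observed, alpha=0.7):
    """Smoothly update a site's trust index toward a new observation.
    alpha is an illustrative inertia weight, not taken from the paper."""
    return alpha * old_trust + (1 - alpha) * observed

def integrate_trust(defense_capability, job_success_rate, w_defense=0.6):
    """Aggregate the two site attributes the abstract names (intrusion
    defense capability and job success rate, both scaled to [0, 1]) into
    a single trust index; the weight is hypothetical."""
    return w_defense * defense_capability + (1 - w_defense) * job_success_rate
```

A scheduler like SeGO could then prefer sites whose integrated trust index clears a job's security demand.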
Atomic Commitment in Grid Database Systems

The Atomic Commitment Protocol (ACP) is an important part of any distributed transaction. ACPs have been proposed for homogeneous and heterogeneous distributed database management systems (DBMSs). ACPs designed for these DBMSs do not meet the requirements of Grid databases. Homogeneous DBMSs are synchronous and tightly coupled, while heterogeneous DBMSs, like multidatabase systems, require a top layer of multidatabase management system to manage distributed transactions. These ACPs either become too restrictive or need changes in the participating DBMSs, which may not be acceptable in a Grid environment. In this paper we identify the requirements for Grid database systems and then propose an ACP for Grid databases, the Grid-Atomic Commitment Protocol (Grid-ACP).

Sushant Goel, Hema Sharda, David Taniar
A Data Grid Security System Based on Shared Context

A data grid system supports uniform and secure access to heterogeneous distributed data resources across a range of administrative domains, each with its own local security policy. The security challenge has been a focus in data grid environments. This paper mainly presents GridDaEn's security mechanisms. In addition to basic authentication and authorization functionality, it provides an integrated security strategy featuring shared-context-based secure channel building to improve security processing efficiency and thus the performance of interactions among the multiple domains in GridDaEn. Meanwhile, by means of proxy credentials, single sign-on across multiple domains can be achieved. Experiments show that this approach can guarantee system security and reliability with a considerable performance enhancement.

Nong Xiao, Xiaonian Wu, Wei Fu, Xiangli Qu
A Study on Performance Monitoring Techniques for Traditional Parallel Systems and Grid

During the last decade, much research has been done to evolve cost-effective techniques for performance monitoring and analysis of applications running on conventional parallel systems. Toward the end of that decade, new Grid-based systems emerged and received attention as useful environments for high-performance computing, bringing with them the need for new techniques for monitoring the Grid. The objective of this paper is to study the techniques used by tools targeting traditional parallel systems and to highlight the requirements and techniques used for Grid monitoring. We present case studies of some representative tools to convey an understanding of both paradigms.

Sarbani Roy, Nandini Mukherjee
A Workflow-Based Grid Portal for Problem Solving Environment

In this paper, we present a Workflow-based grId portal for problem Solving Environment (WISE), developed by integrating workflow, Grid, and web technology to provide an enhanced, powerful approach to problem solving environments. Workflow technology supports the coordinated execution of multiple application tasks on Grid resources by enabling users to describe a workflow composed of many existing applications and new functions, and provides an easy, powerful tool for creating new Grid applications. We propose a new Grid portal that allows Grid resources to be used with improved workflow patterns representing the various parallelisms inherent in parallel and distributed Grid applications, and present the Grid Workflow Description Language (GWDL) for specifying our new workflow patterns. We also show that the MVC (Model View Control) design pattern and a multi-layer architecture provide modularity and extensibility to WISE by separating the application engine control and presentation from the application logic for Grid services, and the Grid portal service from the Grid service interface.

Yong-Won Kwon, So-Hyun Ryu, Jin-Sung Park, Chang-Sung Jeong
A Real-Time Transaction Approach for Grid Services: A Model and Algorithms

Because transactions in Grid applications often have deadlines, effectively processing real-time transactions in Grid services presents a challenging task. Although real-time transaction techniques have been well studied in databases, they cannot be directly applied to Grid applications due to the characteristics of Grid services. In this paper, we propose an effective model and corresponding coordination algorithms to handle real-time transactions for Grid services. The model can intelligently discover the Grid services required to process specified sub-transactions at runtime, and invoke the algorithms to coordinate these services to satisfy the transactional and real-time requirements, without involving users in the complex process. We use a Petri net to validate the model and algorithms.

Feilong Tang, Minglu Li, Joshua Zhexue Huang, Lei Cao, Yi Wang
QoS Quorum-Constrained Resource Management in Wireless Grid

In a wireless Grid computing environment, end-to-end Quality of Service (QoS) can be very complex, and this highlights the increasing requirement for the management of QoS itself. Policy quorum-based management offers a more flexible, customizable management solution that allows controlled elements to be configured or scheduled on the fly for a specific requirement tailored to a customer. This paper presents a QoS-guaranteed management scheme, Policy Quorum Resource Management (PQRM), which facilitates reliable scheduling of the wireless Grid system with a dynamic resource management architecture aimed at fulfilling the QoS requirements. Experimental results show that the proposed PQRM with a resource reconfiguration scheme improves both performance and stability, making it suitable for wireless Grid services.

Chan-Hyun Youn, Byungsang Kim, Dong Su Nam, Eung-Suk An, Bong-Hwan Lee, Eun Bo Shim, Gari Clifford
A Data-Aware Resource Broker for Data Grids

The success of grid computing depends on the existence of grid middleware that provides core services such as security, data management, resource information, and resource brokering and scheduling. Current general-purpose grid resource brokers deal only with computation requirements of applications, which is a limitation for data grids that enable processing of large scientific data sets. In this paper, a new data-aware resource brokering scheme, which factors both computational and data transfer requirements into its cost models, has been implemented and tested. The experiments reported in this paper clearly demonstrate that both factors should be considered in order to efficiently schedule data intensive tasks.

Huy Le, Paul Coddington, Andrew L. Wendelborn
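The broker's central idea, factoring both computation and data transfer into the cost model, can be sketched as follows (field names and units are invented for illustration; the paper's actual cost model is richer):

```python
def estimated_cost(task, host):
    """Completion-time estimate combining the two factors a data-aware
    broker weighs: computation time and input-data transfer time."""
    compute_time = task["flops"] / host["flops_per_sec"]
    transfer_time = task["input_bytes"] / host["bandwidth_bytes_per_sec"]
    return compute_time + transfer_time

def pick_host(task, hosts):
    """Choose the host minimizing the combined cost, rather than
    computation time alone as a compute-only broker would."""
    return min(hosts, key=lambda h: estimated_cost(task, h))
```

For a data-intensive task, a host with a modest CPU but a fast link to the data can beat a faster CPU behind a slow link, which is exactly the case a compute-only broker mis-schedules.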
Managing Service-Oriented Grids: Experiences from VEGA System Software

Recent work towards a standards-based, service-oriented Grid represents the convergence of distributed computing technologies from the e-Science and e-Commerce communities. A global service management system in such a context is required to achieve efficient resource sharing and collaborative service provisioning. This paper presents VEGA system software, a novel grid platform that combines experiences from operating system design and current IP networks. VEGA is distinguished by its two design principles: a) ease of use. Through service virtualization, VEGA provides users with a location-independent way to access services, so that grid applications can transparently benefit from systematic load balancing and error recovery. b) ease of deployment. The architecture of VEGA is fully decentralized without sacrificing efficiency, thanks to its mechanism of resource routing. The concept of grid adapters is developed to make joining and accessing the Grid a 'plug-and-play' process. VEGA has been implemented on Web Service platforms and all of its components are OGSI-compliant services. To evaluate the overhead and performance of VEGA, we conducted experiments with two categories of benchmarks in a real-size Grid environment. The results show that VEGA provides an efficient service discovery and selection framework with reasonable overhead.

YuZhong Sun, Haiyan Yu, JiPing Cai, Li Zha, Yili Gong
An Efficient Parallel Loop Self-scheduling on Grid Environments

Approaches to scheduling and load balancing on PC-based cluster systems are well known. Self-scheduling schemes suitable for parallel loops with independent iterations on cluster computer systems have been designed in the past. In this paper, we propose a new scheme that adjusts the scheduling parameter dynamically on extremely heterogeneous PC-based cluster and grid computing environments in order to improve system performance. A grid computing environment consisting of multiple PC-based clusters is constructed using the Globus Toolkit and SUN Grid Engine middleware. The experimental results show that our scheduling can achieve higher performance than other similar schemes.

Chao-Tung Yang, Kuan-Wei Cheng, Kuan-Ching Li
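A common shape for such schemes is a two-phase split: a fraction of the loop is dealt out statically in proportion to node speeds, and the remainder is served in shrinking self-scheduled chunks. The sketch below illustrates that shape only; the paper's actual parameter-adjustment rule is not reproduced, and `alpha` stands in for the dynamically tuned parameter:

```python
def self_schedule(total_iters, speeds, alpha=0.75):
    """Two-phase loop partitioning for a heterogeneous cluster.
    speeds: relative node speeds. A fraction alpha of the iterations is
    assigned statically in proportion to speed; the rest is handed out
    in shrinking guided-self-scheduling chunks (details illustrative)."""
    static_part = int(total_iters * alpha)
    shares = [static_part * s // sum(speeds) for s in speeds]
    remaining = total_iters - sum(shares)
    chunks = []
    while remaining > 0:
        # Guided rule: each chunk is half a node's fair share of what's left.
        chunk = max(1, remaining // (2 * len(speeds)))
        chunks.append(chunk)
        remaining -= chunk
    return shares, chunks
```

The static phase keeps fast nodes busy on heterogeneous hardware, while the shrinking dynamic chunks absorb load imbalance near the end of the loop.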
ALiCE: A Scalable Runtime Infrastructure for High Performance Grid Computing

This paper discusses ALiCE, a Java-based grid computing middleware that facilitates the development and deployment of generic grid applications on heterogeneous shared computing resources. The ALiCE layered grid architecture comprises a core layer that provides the basic services for control and communication within a grid. A programming template in the extensions layer provides a distributed shared-memory programming abstraction that frees the grid application developer from the intricacies of the core layer and the underlying grid system. The performance of a distributed Data Encryption Standard (DES) key search problem on two grid configurations is discussed.

Yong-Meng Teo, Xianbing Wang
Mapping Publishing and Mapping Adaptation in the Middleware of Railway Information Grid System

When adopting the mediator architecture to integrate distributed, autonomous, relational-model-based database sources, mappings from the source schema to the global schema may become inconsistent when the relational source schema or the global schema evolves. Without mapping adaptation, users may access no data or wrong data. In this paper, we propose a novel approach, global attribute as view with constraints (GAAVC), for publishing mappings that is adaptive to schema evolution. The published mappings satisfy both source schema constraints and global schema constraints, which enables users to get valid data. We also put forward a GAAVC-based mapping publishing algorithm and mapping adaptation algorithms. In a functional comparison with other approaches, ours outperforms them. Finally, the mapping adaptation tool GMPMA is introduced, which has been implemented in the middleware of a railway information grid system.

Ganmei You, Huaming Liao, Yuzhong Sun
A Heuristic Algorithm for the Job Shop Scheduling Problem

The job shop scheduling problem concerned with minimizing makespan is discussed. A new heuristic algorithm that embeds an improved shifting bottleneck procedure into the Tabu Search (TS) technique is presented. This algorithm differs from previous procedures in that the improved shifting bottleneck procedure is new for this problem, and the two remarkable TS strategies of intensification and diversification are modified. In addition, a new kind of neighborhood structure is defined, and the method for local search differs from previous ones. The algorithm has been tested on many common problem benchmarks of various sizes and levels of hardness and compared with several other algorithms. Computational experiments show that it is one of the most effective and efficient algorithms for the problem. In particular, it obtains a lower upper bound for an instance with 50 jobs and 20 machines within a short period.

Ai-Hua Yin
Coordinating Distributed Resources for Complex Scientific Computation

A large number of computational resources exist in the domain of scientific and engineering computation. They are distributed, heterogeneous, and often individually too limited in computing capability to satisfy the requirements of modern scientific problems. To address this challenge, this paper proposes a component-based architecture for managing and accessing legacy applications on the computational grid. It automatically schedules legacy applications with domain expertise, and coordinates them to serve large-scale scientific computation. A prototype has been implemented to evaluate the architecture.

Huashan Yu, Zhuoqun Xu, Wenkui Ding
Efficiently Rationing Resources for Grid and P2P Computing

As Grid and P2P computing become more and more popular, many scheduling algorithms based on economics rather than traditional pure computing theory have been proposed. Such algorithms mainly concern balancing resource supply and demand via economic measures. Fairness and efficiency, however, are two conflicting goals. In this paper, we argue that overbooking resources can greatly improve resource usage rates and simultaneously reduce the response time of tasks by shortening scheduling time, especially under extreme overload, while maintaining scheduling principles. This is accomplished by scheduling more eligible tasks than the resource capacity allows in advance, thereby overbooking the resource. We verify our claim on Grid Market [1].

Ming Chen, Yongwei Wu, Guangwen Yang, Xuezheng Liu
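The core admission idea, accepting tasks beyond nominal capacity on the bet that not all of them peak at once, can be sketched minimally as follows (the greedy order and the factor 1.2 are illustrative choices, not the paper's policy):

```python
def admit_with_overbooking(tasks, capacity, overbook_factor=1.2):
    """Greedily admit tasks (smallest demand first) up to an overbooked
    budget of capacity * overbook_factor. A factor of 1.0 means no
    overbooking; values above 1.0 trade admission-time safety for
    higher resource utilization."""
    budget = capacity * overbook_factor
    admitted, used = [], 0.0
    for task in sorted(tasks, key=lambda t: t["demand"]):
        if used + task["demand"] <= budget:
            admitted.append(task)
            used += task["demand"]
    return admitted
```

Under overload, the inflated budget admits tasks that a strict scheduler would queue, which is where the claimed utilization and response-time gains come from.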
Grid Resource Discovery Model Based on the Hierarchical Architecture and P2P Overlay Network

Grid technology enables large-scale sharing and coordinated use of networked resources. The kernel of a computational Grid is wide-area resource sharing and cooperation. To obtain better resource sharing and cooperation, resource discovery must be efficient. In this paper, we propose a Grid resource discovery model that utilizes flat, fully decentralized P2P overlay networks and a hierarchical architecture to yield good scalability and routing performance. Our model adapts efficiently when individual nodes join, leave, or fail. Both theoretical analysis and experimental results show that our model is efficient, robust, and easy to implement.

Fei Liu, Fanyuan Ma, Shui Yu, Minglu Li
Collaborative Process Execution for Service Composition with StarWebService

This paper gives a brief introduction to a collaborative process execution mechanism for service composition, implemented in the StarWebService system. The main idea is to partition the global process model of a composite service into local process models for each participant, so as to enable distributed control and direct data exchange between participants. This is a novel solution to the scalability and autonomy issues of centralized execution.

Bixin Liu, YuFeng Wang, Bin Zhou, Yan Jia

Session 2: Peer-to-Peer Computing

Efficient Gnutella-like P2P Overlay Construction

Without assuming any knowledge of the underlying physical topology, conventional P2P mechanisms are designed to choose logical neighbors randomly, causing a serious topology mismatch between the P2P overlay network and the underlying physical network. This mismatch problem places great stress on the Internet infrastructure and adversely restrains the performance gains from the various search and routing techniques. In order to alleviate the mismatch problem and reduce unnecessary traffic and response time, we propose two schemes, namely location-aware topology matching (LTM) and scalable bipartite overlay (SBO). Both LTM and SBO achieve these goals without introducing any noticeable extra overhead. Moreover, both techniques are scalable because the P2P overlay networks are constructed in a fully distributed manner in which global knowledge of the network is not necessary. This paper demonstrates the effectiveness of LTM and SBO, and compares the performance of the two approaches through simulation studies.

Yunhao Liu, Li Xiao, Lionel M. Ni, Baijian Yang
Lookup-Ring: Building Efficient Lookups for High Dynamic Peer-to-Peer Overlays

This paper is motivated by the problem of poor search efficiency in decentralized peer-to-peer file-sharing systems. We address the search problem by modeling the basic trade-off between forwarding queries among peers and maintaining lookup tables in peers, so that we can use an optimized lookup table scale to minimize bandwidth consumption and greatly improve search performance under arbitrary system parameters and resource constraints (mainly the available bandwidth). Based on the model, we design a decentralized peer-to-peer search strategy, the Lookup-ring, which provides very efficient keyword search in highly dynamic peer-to-peer environments. The simulation results show that Lookup-ring can easily support a large-scale system with more than 10^6 participating peers at a very small cost per peer.

Xuezheng Liu, Guangwen Yang, Jinfeng Hu, Ming Chen, Yongwei Wu
Layer Allocation Algorithms in Layered Peer-to-Peer Streaming

In layered peer-to-peer streaming, since the bandwidth and data availability (number of layers received) of each peer node are constrained and heterogeneous, the media server can still become overloaded, and the streaming quality of the receiving nodes may also be constrained by the limited number of supplying peers. This paper identifies these issues and presents two layer allocation algorithms, targeting two scenarios of available bandwidth between the media server and the receiving node, to achieve one or both of the following goals: 1) minimize the resource consumption of the media server, and 2) maximize the streaming quality of the receiving nodes. Simulation results demonstrate the efficiency of the proposed algorithms.

Yajie Liu, Wenhua Dou, Zhifeng Liu
Build a Distributed Repository for Web Service Discovery Based on Peer-to-Peer Network

While Web Services already provide distributed operation execution, registration and discovery with UDDI are still based on a centralized repository. In this paper we propose a distributed XML repository based on a Peer-to-Peer infrastructure, called pXRepository, for Web Service discovery. In pXRepository, service descriptions are managed in a completely decentralized way. Moreover, since the basic Peer-to-Peer routing algorithm cannot be applied directly to the service discovery process, we extend it with XML support, which enables pXRepository to support XPath-based composite queries. Experimental results show that pXRepository has good robustness and scalability.

Yin Li, Futai Zou, Fanyuan Ma, Minglu Li
Performance Improvement of Hint-Based Locating & Routing Mechanism in P2P File-Sharing Systems

The Hint-based Locating & Routing Mechanism (HBLR) derives from the locating and routing mechanism in Freenet. HBLR uses file location hints to enhance the performance of file searching and downloading. In comparison with its ancestor, HBLR saves storage space and reduces file request latency. However, because of the inherent fallibility of hints, employing location hints naively for file locating in a P2P file-sharing system leads to lower-than-expected performance. In this paper, the uncertainty of hints and its adverse effects are analyzed. Based on this analysis, pertinent countermeasures including credible hints, master-copy transfer, and file existence pre-detection are proposed. Simulation shows that the performance of HBLR is improved by adopting the proposed policies.

Hairong Jin, Shanping Li, Gang Peng, Tianchi Ma

Session 3: Web Techniques

Cache Design for Transcoding Proxy Caching

As audio and video applications have proliferated on the Internet, transcoding proxy caching is attracting an increasing amount of attention, especially in the environment of mobile appliances. Since cache replacement and consistency algorithms are two factors that play a central role in the functionality of transcoding proxy caching, it is of practical necessity to incorporate them into transcoding cache design. In this paper, we propose an original cache maintenance algorithm that integrates both cache replacement and consistency algorithms. Our algorithm also explicitly accounts for the new factors emerging in the transcoding proxy. Specifically, we formulate a generalized cost-saving function to evaluate the profit of caching a multimedia object. Our algorithm evicts objects based on the generalized cost saving of keeping each object in the cache; consequently, the objects with the least generalized cost saving are removed from the cache. In addition, our algorithm considers the validation and write rates of the objects, which is of considerable importance for a cache maintenance algorithm. Finally, we evaluate our algorithm on different performance metrics through extensive simulation experiments. The results show that our algorithm outperforms the comparison algorithms on the performance metrics considered.

Keqiu Li, Hong Shen, Keishi Tajima
Content Aggregation Middleware (CAM) for Fast Development of High Performance Web Syndication Services

The rapid expansion of the Internet is accompanied by a serious side effect: with so many information providers, it is very difficult to obtain the content that best fits customers' needs. Web Syndication Services (WSS) are emerging as solutions to this information flooding problem. However, despite its practical importance, WSS has not been much studied yet. In this paper, we propose the Content Aggregation Middleware (CAM). It provides a WSS with a content gathering substratum effective in gathering and processing data from many different source sites. Using the CAM, a WSS provider can build a new service without getting involved in the details of complicated content aggregation procedures, and thus concentrate on developing the service logic. We describe the design, implementation, and performance of the CAM.

Su Myeon Kim, Jinwon Lee, SungJae Jo, Junehwa Song
Domain-Based Proxy for Efficient Location Tracking of Mobile Agents

The provision of location tracking for mobile agents is designed to deliver a message to a moving object in a network. Most tracking methods exploit relay stations that hold location information to forward messages to a target mobile agent. In this paper, we propose an efficient location tracking method for mobile agents using the domain-based proxy as a relay station. The proxy in each domain is dynamically determined when a mobile agent enters a new domain. The proposed method exploits the domain-based moving patterns of mobile agents and minimizes registration and message transfer costs in mobile agent systems.

Sanghoon Song, Taekyoung Kwon

Session 4: Cluster Computing

Inhambu: Data Mining Using Idle Cycles in Clusters of PCs

In this paper we present and evaluate Inhambu, a distributed object-oriented system that relies on dynamic monitoring to collect information about the availability of computational resources, providing the necessary support for the execution of data mining applications on clusters of PCs and workstations. We also describe a modified implementation of the data mining tool Weka, which executes the cross validation procedure in parallel with the support of Inhambu. We present preliminary tests, showing that performance gains can be obtained for computationally expensive data mining algorithms, even when running with small datasets.

Hermes Senger, Eduardo R. Hruschka, Fabrício A. B. Silva, Liria M. Sato, Calebe P. Bianchini, Marcelo D. Esperidião
Request Distribution for Fairness with a New Load-Update Mechanism in Web Server Cluster

The complexity of services and applications provided by Web sites is ever increasing as traditional Web publishing sites are integrated with new paradigms such as e-commerce. The execution load of a dynamic Web page is difficult to estimate even with information from the application layer. In this paper, the execution latency at the Web server is exploited in order to balance the loads caused by dynamic Web pages. Using only the information available to a Layer-4 Web switch, such as IP address and port number, the proposed algorithm balances the loads with a new load-update mechanism. The mechanism uses the report packets more efficiently at the same communication cost. Moreover, the proposed algorithm considers fairness to Web clients, so the Web clients experience a higher quality of service.

MinHwan Ok, Myong-soon Park
Profile Oriented User Distributions in Enterprise Systems with Clustering

As enterprises world-wide race to embrace real-time management to improve productivity, customer service, and flexibility, many resources have been invested in enterprise systems (ESs). All modern ESs adopt an n-tier client-server architecture that includes several application servers to host users and applications. As in any other multi-server environment, load distribution, and user distribution in particular, becomes a critical issue in tuning system performance. Although the n-tier architecture may involve web servers, no literature on Distributed Web Server Architectures has considered the effects of distributing users, instead of individual requests, to servers. The algorithm proposed in this paper returns specific suggestions, including explicit user distributions, the number of servers needed, and the similarity of user requests in each server. The paper also discusses how to apply the knowledge of past patterns to allocate new users, who have no request patterns, in a hybrid dispatching program.

Ping-Yu Hsu, Ping-Ho Ting
CBBS: A Content-Based Bandwidth Smoothing Scheme for Clustered Video Servers

Due to their inherent bandwidth burstiness, variable-bit-rate (VBR) encoded videos are very difficult to transmit effectively over clustered video servers while achieving high server bandwidth utilization and efficiency and guaranteeing QoS. Previous bandwidth smoothing schemes tend to cause long initial delays, data loss, and playback jitter. In this paper, we propose a content-based bandwidth smoothing scheme, called CBBS, which first splits video objects into many small segments based on the complexity of the picture content in different visual scenes, so that each segment contains exactly one visual scene, and then allocates a constant bit rate to transfer each segment. Performance evaluation based on real-life MPEG-4 traces shows that the CBBS scheme can significantly improve server bandwidth utilization and efficiency, while the initial delay and client buffer occupancy are also significantly reduced.

Dafu Deng, Hai Jin, Xiaofei Liao, Hao Chen
SuperNBD: An Efficient Network Storage Software for Cluster

Networked storage has become an increasingly common and essential component of clusters. In this environment, the network storage software through which client nodes directly access remote network-attached storage is an important and critical requisite. Many implementations with this function exist, such as iSCSI. However, they are not tailored for the large-scale cluster environment and cannot fully satisfy its high efficiency and scalability requirements. In this paper, we present a more efficient technology for network storage in clusters and give a detailed evaluation of it through our implementation, SuperNBD. The results indicate that SuperNBD is more efficient, more scalable, and a better fit for the cluster environment.

Rongfeng Tang, Dan Meng, Jin Xiong
I/O Response Time in a Fault-Tolerant Parallel Virtual File System

A fault-tolerant parallel virtual file system is designed and implemented to provide high I/O performance and high reliability. A queuing model is used to analyze in detail the average response time when multiple clients access the system. The results show that I/O response time is a function of several operational parameters. It decreases with increases in the I/O buffer hit rate for read requests, the write buffer size for write requests, and the number of server nodes in the parallel file system, while a higher I/O request arrival rate increases I/O response time.
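
The qualitative trends above can be illustrated with a toy queueing sketch (our own illustration, not the paper's model or parameter names): buffer hits return immediately, and misses are balanced across the server nodes, each modeled as an M/M/1 queue.

```python
# Toy sketch (not the paper's model): average response time when a fraction
# `hit` of requests is served from the buffer instantly and the remainder is
# balanced across `m` server nodes, each an M/M/1 queue with service rate
# `mu` (requests/s). `lam` is the total I/O arrival rate (requests/s).
def avg_response_time(lam, mu, m, hit):
    lam_eff = lam * (1.0 - hit) / m            # miss traffic per server node
    assert lam_eff < mu, "each node's queue must be stable"
    return (1.0 - hit) / (mu - lam_eff)        # only misses incur queueing

```

The trends match the abstract: response time falls as the hit rate or the number of server nodes grows, and rises with the arrival rate.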

Dan Feng, Hong Jiang, Yifeng Zhu
SARCNFS: Self-Adaptive Redundancy Clustered NAS File System

In this paper, we describe the design and implementation of SARCNFS, a file system for network-attached clustered storage systems. SARCNFS stripes data and metadata among multiple NAS nodes, and provides a file redundancy scheme and a synchronization mechanism for distributed RAID5. SARCNFS uses a self-adaptive redundancy scheme for file data accesses, employing RAID5 for large writes and RAID1 for small writes, and dynamically switching between RAID1 and RAID5 to provide the best performance. In addition, SARCNFS proposes a simple distributed locking mechanism that uses RAID5 for full-stripe writes and RAID1 for temporarily storing data from partial-stripe updates. As a result, low response latency, high performance, and strong reliability are achieved.

Cai Bin, Changsheng Xie, Ren Jin, FaLing Yi
Research and Implementation of a Snapshot Facility Suitable for Soft-Failure Recovery

Human error and incorrect software (a.k.a. soft failure) are key impediments to the dependability of Internet services. To address this challenge, storage providers need rapid recovery techniques that retrieve data from a time-based recovery point. Motivated by this, a snapshot facility at the block level, called SnapChain, is introduced. Compared with former implementations, when managing different versions of snapshots, SnapChain minimizes the disk space requirement and the write penalty on the master volume. In this paper, the metadata and the algorithms used in SnapChain are explained.

Yong Feng, Yan-yuan Zhang, Rui-yong Jia

Session 5: Parallel Programming and Environment

GOOMPI: A Generic Object Oriented Message Passing Interface: Design and Implementation

This paper discusses the application of object-oriented and generic programming techniques in high-performance parallel computing. It then presents GOOMPI, a new message-passing interface based on these techniques, describes its design and implementation issues, and shows, through typical examples, its value in designing and implementing parallel algorithms and applications based on the message-passing model. The paper also analyzes the performance of our GOOMPI implementation.

Zhen Yao, Qi-long Zheng, Guo-liang Chen
Simulating Complex Dynamical Systems in a Distributed Programming Environment

This paper describes a rule-based generic programming and simulation paradigm for conventional hard computing as well as soft and innovative computing, e.g., dynamical, genetic, and nature-inspired self-organized criticality and swarm intelligence. The computations are interpreted as the outcome of deterministic, non-deterministic, or stochastic interaction among elements in a multiset object space that includes the environment. These interactions are like chemical reactions, and the evolution of the multiset can mimic the evolution of the complex system. Since the reaction rules are inherently parallel, any number of actions can be performed cooperatively or competitively among the subsets of elements. This paradigm permits carrying out parts or all of the computations independently in a distributed manner on distinct processors and is eminently suitable for cluster and grid computing.

E. V. Krishnamurthy, Vikram Krishnamurthy
Design and Implementation of a Remote Debugger for Concurrent Debugging of Multiple Processes in Embedded Linux Systems

In embedded software development environments, developers can concurrently debug a running process and its child processes only by using multiple gdbs and gdbservers. But this requires additional coding and the messy work of activating an additional gdb and gdbserver for each created process. In this paper, we propose an efficient mechanism for concurrent debugging of multiple remote processes in embedded system environments by using a library wrapping mechanism, without Linux kernel modification. Through an experiment debugging two processes communicating over an unnamed pipe in the target system, we show that our proposed debugging mechanism is easier and more efficient than preexisting mechanisms.

Jung-hee Kim, Hyun-chul Sim, Yong-hyeog Kang, Young Ik Eom

Session 6: Network Architecture

Performance Evaluation of Hypercubes in the Presence of Multiple Time-Scale Correlated Traffic

The efficiency of a large-scale parallel computer is critically dependent on the performance of its interconnection network. Analytical modelling plays an important role towards obtaining a clear understanding of network performance under various design spaces. This paper proposes an analytical performance model for circuit-switched hypercubes in the presence of multiple time-scale correlated traffic which can appear in many parallel computation environments and has strong impact on network performance. The tractability and reasonable accuracy of the analytical model demonstrated by simulation experiments make it a practical and cost-effective evaluation tool to investigate network performance under different system configurations.

Geyong Min, Mohamed Ould-Khaoua
Leader Election in Hyper-Butterfly Graphs

Leader election in a network is one of the most important problems in the area of distributed algorithm design. Consider any network of N nodes; a leader node is defined to be any node of the network unambiguously identified by some characteristic (unique among all other nodes). A leader election process is a uniform algorithm (code) executed at each node of the network; at the end of the execution, exactly one node is elected leader and all other nodes are in the non-leader state [GHS83, LMW86, Tel93, Tel95a, SBTS01]. In this paper, our purpose is to propose an election algorithm for oriented hyper-butterfly networks that uses O(N log N) messages.

Wei Shi, Pradip K. Srimani
A New Approach to Local Route Recovery for Multihop TCP in Ad Hoc Wireless Networks

The TCP (Transmission Control Protocol) is a critical component in an ad hoc wireless network because of its pervasive usage in various important applications. However, TCP’s congestion control mechanism is notoriously ineffective in dealing with time-varying channel errors even for a single wireless link. Indeed, the adverse effects of the inefficient usage of wireless bandwidth due to the large TCP timers are more profound in a multihop communication session. In this paper, we design and evaluate local recovery (LR) approaches for maintaining smooth operations of a multihop TCP session in an ad hoc network. Based on our NS-2 simulation results, we find that using the proposed LR approaches is better than using various well-known ad hoc routing algorithms which construct completely new routes.

Zhi Li, Yu-Kwong Kwok
Graph-Theoretic Analysis of Kautz Topology and DHT Schemes

Many proposed distributed hash table (DHT) schemes for peer-to-peer networks are based on traditional parallel interconnection topologies. In this paper, we show that the Kautz graph is a very good static topology on which to construct DHT schemes. We demonstrate the optimal diameter and optimal fault tolerance properties of the Kautz graph and prove that the Kautz graph is (1+o(1))-congestion-free when using the long-path routing algorithm. We then propose FissionE, a novel DHT scheme based on the Kautz graph. FissionE has constant degree and O(log N) diameter and is (1+o(1))-congestion-free. FissionE shows that a DHT scheme with constant degree and constant congestion can achieve O(log N) diameter, which is better than the lower bound Ω(N^(1/d)) conjectured before.
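
For background, the Kautz graph K(d, n) is commonly defined over length-n strings on d+1 symbols in which no two consecutive symbols are equal, with neighbors produced by a shift. The sketch below illustrates that standard definition (it is our own illustration, not code from the paper):

```python
# Sketch of the Kautz graph K(d, n): a node u1...un has an edge to
# u2...un a for every symbol a in {0, ..., d} with a != un, so every
# node has exactly d out-neighbors.
def kautz_neighbors(node, d):
    last = node[-1]
    return [node[1:] + (a,) for a in range(d + 1) if a != last]
```

For example, in K(2, 2) the node (0, 1) has out-neighbors (1, 0) and (1, 2); this constant degree d is what a Kautz-based DHT such as FissionE exploits.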

Dongsheng Li, Xicheng Lu, Jinshu Su
A Parameterized Model of TCP Slow Start

Based on an analysis of the multiple packet losses in standard slow start caused by the exponential growth of the congestion window (cwnd), this paper proposes a new phase-divided TCP start scheme and designs a parameterized model to reduce packet losses and improve TCP performance. The scheme employs different cwnd growth rules while cwnd is under and over half of the slow start threshold (ssthresh), namely exponential growth and negatively exponential growth respectively, which greatly decreases the probability of multiple packet losses from a window of data and guarantees that a connection smoothly joins the Internet and transitions into congestion avoidance. The parameterized model adjusts the duration of slow start and the acceleration of cwnd growth to improve the performance of the slow start phase through various parameter settings. An adaptive parameter setting method is also designed. Simulation results show that the new method significantly decreases packet losses, improves the stability of TCP and the performance of slow start, and achieves good fairness and friendliness to other TCP connections.
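
A minimal sketch of the phase-divided idea (our own illustration; the paper's exact growth functions and parameters are not reproduced): cwnd doubles per round below ssthresh/2, then grows by ever-smaller increments as it approaches ssthresh, so the window never overshoots the threshold abruptly.

```python
# Illustrative phase-divided slow start (not the authors' code): exponential
# growth below ssthresh/2, then a "negatively exponential" phase whose
# increment halves the remaining gap, then standard linear congestion
# avoidance above ssthresh. Units: segments per round-trip.
def next_cwnd(cwnd, ssthresh):
    if cwnd < ssthresh / 2:
        return cwnd * 2                          # exponential growth
    elif cwnd < ssthresh:
        return cwnd + max(1, (ssthresh - cwnd) // 2)  # shrinking increments
    else:
        return cwnd + 1                          # congestion avoidance
```

Compared with standard slow start, which doubles all the way to ssthresh, the damped second phase is what reduces the chance of losing multiple packets from one window.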

Xiaoheng Deng, Zhigang Chen, Lianming Zhang
SMM: A Truthful Mechanism for Maximum Lifetime Routing in Wireless Ad Hoc Networks

As an important metric for wireless ad hoc networks, network lifetime has received great attention in recent years. Existing lifetime-aware algorithms implicitly assume that nodes are cooperative and truthful, and they cannot work properly when the network contains selfish nodes. To make these algorithms achieve their design objectives even in the presence of selfish nodes, we propose in this paper a truthful mechanism, Second-Max-Min (SMM), based on an analysis of current algorithms, together with a DSR-like routing protocol for the mechanism's implementation. In the SMM mechanism, the source node gives appropriate payments to relay nodes, and the payments are related to the path with the second maximum lifetime among all possible paths. We show that the payment ratio is relatively small due to the nature of lifetime-aware routing algorithms, which is confirmed by experiments.

Qing Zhang, Huiqiong Chen, Weiwei Sun, Bole Shi
A Dioid Linear Algebra Approach to Study a Class of Continuous Petri Nets

Continuous Event Graphs (CEGs), a subclass of Continuous Petri Nets, are defined as the limiting cases of timed event graphs and timed event multigraphs. By treating the cumulative tokens consumed by transitions as state variables, endowing the set of monotone nondecreasing functions with pointwise minimum as addition, and endowing the lower-semicontinuous mappings from this set to itself with pointwise minimum as addition and composition of mappings as multiplication, a set of dioid-algebraic linear equations is inferred as a novel method of analyzing a special class of CEGs. As a new modeling approach, it clearly illustrates the characteristics of continuous events. Based on the algebraic model, an example of optimal control is demonstrated.

Duan Zhang, Huaping Dai, Youxian Sun
A Fully Adaptive Fault-Tolerant Routing Methodology Based on Intermediate Nodes

Massively parallel computing systems are being built with thousands of nodes. Because of the high number of components, it is critical to keep these systems running even in the presence of failures. Interconnection networks play a key role in these systems, and this paper proposes a fault-tolerant routing methodology for use in such networks. The methodology supports any minimal routing function (including fully adaptive routing), does not degrade performance in the absence of faults, does not disable any healthy node, and is easy to implement both in meshes and tori. In order to avoid network failures, the methodology uses a simple mechanism: for some source-destination pairs, packets are forwarded to the destination node through a set of intermediate nodes (without being ejected from the network). The methodology is shown to tolerate a large number of faults (e.g., five/nine faults when using two/three intermediate nodes in a 3D torus). Furthermore, the methodology offers graceful performance degradation: in an 8 × 8 × 8 torus network with 14 faults, throughput decreases by only 6.49%.

N. A. Nordbotten, M. E. Gómez, J. Flich, P. López, A. Robles, T. Skeie, O. Lysne, J. Duato
Extended DBP for (m,k)-Firm Based QoS

In this paper, an extended DBP (E_DBP) scheme is studied for the (m,k)-firm constraint. The basic idea of the proposed algorithm is to take into account the distance to exit a failure state, a notion symmetric to DBP's distance to fall into a failure state. Quality of Service (QoS) in terms of dynamic failure and delay is evaluated. Simulation results reveal the effectiveness of E_DBP in providing better QoS.
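
For background, the classic DBP priority that E_DBP extends assigns each stream a distance to dynamic failure computed from its last k deadline outcomes (the sketch below illustrates standard DBP, not the E_DBP extension itself):

```python
# Classic DBP distance sketch: given the last k deadline outcomes
# (1 = met, 0 = missed) with history[0] the most recent, the distance to a
# dynamic failure is k minus the position of the m-th most recent met
# deadline, plus 1. A smaller distance means a more urgent (higher) priority.
def dbp_distance(history, m, k):
    met_positions = [i + 1 for i, met in enumerate(history) if met]
    if len(met_positions) < m:
        return 0                   # fewer than m met: already in failure
    return k - met_positions[m - 1] + 1
```

E_DBP's symmetric notion, the distance to *exit* a failure state, would count how many consecutive met deadlines are still needed, rather than how many misses can still be tolerated.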

Jiming Chen, Zhi Wang, Yeqiong Song, Youxian Sun
Weighted Fair Scheduling Algorithm for QoS of Input-Queued Switches

High-speed networks usually deal with two main issues. The first is fast switching to obtain good throughput; at present, state-of-the-art switches employ an input-queued architecture to achieve high throughput. The second is providing QoS guarantees for a wide range of applications, which is generally considered in output-queued switches. For these two requirements, many scheduling mechanisms have been proposed to support both better throughput and QoS guarantees in high-speed switching networks. In this paper, we present a scheduling algorithm for providing QoS guarantees and higher throughput in an input-queued switch. The proposed algorithm, called Weighted Fair Matching (WFM), provides QoS guarantees without output queues; that is, WFM is a flow-based algorithm that achieves asymptotically 100% throughput with no speedup while providing QoS.

Sang-Ho Lee, Dong-Ryeol Shin, Hee Yong Youn
A Scalable Distributed Architecture for Multi-party Conferencing Using SIP

As various multimedia communication services are increasingly required by Internet users, several signaling protocols have been proposed for the efficient control of complex multimedia communication services. However, the model and architecture for multi-party conferencing currently being standardized by the IETF have limitations in scalability that prevent them from meeting the management requirements of large-scale multimedia conferencing services. In this article, we present a new scalable distributed architecture, based on SIP, for the efficient management of large-scale multimedia conferencing services. High scalability is achieved by adding, deleting, and modifying multiple mixers and composing the conference server network in a distributed way, in real time, and without disruption of service. The SIP-based control mechanism for achieving this scalability is designed in detail. Finally, the performance of the proposed architecture is evaluated by simulation.

Young-Hoon Cho, Moon-Sang Jeong, Jong-Tae Park
DC-mesh: A Contracted High-Dimensional Mesh for Dynamic Clustering

This paper proposes a DC-mesh network that allows requesting nodes to be grouped into clusters while their requests are sent to a target node, and that is easy to lay out on an LSI chip. To organize the DC-mesh, we partition the word space based on the Hamming code [1]. We introduce an index scheme, (parity-value, information-value), in the word space, and map it onto a 4-D (dimensional) mesh so that the Hamming distance between the words in each partition is preserved in the Manhattan distance between the corresponding nodes on the mesh; two of the dimensions are contracted for easy wiring. The resulting DC-mesh consists of a number of local 2-D meshes and a single global 2-D mesh; all processing nodes linked to one local mesh are connected to one node of the global mesh via a bus to compensate for the contraction. A subset of the nodes in a partition organizes a dynamic cluster. The diameter equals the greater of the diameters of the local and global meshes.
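
As an illustration of Hamming-code-based word-space partitioning (our own sketch with the standard Hamming(7,4) check positions; the paper's exact index mapping is not reproduced), a 7-bit word can be indexed by a parity value (its syndrome) together with an information value, so that words sharing a parity value form one partition:

```python
# Sketch: index a 7-bit word as (parity-value, information-value) under the
# Hamming(7,4) code. Bit i of `word` is code bit b_{i+1}; the standard
# parity checks cover positions {1,3,5,7}, {2,3,6,7}, {4,5,6,7}.
def hamming_index(word):
    b = [(word >> i) & 1 for i in range(7)]        # b[0] = b1, ..., b[6] = b7
    p1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    p2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    p4 = b[3] ^ b[4] ^ b[5] ^ b[6]
    parity = (p4 << 2) | (p2 << 1) | p1            # 3-bit partition index
    info = (b[6] << 3) | (b[5] << 2) | (b[4] << 1) | b[2]  # 4 data bits
    return parity, info
```

Since Hamming(7,4) is a perfect code, the 128 words split into 8 partitions of 16 words each, a natural fit for mapping onto local meshes.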

Masaru Takesue
The Effect of Adaptivity on the Performance of the OTIS-Hypercube Under Different Traffic Patterns

The OTIS-hypercube is an optoelectronic architecture for interconnecting the processing nodes of a multiprocessor system. In this paper, an empirical performance evaluation of the OTIS-hypercube is conducted for different traffic patterns and routing algorithms. It is shown that, depending on the traffic pattern, minimal-path routing may not have the best performance and adaptivity may bring no improvement. All judgments made are based on observations from the results of extensive simulation experiments on the interconnection network. In addition, logical explanations are suggested for the causes of certain noticeable performance characteristics.

H. H. Najaf-abadi, H. Sarbazi-Azad
An Empirical Autocorrelation Form for Modeling LRD Traffic Series

Paxson and Floyd (IEEE/ACM T. Netw., 1995) remarked on the limitation of fractional Gaussian noise (FGN) in accurately modeling LRD network traffic series. Beran (1994) suggested developing a sufficient class of parametric correlation forms for modeling the whole correlation structure of LRD series. M. Li (Electr. Letts., 2000) gave an empirical correlation form. This paper extends Li's previous letter by analyzing it in Hilbert space and showing its flexibility in data modeling by comparing it with FGN (a commonly used traffic model). Verifications with real traffic suggest that the discussed correlation structure can be used to flexibly model LRD traffic series.

Ming Li, Jingao Liu, Dongyang Long
Statistical Error Analysis on Recording LRD Traffic Time Series

Measurement of LRD traffic time series is the first stage in experimental research on traffic patterns. From a measurement point of view, if the length of a measured series is too short, an estimate of a specific objective (e.g., the autocorrelation function) may not achieve a given accuracy. On the other hand, if a measured series is over-long, it demands too much storage space and computation time. Thus, a meaningful issue in measurement is how to determine the record length of an LRD traffic series for a given degree of accuracy of the estimate of interest. In this paper, we present a formula for the required record length of an LRD traffic series given a bound on the accuracy of the autocorrelation function estimate of fractional Gaussian noise and a given value of H. Further, we apply our approach to assessing some widely used traces in traffic research, giving a theoretical evaluation of those traces from the viewpoint of statistical error analysis.
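
The target quantity here, the autocorrelation function of fractional Gaussian noise with Hurst parameter H, has a standard closed form, sketched below as background (the paper's record-length formula itself is not reproduced):

```python
# Autocorrelation of fractional Gaussian noise at lag k with Hurst
# parameter H (0.5 < H < 1 gives long-range dependence):
#   r(k) = 0.5 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})
def fgn_acf(k, H):
    k = abs(k)
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                  + abs(k - 1) ** (2 * H))
```

For H = 0.5 the correlation vanishes at all non-zero lags, while larger H yields slowly decaying correlations, the regime in which long records are needed for accurate estimation.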

Ming Li
Load Balancing Routing in Low-Cost Parallel QoS Sensitive Network Architecture

A low-cost, parallel, QoS-sensitive network architecture that supports load balancing is developed in this paper. To deal with the scaling problem, a large network is structured by grouping nodes into parallel domains. Different from traditional approaches, especially hierarchical structures, the parallel structure aggregates the topologies of domains in a new way. The corresponding routing algorithm, which adopts two techniques for low-cost routing, QoS line segments and a swapping matrix, emphasizes load balancing within networks. Finally, simulation results show appealing performance in terms of different metrics.

Furong Wang, Ye Wu

Session 7: Network Security

Reversible Cellular Automata Based Encryption

In this paper, cellular automata (CA) are applied to construct a symmetric-key encryption algorithm. A new block cipher based on one-dimensional, uniform, and reversible CA is proposed. A class of CA with rules specifically constructed to be reversible is used. The algorithm uses a 224-bit key. It is shown that the algorithm satisfies the safety criterion called the Strict Avalanche Criterion. Due to the huge key space, a brute-force attack appears practically impossible.
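
One standard way to obtain reversible 1-D CA is the second-order construction, in which the next row is the local rule's output XORed with the row before last, making any rule invertible. The sketch below illustrates that general principle only; it is not the paper's cipher, key schedule, or rule class.

```python
# Second-order reversible 1-D CA on a ring:
#   s_{t+1}[i] = f(s_t neighborhood) ^ s_{t-1}[i]
# Because XOR is self-inverse, s_{t-1}[i] = f(s_t neighborhood) ^ s_{t+1}[i],
# so the evolution can be run backwards exactly; this reversibility is the
# basis of CA-based encryption/decryption.
def step(prev, cur, rule):
    n = len(cur)
    nxt = []
    for i in range(n):
        neigh = (cur[(i - 1) % n] << 2) | (cur[i] << 1) | cur[(i + 1) % n]
        nxt.append(((rule >> neigh) & 1) ^ prev[i])
    return cur, nxt

def step_back(cur, nxt, rule):
    n = len(cur)
    prev = []
    for i in range(n):
        neigh = (cur[(i - 1) % n] << 2) | (cur[i] << 1) | cur[(i + 1) % n]
        prev.append(((rule >> neigh) & 1) ^ nxt[i])
    return prev, cur
```

Running T forward steps with a key-derived rule scrambles a plaintext pair of rows; a receiver holding the key recovers the plaintext with T backward steps.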

Marcin Seredynski, Krzysztof Pienkosz, Pascal Bouvry
Ontology Based Cooperative Intrusion Detection System

As malicious intrusions span sites more frequently, network security plays a vital role in the Internet. Intrusion detection systems (IDSs) are expected to provide powerful protection against malicious behavior. However, high false negative and false positive rates prevent intrusion detection systems from being used in practice. After a survey of present intrusion detection systems, we believe more accurate and efficient detection results can be obtained through multi-sensor cooperative detection. To aid cooperative detection, an ontology consisting of attribute nodes and value nodes is presented after an analysis of IDS rules and various classes of computer intrusions. On the basis of the ontology, a matchmaking method is given to improve the flexibility of detection. A cooperative detection framework based on the ontology is also discussed. The ontology proposed in this paper has two advantages: first, it makes detection more flexible, and second, it provides global locality information to support cooperation.

Yanxiang He, Wei Chen, Min Yang, Wenling Peng
Adding Security to Network Via Network Processors

With the increasing need for security, cryptographic processing becomes a crucial issue for network devices. Traditionally, security functions are implemented with Application-Specific Integrated Circuits (ASICs) or General-Purpose Processors (GPPs). Network processors (NPs) are emerging as a programmable alternative to the conventional solutions, delivering high performance and flexibility at moderate cost. This work compares and analyzes the architectural characteristics of many widespread cryptographic algorithms on the Intel IXP2800 network processor. In addition, we investigate several implementation and optimization principles that can improve cryptographic performance on NPs. Most of the results reported here should be applicable to other network processors, since they have similar components and architectures.

Hao Yin, Zhangxi Tan, Chuang Lin, Guangxi Zhu
A Method to Obtain Signatures from Honeypots Data

Building intrusion detection models in an automatic and online way is worth discussing for the timely detection of new attacks. This paper gives a scheme to automatically construct snort rules online, based on data captured by honeypots. Since traffic to honeypots represents abnormal activity, activity patterns extracted from those data can be used as attack signatures. Packets captured by honeypots are unwelcome, but it appears unnecessary to translate each of them into a signature using the entire payload as the activity pattern. In this paper, we present a way based on the system specifications of honeypots, which can reflect the seriousness level of captured packets. Relying on the discussed system specifications, only critical packets are chosen to generate signatures, and discriminating values are extracted from packet payloads as activity patterns. After formalizing the packet structure and the syntax of snort rules, we design an algorithm that generates snort rules immediately upon meeting critical packets.

Chi-Hung Chi, Ming Li, Dongxi Liu
A Framework for Adaptive Anomaly Detection Based on Support Vector Data Description

To improve the efficiency and usability of adaptive anomaly detection systems, we propose a new framework based on the Support Vector Data Description (SVDD) method. This framework includes two main techniques: online change detection and unsupervised anomaly detection. The first enables model training data to be obtained automatically, by measuring and distinguishing change caused by intensive attacks from normal behavior change and then filtering out most intensive attacks. The second retrains the model periodically and detects the forthcoming data. Results of experiments with the KDD'99 network data show that these techniques can handle intensive attacks effectively and adapt to concept drift while still detecting attacks. As a result, the false positive rate is reduced from 13.43% to 4.45%.

Min Yang, HuanGuo Zhang, JianMing Fu, Fei Yan
Design and Analysis of Improved GSM Authentication Protocol for Roaming Users

In this paper, we improve the GSM (Global System for Mobile Communications) authentication protocol to reduce the signaling load on the network. The proposed protocol introduces the notion of an enhanced user profile containing a few VLR IDs for the location areas a mobile user is most likely to visit. We decrease the authentication costs for roaming users by exploiting the enhanced user profile. Our protocol is analyzed with regard to efficiency and is compared with the original protocol.

GeneBeck Hahn, Taekyoung Kwon, Sinkyu Kim, JooSeok Song
A Novel Intrusion Detection Method

How to detect new intrusion attacks is an important issue for network security. This paper investigates unsupervised intrusion detection methods. A distance definition for mixed attributes, a simple method for calculating the cluster radius threshold, an outlier factor measuring the deviating degree of a cluster, and a novel intrusion detection method are proposed in this paper. The experimental results show that the method has promising performance, with a high detection rate and a low false alarm rate, and can also detect new intrusions.

ShengYi Jiang, QingHua Li, Hui Wang
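As one illustration of the ingredients the abstract lists, the sketch below shows a possible distance for mixed numeric/categorical attributes and a size-based outlier factor. Both formulations are hypothetical stand-ins, not the paper's exact definitions.

```python
def mixed_distance(a, b):
    # One illustrative mixed-attribute distance: squared difference for
    # numeric fields, 0/1 mismatch for categorical ones.
    total = 0.0
    for x, y in zip(a, b):
        if isinstance(x, (int, float)) and isinstance(y, (int, float)):
            total += (x - y) ** 2
        else:
            total += 0.0 if x == y else 1.0
    return total ** 0.5

def outlier_factor(cluster, population):
    # Hypothetical outlier factor: the smaller a cluster relative to the
    # whole data set, the more it is assumed to deviate.
    return 1.0 - len(cluster) / len(population)

# Mixed records: (duration, protocol). The last one is the odd one out.
records = [(0.1, "tcp"), (0.2, "tcp"), (0.15, "tcp"), (5.0, "udp")]
d = mixed_distance(records[0], records[3])   # numeric gap + category mismatch
f = outlier_factor([records[3]], records)    # singleton cluster scores high
```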

Session 8: Network Storage

An Implementation of Storage-Based Synchronous Remote Mirroring for SANs

Remote mirroring ensures that all data written to a primary storage device are also written to a remote secondary storage device to support disaster recoverability. In this study, we designed and implemented a storage-based synchronous remote mirroring system for SAN-attached storage nodes. Taking advantage of the high bandwidth and long-distance linking ability of dedicated fiber connections, this approach provides a consistent and up-to-date copy in a remote location to meet the demand for disaster recovery. The system imposes no host or application overhead and is independent of the actual storage unit. In addition, we present a disk failover solution. The performance results indicate that the bandwidth of the storage node with mirroring under a heavy load was 98.67% of the bandwidth without mirroring, only a slight performance loss. This means that our synchronous remote mirroring has little impact on the host’s average response time and the actual bandwidth of the storage node.

Ji-wu Shu, Rui Yan, Dongchan Wen, Weimin Zheng
A Network Bandwidth Computation Technique for IP Storage with QoS Guarantees

IP storage becomes more commonplace with the prevalence of the iSCSI (Internet SCSI) protocol that enables the SCSI protocol to run over the existing IP network. Meanwhile, storage QoS that assures a required storage service for each storage client has gained in importance with increased opportunities for multiple storage clients to share the same IP storage. Considering the existence of other competing network traffic in IP network, we have to provide storage I/O traffic with guaranteed network bandwidth. Most importantly, we need to calculate the required network bandwidth to assure a given storage QoS requirement between a storage client and IP storage. This paper proposes a network bandwidth computation technique that not only accounts for the overhead caused by the underlying network protocols, but also guarantees the minimum data transfer delay over the IP network. Performance evaluations with various I/O workload patterns on our IP storage testbed verify the correctness of the proposed technique; that is, allocating a part (0.6–20%) of the entire network bandwidth can assure the given storage QoS requirements.

Young Jin Nam, Junkil Ryu, Chanik Park, Jong Suk Ahn
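A back-of-the-envelope version of such a bandwidth computation might look as follows. The overhead constants and the two constraints (a sustained I/O rate plus a per-request delay bound) are illustrative assumptions, not the paper's calibrated model.

```python
def required_bandwidth(io_rate_Bps, block_size, delay_bound_s,
                       payload=1460, overhead=98):
    # Estimate the network bandwidth (bytes/s) needed to sustain a storage
    # I/O rate while bounding per-request transfer delay. `overhead` stands
    # in for iSCSI/TCP/IP/Ethernet headers per `payload` bytes; both figures
    # are illustrative, not measured values.
    inflation = (payload + overhead) / payload          # protocol overhead factor
    throughput_bw = io_rate_Bps * inflation             # sustain the average rate
    delay_bw = block_size * inflation / delay_bound_s   # finish one block in time
    return max(throughput_bw, delay_bw)

# e.g. 10 MB/s of I/O in 64 KB requests with a 10 ms delay bound
bw = required_bandwidth(10e6, 64 * 1024, 0.01)
```

With these numbers the sustained-rate constraint dominates; shrinking the delay bound or enlarging the request size makes the delay constraint take over, which is the trade-off the paper's technique quantifies.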
Paramecium: Assembling Raw Nodes into Composite Cells

In conventional DHTs, each node is assigned an exclusive slice of the identifier space. Though simple, such an arrangement can be crude. In this paper we propose a generic component structure: several independent nodes constitute a cell; a slice of the identifier space is under the nodes' joint control; some of the nodes in a cell cooperatively and transparently shield the cell's internal dynamism and structure from outsiders; and this type of structure can be recursively repeated. Cells act like raw nodes in conventional DHTs, and cell components can be used as bricks to construct any DHT-like system. This approach provides encapsulation, scalable hierarchy, and enhanced security with little added complexity.

Ming Chen, Guangwen Yang, Yongwei Wu, Xuezheng Liu
The Flexible Replication Method in an Object-Oriented Data Storage System

This paper introduces a replication method for object-oriented data storage that is highly flexible, fitting different applications to improve availability. In view of the semantics of different applications, this method defines three data-consistency criteria, and developers can select the most appropriate criterion for their programs through storage APIs. One criterion realizes quasi-linearizability, which causes non-linearizability with low probability but may not impair application semantics. Another is a weaker criterion that many Internet services can use to provide more read throughput, and the third implements a stronger consistency to fulfill strict linearizability. In addition, all three criteria follow a single algorithmic framework and differ from one another only in a few details. Compared with conventional application-specific replication methods, this method has higher flexibility.

Youhui Zhang, Jinfeng Hu, Weimin Zheng
Enlarge Bandwidth of Multimedia Server with Network Attached Storage System

A network attached storage system is proposed to solve the bottleneck problem of the multimedia server. It adds a network channel to the RAID, so data can be transferred directly between the Net-RAID and clients. The architecture avoids expensive store-and-forward data copying between the multimedia server and storage devices when clients download/upload data from/to the server. The system performance of the proposed architecture is evaluated through a prototype implementation with multiple network disk arrays. In a multi-user environment, the measured data transfer rate is 2 to 3 times higher than that of a traditional disk array, and the service time is about 3 times shorter. Experimental results show that the architecture removes the server bottleneck and that system bandwidth increases dynamically with the expansion of storage system capacity.

Dan Feng, Yuhui Deng, Ke Zhou, Fang Wang
The NDMP-Plus Prototype Design and Implementation for Network Based Data Management

Network based data management/backup/restore is the key component of a data storage centre. This paper proposes a new network-based data management system, NDMP-Plus. We first discuss the components of the NDMP-Plus architecture. Then we detail two new techniques in NDMP-Plus: the VSL (Virtual Storage Layer) and the negotiation mechanism. The VSL is the core component providing flexibility, as it avoids direct network communication with the storage media. The negotiation mechanism is the key mechanism for improving performance. Furthermore, we carry out experiments to evaluate the performance of NDMP-Plus. The results suggest that NDMP-Plus has greater flexibility and higher performance than the original NDMP.

Kai Ouyang, Jingli Zhou, Tao Xia, Shengsheng Yu

Session 9: Multimedia Service

Further Optimized Parallel Algorithm of Watershed Segmentation Based on Boundary Components Graph

Watershed segmentation/transform is a classical method for gray-scale image segmentation in mathematical morphology. Nevertheless, the watershed algorithm has a strongly recursive nature, so a straightforward parallelization has very low efficiency. First, the advantages and disadvantages of some existing parallel algorithms are analyzed. Then, a Further Optimized Parallel Watershed Algorithm (FOPWA) is presented based on the boundary components graph. As the experiments show, FOPWA improves both running time and relative speedup, and offers more flexibility.

Haifang Zhou, Xuejun Yang, Yu Tang, Nong Xiao
The Transmitted Strategy of Proxy Cache Based on Segmented Video

Using a proxy cache is a key technique that can help reduce server load, network bandwidth, and startup delays. Based on the popularity of clients' requests for segmented video, we extend the lengths of batches and patches by using the dynamic cache of a streaming-media proxy. We present transmission schemes that use the dynamic cache, namely unicast suffix batch, unicast patch, multicast patch, multicast merge, and optimal batch patch, all through a proxy cache based on segmented video. We then quantitatively explore the impact on transmission cost of the choice of transmission scheme, the cache allocation policy, the proxy cache size, and the availability of unicast versus multicast capability.

Zhiwen Xu, Xiaoxin Guo, Yunjie Pang, Zhengxuan Wang
The Strategy of Batch Using Dynamic Cache for Streaming Media

Batching is an important technique for delivering video over the Internet or in VoD, and a key method for improving the efficiency of video multicast. In this paper, we study batch strategies for a streaming-media proxy cache that uses dynamic caching, and propose three cache algorithms: window batch, size batch, and efficient batch. These methods increase the batch length, address the batching latency problem in video multicast, improve the byte-hit ratio of the streaming-media proxy cache, and economize backbone network resources. Event-driven simulations show that these strategies outperform prefix caching and segment caching.

Zhiwen Xu, Xiaoxin Guo, Yunjie Pang, Zhengxuan Wang
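A minimal sketch of a "window batch" policy, under the assumption that requests arriving within a fixed window of the batch opener can share one multicast stream. The function name and the window value are hypothetical, not the paper's parameters.

```python
def window_batch(requests, window):
    # Group request arrival times so that every request in a batch arrives
    # within `window` seconds of the request that opened the batch; one
    # stream can then serve the whole batch.
    batches, current = [], []
    for t in sorted(requests):
        if current and t - current[0] > window:
            batches.append(current)
            current = []
        current.append(t)
    if current:
        batches.append(current)
    return batches

print(window_batch([0.0, 1.2, 3.5, 9.0], window=2.0))
# → [[0.0, 1.2], [3.5], [9.0]]
```

The size-batch and efficient-batch variants the abstract names would close a batch on a count or a cost criterion instead of elapsed time.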
New Regions of Interest Image Coding Using Up-Down Bitplanes Shift for Network Applications

Regions Of Interest (ROI) image coding is one of the most significant features in JPEG2000 for network applications. In this paper, a new approach to ROI coding called Up-Down Bitplanes Shift (UDBShift) is presented. The method separates all bitplanes into three parts: Important Significant Bitplanes (ISB), General Significant Bitplanes (GSB), and Least Significant Bitplanes (LSB). A certain number of ROI bitplanes are up-shifted into the ISB according to the degree of interest of each ROI. Then, some background (BG) bitplanes are down-shifted into the LSB according to the encoding requirements. Finally, the remaining significant bitplanes of the ROIs and the BG stay in the GSB and are not shifted. Simulation results show a significant reduction in transmission time and enhanced flexibility at the expense of a small increase in complexity. Additionally, the method supports arbitrarily shaped multiple-ROI coding with different degrees of interest, without coding the ROI shapes.

Li-bao Zhang, Ke Wang
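The core bitplane-shift operation can be sketched on a single quantized coefficient: ROI coefficients are scaled up so their top bitplanes decode first, background coefficients are scaled down. The shift amounts and clamping below are illustrative, not the UDBShift parameters from the paper.

```python
def udb_shift(coeff, in_roi, roi_up=3, bg_down=2, nbits=8):
    # ROI coefficient: shift its bitplanes up (clamped to the nbits range)
    # so the encoder emits them before any background bitplanes.
    if in_roi:
        return min(coeff << roi_up, (1 << nbits) - 1)
    # Background coefficient: shift its least significant bitplanes away.
    return coeff >> bg_down

print(udb_shift(5, in_roi=True))    # ROI coefficient promoted    → 40
print(udb_shift(5, in_roi=False))   # background coefficient demoted → 1
```

Per-ROI degrees of interest would map to different `roi_up` values, which is what lets multiple ROIs decode at different qualities.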

Workshop1: Building Intelligent Sensor Networks

Bridging the Gap Between Micro and Nanotechnology: Using Lab-on-a-Chip to Enable Nanosensors for Genomics, Proteomics, and Diagnostic Screening

The growing need for accurate and fast methods of DNA and protein determination in the post-human-genome era has generated considerable interest in the development of new microfluidic analytical platforms, fabricated using methods adapted from the semiconductor industry. These methods have resulted in the development of the Lab-on-a-Chip concept, a technology that often pairs a miniaturised biochip (as the analytical device) with rather larger instrumentation for controlling the associated sensors and fluidics. This talk will explore the development of new Lab-on-a-Chip platforms for DNA, protein and cell screening, using microfluidics as a packaging technology in order to enable advances in nanoscale science to be implemented in a Lab-on-a-Chip format. The talk will also show how system-on-a-chip methods can be integrated with Lab-on-a-Chip devices to create remote and distributed intelligent sensors, which can be used in a variety of diagnostic applications, including, for example, chemical sensing within the GI tract.

Jonathan M. Cooper, Erik A. Johannessen, David R. S. Cumming
The Development of Biosensors and Biochips in IECAS

In this paper, disposable thin-film-electrode biosensors featuring low cost, high reliability, robustness, and low sample volume, together with a hand-held multichannel meter, were developed. Various biomolecules, such as glucose, lactate, β-hydroxybutyrate, cholesterol, hemoglobin, and creatine kinase, have been detected in low volumes (less than 3 μL). This is significant for applications in home health care, clinical diagnostics, and the physiological assessment and physical performance evaluation of athletes. Biochips based on micro-electro-mechanical-systems (MEMS) technology provide novel biochemical analysis techniques that offer many advantages, including high sample throughput, high integration, and reduced cost, and they have developed rapidly in recent years. This paper also presents research results on MEMS-based biochips, including DNA purification chips, DNA-PCR chips, capillary electrophoresis chips, PCR-CE chips, LAPS (light addressable potential sensor) for DNA detection, and DNA SPR (surface plasmon resonance) and DNA FET (field effect transistor) sensors. These biochips have potential applications in health care diagnosis, environment monitoring, gene sequencing, and high-throughput drug screening.

Xinxia Cai, Dafu Cui
Open Issues on Intelligent Sensor Networks

In this paper, we address some open issues in intelligent sensor network research. Recent advancements in wireless communications and electronics have enabled the development of low-cost sensor networks, one of the most important technologies of the 21st century.

Yiqiang Chen, Wen Gao, Junfa Liu
Enabling Anytime Anywhere Wireless Sensor Networks

A self-configuring wireless sensor network (WSN) system will be presented. This ”smart-dust” system, deployed in more than 500 installations and based on the UC Berkeley open-source TinyOS embedded operating system, is the most widely used WSN worldwide. Applications, their requirements and characteristics, markets, and the latest technology will be presented. Key technical issues with real-world deployments will be explored.

John Crawford
A Query-Aware Routing Algorithm in Sensor Networks

Wireless sensor networks have become increasingly popular due to the variety of applications in both military and civilian fields. Routing algorithms are critical to the successful operation of sensor networks, and a number of them have been proposed. However, existing routing algorithms are designed in isolation from the particular communication needs of data management. This paper focuses on designing routing algorithms that consider the needs of query processing in sensor networks, and proposes a query-aware routing algorithm. The algorithm has the following advantages over other routing algorithms. First, it processes as many queries as possible while routing. Second, broadcasts are executed locally, saving the energy required by global broadcasts. Third, routing is executed by searching and generating a binary tree, and only two boundary nodes are selected to broadcast messages when a broadcast is needed, so the number of broadcasts is reduced dramatically while the coverage of a local broadcast is increased. Finally, multiple routing paths for many routing requirements are found by merging the requirements into a single random walk through the sensor network. Experimental results show that the proposed algorithm has better performance and scalability than other routing algorithms.

Jinbao Li, Jianzhong Li, Shengfei Shi
Sensors Network Optimization by a Novel Genetic Algorithm

This paper describes the optimization of a sensor network by a novel Genetic Algorithm (GA) that we call King Mutation C2. For a given distribution of sensors, the goal of the system is to determine the optimal combination of sensors that can detect and/or locate the objects. An optimal combination is the one that minimizes the power consumption of the entire sensor network and gives the best accuracy of location of desired objects. The system constructs a GA with the appropriate internal structure for the optimization problem at hand, and King Mutation C2 finds the quasi-optimal combination of sensors that can detect and/or locate the objects. The study is performed for the sensor network optimization problem with five objects to detect/track and the results obtained by a canonical GA and King Mutation C2 are compared.

Hui Wang, Anna L. Buczak, Hong Jin, Hongan Wang, Baosen Li
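For intuition, a generic GA for sensor-subset selection might look like the sketch below. It uses plain one-point crossover and bit-flip mutation rather than the paper's King Mutation C2 operator, and the power and coverage numbers are invented.

```python
import random

def fitness(genome, power, coverage, objects):
    # Penalize uncovered objects heavily, then minimize total power of
    # the active sensors (lower fitness is better).
    active = [i for i, g in enumerate(genome) if g]
    covered = set().union(*(coverage[i] for i in active)) if active else set()
    return 1000 * (len(objects) - len(covered)) + sum(power[i] for i in active)

def evolve(power, coverage, objects, pop=30, gens=200, seed=1):
    rng = random.Random(seed)
    n = len(power)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda g: fitness(g, power, coverage, objects))
        elite = population[: pop // 2]               # keep the better half
        children = []
        for _ in range(pop - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                # one-point crossover
            child[rng.randrange(n)] ^= 1             # bit-flip mutation
            children.append(child)
        population = elite + children
    return min(population, key=lambda g: fitness(g, power, coverage, objects))

# 4 sensors, 3 objects; sensor 0 covers everything but is power-hungry.
power = [10, 2, 2, 2]
coverage = [{0, 1, 2}, {0}, {1}, {2}]
best = evolve(power, coverage, objects={0, 1, 2})
```

A specialized mutation operator like the paper's aims to reach low-power covering subsets in fewer generations than the plain bit-flip used here.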
Online Mining in Sensor Networks

Online mining in large sensor networks is just starting to attract interest. Finding patterns in such an environment is both compelling and challenging. The goal of this position paper is to understand the challenges and to identify the research problems in online mining for sensor networks. As an initial step, we identify the following three problems to work on: (1) sensor data irregularity detection; (2) sensor data clustering; and (3) sensory attribute correlation discovery. We also outline our preliminary proposal of solutions to these problems.

Xiuli Ma, Dongqing Yang, Shiwei Tang, Qiong Luo, Dehui Zhang, Shuangfeng Li
The HKUST Frog Pond – A Case Study of Sensory Data Analysis

Many sensor network applications are data-centric, and data analysis plays an important role in these applications. However, it is a challenging task to find out what specific problems and requirements sensory data analysis will face, because these applications are tightly embedded in the physical world and the sensory data reflect the physical phenomena being monitored. In this paper, we propose to use field studies as an alternative for identifying these problems and requirements. Specifically, we deployed an experimental sensor network for monitoring the frog pond in our university and analyzed the collected sensory data. We present our methodology of sensory data collection and analysis. We also discuss preliminary analytical results from the collected sensory data, together with our generalization for similar sensor network applications. We find that this case study helped us identify and understand several problems, either general or specific, in real-world sensor network application deployment and sensory data analysis.

Wenwei Xue, Bingsheng He, Hejun Wu, Qiong Luo
BLOSSOMS: A CAS/HKUST Joint Project to Build Lightweight Optimized Sensor Systems on a Massive Scale

A joint effort between the Chinese Academy of Sciences and the Hong Kong University of Science and Technology, the BLOSSOMS sensor network project aims to identify research issues at all levels from practical applications down to the design of sensor nodes. In this project, a heterogeneous sensor array including different types of application-dependent sensors as well as monitoring sensors and intruding sensors are being developed. Application-dependent power-aware communication protocols are also being studied for communications among sensor nodes. An ontology-based middleware is built to relieve the burden of application developers from collecting, classifying and processing messy sensing contexts. This project will also develop a set of tools allowing researchers to model, simulate/emulate, analyze, and monitor various functions of sensor networks.

Wen Gao, Lionel M. Ni, Zhiwei Xu
A Pervasive Sensor Node Architecture

A set of sensor nodes is the basic component of a sensor network. Many researchers are currently engaged in developing pervasive sensor nodes, owing to the great promise and application potential demonstrated by various wireless remote sensor networks. This short paper describes the concept of a sensor node architecture and current research activities on sensor node development at ICTCAS.

Li Cui, Fei Wang, Haiyong Luo, Hailing Ju, and Tianpu Li
Cabot: On the Ontology for the Middleware Support of Context-Aware Pervasive Applications

Middleware support is a major topic in pervasive computing. Existing studies mainly address the issues in the organization of and the collaboration amongst devices and services, but pay little attention to the design support of context-aware pervasive applications. Most of these applications are required to be adaptable to dynamic environments and self-managed. However, most context-aware pervasive applications nowadays have to carry out tedious tasks of gathering, classifying and processing messy context information due to the lack of necessary middleware support. To address this problem, we propose a novel approach based on ontology technology, and apply it in our Cabot project. Our approach defines a context ontology tailored to the pervasive computing environment. The ontology acts as the context information agreement amongst all computing components to support applications with flexible context gathering and classifying capabilities. This allows a domain ontology database to be constructed for storing the semantic relationships of concepts used in the pervasive computing environment. The ontology database supports applications with rich context processing capabilities. With the aid of ontology technology, Cabot further helps alleviate the impact of the naming problem and supports advanced user space switching. A case study is given to show how Cabot assists developers in designing context-aware pervasive applications.

Chang Xu, S. C. Cheung, Cindy Lo, K. C. Leung, Jun Wei
Accurate Emulation of Wireless Sensor Networks

Wireless sensor networks (WSNs) have a wide range of useful, data-centric applications, and major techniques involved in these applications include in-network query processing and query-informed routing. Both techniques require realistic environments and detailed system feedback for development and evaluation. Unfortunately, neither real sensor networks nor existing simulators/emulators are suitable for this requirement. In this design paper, we propose a distributed sensor network emulator, a Virtual Mote Network (VMNet), to meet this requirement. We describe the system architecture, the synchronization of the nodes and the virtual time emulation with a focus on mechanisms that are effective for accurate emulation.

Hejun Wu, Qiong Luo, Pei Zheng, Bingsheng He, Lionel M. Ni
LEAPS: A Location Estimation and Action Prediction System in a Wireless LAN Environment

Location estimation and user behavior recognition are research issues that go hand in hand. In the past, these two issues have been investigated separately. In this paper, we present an integrated framework called LEAPS (location estimation and action prediction), jointly developed by Hong Kong University of Science and Technology, and the Institute of Computing, Shanghai, of the Chinese Academy of Sciences that combines two areas of interest, namely, location estimation and plan recognition, in a coherent whole. Under this framework, we have been carrying out several investigations, including action and plan recognition from low-level signals and location estimation by intelligently selecting access points (AP). Our two-layered model, including a sensor-level model and an action and goal prediction model, allows for future extensions in more advanced features and services.

Qiang Yang, Yiqiang Chen, Jie Yin, Xiaoyong Chai
Reliable Splitted Multipath Routing for Wireless Sensor Networks

In wireless sensor networks (WSN), the reliability of the system can be increased by providing several paths from the source node to the destination node and sending the same packet through each of them (an approach known as multipath routing). With this technique, however, traffic increases significantly. In this paper, we analyze the combination of a new multipath routing mechanism and a data-splitting scheme, which yields an efficient solution for achieving high delivery ratios while keeping traffic low. Simulation results are presented to characterize the performance of the algorithm.

Jian Wu, Stefan Dulman, Paul Havinga
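A simple data-splitting scheme in this spirit, assuming the packet is cut into k equal shares plus one XOR parity share so that the loss of any single path is tolerable. This is an illustrative stand-in, not the authors' exact scheme.

```python
def split_with_parity(data: bytes, k: int):
    # Split `data` into k equal shares plus one XOR parity share; each share
    # travels over a different path, and any single loss is recoverable.
    if len(data) % k:
        data += b"\x00" * (k - len(data) % k)  # pad to a multiple of k
    size = len(data) // k
    shares = [data[i * size:(i + 1) * size] for i in range(k)]
    parity = shares[0]
    for s in shares[1:]:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    shares.append(parity)
    return shares

def recover(shares, lost):
    # XOR of all surviving shares reconstructs the missing one.
    rest = [s for i, s in enumerate(shares) if i != lost]
    out = rest[0]
    for s in rest[1:]:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

# Two data shares over two paths, parity over a third.
shares = split_with_parity(b"sensordata", 2)
assert recover(shares, 1) == shares[1]
```

Compared with duplicating the full packet on every path, each path carries only 1/k of the payload plus one parity share's worth of redundancy, which is how splitting keeps the traffic low.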
Reliable Data Aggregation for Real-Time Queries in Wireless Sensor Systems

In this paper, we study the reliability issue in aggregating data versions for the execution of real-time queries in a wireless sensor network in which sensor nodes are distributed to monitor events occurring in the environment. We extend the Parallel Data Shipping with Priority Transmission (PAST) scheme to be workload sensitive (the new algorithm is called PAST with Workload Sensitivity (PAST-WS)) in selecting the coordinator node and the paths for transmitting the data from the participating nodes to the coordinator node. PAST-WS considers the workload at each relay node to minimize the total cost and delay in data transmission. PAST-WS not only reduces the data aggregation cost significantly, but also distributes the aggregation workload more evenly among the nodes in the system. Both properties are very important for extending the lifetime of sensor networks, since the energy consumption rate of the nodes depends heavily on their data transmission workloads.

Kam-Yiu Lam, Henry C. W. Pang, Sang H. Son, BiYu Liang

Workshop2: Multimedia Modeling and the Security in the Next Generation Network Information Systems

The Design of a DRM System Using PKI and a Licensing Agent

As the logistics environment of digital content changes rapidly, protecting the digital rights of content has been recognized as a critical issue that must be dealt with effectively. Digital Rights Management (DRM) has attracted much interest from Internet Service Providers (ISPs), authors, and publishers of digital content seeking to create a trusted environment for the access and use of digital resources. In this paper, PKI (Public Key Infrastructure) and a licensing agent are used to prevent illegal use of digital content by unauthorized users. In addition, a DRM system is proposed and designed that performs proprietary encryption and real-time decoding using the I-frame-under-container method to protect the copyright of video data.

KeunWang Lee, JaePyo Park, KwangHyoung Lee, JongHee Lee, HeeSook Kim
Multimedia Synchronization for Handoff Control with MPEG in All-IP Mobile Networks

This paper proposes a handoff management scheme and synchronization algorithm for an all-IP based multimedia system. The synchronization algorithm and handoff management method are designed to realize smooth play-out of a multimedia stream with minimal loss under the handoff conditions that normally occur due to the movement of mobile hosts. Handoffs, which occur frequently in mobile environments, cause the loss of multimedia stream data buffered at base stations when the base station changes; as a result, the stream suffers low QoS from play-out disruption. The proposed scheme not only provides continuous play-out but also achieves a higher packet play-out rate and a lower loss rate than previous methods.

Gi-Sung Lee, Hong-jin Kim, Il-Sun Hwang
Fuzzy Logic Adaptive Mobile Location Estimation

In this study, we propose a novel mobile tracking scheme that makes fuzzy-based decisions by considering information such as previous location, moving direction, and distance to the base station, as well as received signal strength, resulting in estimation performance much better than that of previous schemes. Our scheme divides a cell into many blocks based on signal strength and then estimates, stepwise, the optimal block in which a mobile is located using Multi-Criteria Decision Making (MCDM). Numerical results show that the proposed mobile tracking method performs better than the conventional method using only received signal strength.

Jongchan Lee, Seung-Jae Yoo, Dong Chun Lee
A Host Protection Framework Against Unauthorized Access for Ensuring Network Survivability

Currently, the major focus of network security is securing individual components and preventing unauthorized access to network services. Ironically, Address Resolution Protocol (ARP) poisoning and spoofing techniques can themselves be used to prohibit unauthorized network access and resource modification. Protecting ARP, which relies on hosts caching reply messages, can be the primary method of obstructing misuse of the network. This paper proposes a network service access control framework that provides a comprehensive, host-by-host perspective on the security of IP (Internet Protocol) over Ethernet networks. We also show how this framework can be applied to network elements for detecting, correcting, and preventing security vulnerabilities.

Hyuncheol Kim, Sunghae Kim, Seongjin Ahn, Jinwook Chung
A Design and Implementation of Network Traffic Monitoring System for PC-room Management

This study proposes a network traffic monitoring system that supports the operation, management, expansion, and design of a network through the analysis and diagnosis of the network equipment and lines in a PC room. The proposed monitoring system is lightweight for use in wireless environments and applies web-based JAVA technology to overcome limits on managerial space, inconvenience, and platform dependency, and to improve its applicability and usability on real networks through performance and fault analyses and their calculation algorithms. The implemented traffic monitoring system allows users to carry out network management effectively through the web, including network diagnosis, fault detection, and network configuration and design, and assists administrators with their management tasks.

Yonghak Ahn, Oksam Chae
A Vulnerability Assessment Tool Based on OVAL in Linux System

Open Vulnerability Assessment Language (OVAL), proposed by MITRE, is a standard language used to detect vulnerabilities in a local system based on the system's characteristics and configuration. OVAL consists of an XML schema, which defines the vulnerable points, and SQL query statements, which detect the vulnerable and weak points. This paper designs and implements an OVAL-based vulnerability assessment tool to detect weak points in Linux systems. It offers more readability, reliability, scalability, and simplicity than traditional tools.

Youngmi Kwon, Hui Jae Lee, Geuk Lee
Performance Analysis of Delay Estimation Models for Signalized Intersection Networks

The primary purpose of this study is to examine the performance of models for estimating the average delay experienced by vehicles passing through a signalized intersection network, and to improve the models' performance for Intelligent Transportation Systems (ITS) applications in terms of actuated signal operation. Two major problems affecting the models' performance are identified by the empirical analyses in this paper. The first is related to the time period of delay estimation. The second is that the observed arrival flow patterns differ greatly from those used to develop the existing models. This paper presents several methods to overcome these problems when estimating delay with the existing models.

Hyung Jin Kim, Bongsoo Son, Soobeom Lee
Network Intrusion Protection System Using Rule-Based DB and RBAC Policy

Role-Based Access Control (RBAC) is a method of managing and controlling hosts in a distributed manner by applying rules to the users on a host. This paper proposes a rule-based intrusion protection system based on RBAC. It admits or denies users' access to network resources by applying rules to the users on a network. The proposed network intrusion protection system has been designed and implemented with a menu-based interface, making it very convenient for users.

Min Wook Kil, Si Jung Kim, Youngmi Kwon, Geuk Lee
Effective FM Bandwidth Estimate Scheme with the DARC in Broadcasting Networks

In this paper we examine the effect on the RF bandwidth when a DARC data signal is added to an ordinary FM broadcasting signal. Generally, the bandwidth of a commercial FM broadcasting signal is strictly restricted to 200 kHz. Hence, even when the DARC data signal is added to the ordinary stereophonic FM signal, the required bandwidth must not exceed 200 kHz. The simulation results show that, even in the worst case, the required bandwidth is about 184 kHz, and the remaining 16 kHz could be used for other FM data broadcasting services.

Sang Woon Lee, Kyoo Jin Han, Keum Chan Whang
Enhanced Algorithm of TCP Performance on Handover in Wireless Internet Networks

In TCP over Wireless Internet Networks (WINs), TCP responds to losses caused by high bit-error rates and handovers by invoking its congestion control and avoidance algorithms. In this paper we propose a new handover notification algorithm in which the mobile host sends an explicit handover notification message to the source host when a handover occurs. Upon receiving the explicit handover notification, the source host enters persist mode, so data transmissions at the source host during the handover are frozen. Numerical results show that the proposed algorithm provides a modest performance improvement over standard TCP, with greater improvements expected when handovers are frequent in WINs.

Dong Chun Lee, Hong-Jin Kim, Jae Young Koh
Backmatter
Metadaten
Titel
Network and Parallel Computing
herausgegeben von
Hai Jin
Guang R. Gao
Zhiwei Xu
Hao Chen
Copyright-Jahr
2004
Verlag
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-30141-7
Print ISBN
978-3-540-23388-6
DOI
https://doi.org/10.1007/b100357