
2007 | Book

On the Move to Meaningful Internet Systems 2007: CoopIS, DOA, ODBASE, GADA, and IS

OTM Confederated International Conferences CoopIS, DOA, ODBASE, GADA, and IS 2007, Vilamoura, Portugal, November 25-30, 2007, Proceedings, Part II

Edited by: Robert Meersman, Zahir Tari

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

GADA 2007 International Conference (Grid Computing, High-Performance and Distributed Applications)

Frontmatter

Keynote

Service Architectures for e-Science Grid Gateways: Opportunities and Challenges

An e-Science Grid Gateway is a portal that allows a scientific collaboration to use the resources of a Grid in a way that frees them from the complex details of Grid software and middleware. The goal of such a gateway is to allow the users access to community data and applications that can be used in the language of their science. Each user has a private data and metadata space, access to data provenance and tools to use or compose experimental workflows that combine standard data analysis, simulation and post-processing tools. In this talk we will describe the underlying Grid service architecture for such an eScience gateway. In this paper we will describe some of the challenges that confront the design of Grid Gateways and we will outline a few new research directions.

Dennis Gannon, Beth Plale, Daniel A. Reed

Data and Storage

Access Control Management in Open Distributed Virtual Repositories and the Grid

The management of access control (AC) policies in open distributed systems (ODS), like the Grid, P2P systems, or Virtual Repositories (databases or data grids), can take two extreme approaches. The first extreme approach is a centralized management of the policy (that still allows a distribution of AC policy enforcement). This approach requires full trust in a central entity that manages the AC policy. The second extreme approach is fully distributed: every ODS participant manages his own AC policy. This approach can limit the functionality of an ODS, making it difficult to provide synergetic functions that could be designed in a way that would not violate the AC policies of autonomous participants. This paper presents a method of AC policy management that allows a partially trusted central entity to maintain global AC policies, and individual participants to maintain their own AC policies. The proposed method resolves conflicts between the global and individual AC policies. The proposed management method has been implemented in an access control system for a Virtual Repository that is used in two European 6th FP projects: eGov-Bus and VIDE. The impact of this access control system on performance has been evaluated, and it has been found that the proposed AC method can be used in practice.
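As a rough illustration of the kind of policy combination described above, the sketch below merges a partially trusted global policy with a participant's own policy, resolving conflicts in favour of the local owner. The rule format, the precedence order and the default-deny fallback are assumptions made for illustration, not the paper's actual design.

```python
# Illustrative only: combine a partially trusted global AC policy with a
# participant's own policy, resolving conflicts in favour of the local owner.
# Rule format and precedence are assumptions, not the paper's design.

def decision(policy, subject, resource, action):
    """Return 'allow', 'deny', or None if the policy does not apply."""
    for rule in policy:
        if (rule["subject"] == subject and rule["resource"] == resource
                and rule["action"] == action):
            return rule["effect"]
    return None

def combined_decision(global_policy, local_policy, subject, resource, action):
    local = decision(local_policy, subject, resource, action)
    if local is not None:           # the autonomous participant always wins
        return local
    glob = decision(global_policy, subject, resource, action)
    return glob if glob is not None else "deny"   # default-deny fallback

global_policy = [{"subject": "alice", "resource": "doc1", "action": "read", "effect": "allow"}]
local_policy  = [{"subject": "alice", "resource": "doc1", "action": "read", "effect": "deny"}]
print(combined_decision(global_policy, local_policy, "alice", "doc1", "read"))  # deny
```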

Adam Wierzbicki, Łukasz Żaczek, Radosław Adamus, Edgar Głowacki
Transforming the Adaptive Irregular Out-of-Core Applications for Hiding Communication and Disk I/O

In adaptive irregular out-of-core applications, communication and mass disk I/O operations occupy a large portion of the overall execution. This paper presents a program transformation scheme to enable overlap of communication, computation and disk I/O in this kind of application. We take programs in the inspector-executor model as a starting point and transform them into a pipelined form. By decomposing the inspector phase and reordering iterations, more overlap opportunities are efficiently utilized. In the experiments, our techniques are applied to two important applications, i.e., a partial differential equation solver and molecular dynamics problems. For these applications, versions employing our techniques are almost 30% faster than inspector-executor versions.
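A minimal sketch of the pipelining idea described above, assuming a simple block decomposition: while the executor works on block i, a background thread runs the inspector (index analysis, communication and out-of-core prefetch) for block i+1. The function names and the threading mechanism are illustrative only.

```python
# Illustrative pipeline of the inspector-executor pattern: overlap the
# inspection of block i+1 with the execution of block i. Details assumed.
import threading

def inspect(block):
    # placeholder for analysing indirection arrays, posting communication,
    # and issuing out-of-core reads for this block
    return [x * 2 for x in block]          # pretend "gathered" data

def execute(prefetched):
    return sum(prefetched)                 # placeholder computation

def pipelined(blocks):
    total = 0
    prefetched = inspect(blocks[0])
    for i in range(len(blocks)):
        nxt, t = {}, None
        if i + 1 < len(blocks):
            t = threading.Thread(target=lambda: nxt.setdefault("d", inspect(blocks[i + 1])))
            t.start()                      # overlap inspection of block i+1 ...
        total += execute(prefetched)       # ... with execution of block i
        if t:
            t.join()
            prefetched = nxt["d"]
    return total

print(pipelined([[1, 2], [3, 4], [5, 6]]))   # 42
```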

Changjun Hu, Guangli Yao, Jue Wang, Jianjiang Li
Adaptive Data Block Placement Based on Deterministic Zones (AdaptiveZ)

The deterministic block distribution method proposed for RAID systems (known as striping) has been a traditional solution for achieving high performance, increased capacity and redundancy all the while allowing the system to be managed as if it were a single device. However, this distribution method requires one to completely change the data layout when adding new storage subsystems, which is a drawback for current applications.

This paper presents AdaptiveZ, an adaptive block placement method based on deterministic zones, which grows dynamically zone-by-zone according to capacity demands. When adapting new storage subsystems, it changes only a fraction of the data layout while preserving a simple management of data due to deterministic placement. AdaptiveZ uses both a mechanism focused on reducing the overhead suffered during the upgrade as well as a heterogeneous data layout for taking advantage of disks with higher capabilities. The evaluation reveals that AdaptiveZ only needs to move a fraction of data blocks to adapt new storage subsystems while delivering an improved performance and a balanced load. The migration scheme used by this approach produces a low overhead within an acceptable time. Finally, it keeps the complexity of the data management at an acceptable level.

J. L. Gonzalez, Toni Cortes
Keyword Based Indexing and Searching over Storage Resource Broker

A keyword based metadata indexing and searching facility for Storage Resource Broker (SRB) is presented here. SRB is a popular data grid based storage system that provides means to store data and associate metadata information with the stored data. The metadata storage system in SRB is modeled on the attribute-value pair representation. This data structure enables SRB to be used as a general purpose data management platform for a variety of application domains. However, the generic representation of metadata storage mechanism also proves to be a limitation for applications that depend on the extensive use of the associated metadata in order to provide customized search and query operations. The presented work addresses this limitation by providing a keyword based indexing system over the metadata stored in the SRB system. The system is tightly coupled with the SRB metadata catalog; thereby ensuring that the keyword indexes are always kept updated to reflect changes in the host SRB system.

Adnan Abid, Asif Jan, Laurent Francioli, Konstantinos Sfyrakis, Felix Schuermann

Networks

eCube: Hypercube Event for Efficient Filtering in Content-Based Routing

Future network environments will be pervasive and distributed over a multitude of devices that are dynamically networked. The data collected by pervasive devices (e.g. traffic data, CO2 values) provide important information for applications that use such contexts actively. Future applications of this type will form a grid over the Internet to offer various services and such a grid requires more selective and precise data dissemination mechanisms based on the content of data. Thus, a smart data/event structure is important. This paper introduces a novel event representation structure, called eCube, for efficient indexing, filtering and matching of events. We show experimental results that demonstrate the powerful multidimensional structure and applicability of eCube over an event broker grid formed in peer-to-peer networks.
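A minimal sketch of the general idea of treating an event as a point in a multidimensional attribute space and a subscription as a range over some of those dimensions; the attribute names and the containment test are illustrative assumptions, not eCube's actual encoding.

```python
# Illustrative only: an event as a point in attribute space, a subscription as
# a hyper-rectangle; matching is containment on every constrained dimension.
def matches(event, subscription):
    return all(lo <= event.get(dim, float("nan")) <= hi
               for dim, (lo, hi) in subscription.items())

event = {"co2": 420.0, "speed": 35.0, "lane": 2}
subscription = {"co2": (400.0, 500.0), "speed": (0.0, 60.0)}
print(matches(event, subscription))   # True
```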

Eiko Yoneki, Jean Bacon
Combining Incomparable Public Session Keys and Certificateless Public Key Cryptography for Securing the Communication Between Grid Participants

Securing the communication between participants in Grid computing environments is an important task, because the participants do not know whether the exchanged information has been modified or intercepted, or whether it is coming from or going to the right target. In this paper, a hybrid approach based on a combination of incomparable public session keys and certificateless public key cryptography for dealing with different threats to the information flow is presented. The properties of the proposed approach in the presence of various threats are discussed.

Elvis Papalilo, Bernd Freisleben

Collaborative Grid Environment and Scientific Grid Applications (Short Papers)

A Service-Oriented Platform for the Enhancement and Effectiveness of the Collaborative Learning Process in Distributed Environments

Modern on-line collaborative learning environments must enable and scale the involvement of an increasingly large number of individual and group participants, who may be geographically distributed and who need to transparently share a huge variety of both software and hardware distributed learning resources. As a result, collaborative learning applications have to meet important non-functional requirements that arise in distributed environments, such as scalability, flexibility, availability, interoperability, and integration of different, heterogeneous, and legacy collaborative learning systems. In this paper, we present a generic platform, called the Collaborative Learning Purpose Library, which is based on flexible fine-grained Web services for the systematic construction of collaborative learning applications that need to meet demanding non-functional requirements. The ultimate aim of this platform is to enhance and improve the on-line collaborative learning experience and outcome in highly distributed environments.

Santi Caballé, Fatos Xhafa, Thanasis Daradoumis
Social Networking to Support Collaboration in Computational Grids

Grids are complex systems that aggregate large amounts of distributed computational resources to perform large-scale simulations and analysis for multiple research groups. In this paper we unveil their social networks: the actors that participate in a Grid and the relationships among those Grid actors. Social networking information can be used as a means to increase awareness and to facilitate collaboration among Grid participants. In practice, we have implemented a social networking tool with which Grid actors can discover partners to collaborate with, potential providers and consumers, and their referral paths. We present and discuss the evaluation of this tool in a user study performed with Grid resource consumers and providers.

Oscar Ardaiz, Isaac Chao, Ramón Sangüesa
A Policy Based Approach to Managing Shared Data in Dynamic Collaborations

This paper presents a policy-based framework for managing shared data among distributed participants in a dynamic collaboration. First, we identify three different types of entities, namely resources, participants and their relations, and the set of policies applicable to them. We then propose an integrated framework to provide a solution for managing shared data in dynamic collaborations. We discuss the implementation of the framework in the context of our storage service provisioning architecture and present the cost of such a framework in comparison to the storage cost.

Surya Nepal, John Zic, Julian Jang
Grid Service Composition in BPEL for Scientific Applications

Grid computing aims to create an accessible virtual supercomputer by integrating distributed computers to form a parallel infrastructure for processing applications. To enable service-oriented Grid computing, the Grid computing architecture was aligned with current Web service technologies, thereby making it possible for Grid applications to be exposed as Web services. The WSRF set of specifications standardized the association of state information with Web services (WS-Resource) while providing interfaces for the management of state data. The Business Process Execution Language (BPEL) is the leading standard for integrating Web services and as such has a natural affinity to the integration of Grid services. In this paper, we share our experience of using BPEL to integrate, create, and manage WS-Resources that implement the factory pattern. To the best of our knowledge, this work is among the handful of approaches that successfully use BPEL for orchestrating WSRF-based services and the only one that includes the discovery and management of instances.

Onyeka Ezenwoye, S. Masoud Sadjadi, Ariel Cary, Michael Robinson

Scheduling

Efficient Management of Grid Resources Using a Bi-level Decision-Making Architecture for “Processable” Bulk Data

The problem of efficiently assigning resources to perform a given bag-of-tasks in a distributed computing environment has been extensively studied by research communities. To develop an efficient resource assignment mechanism, this paper focuses on a particular type of resource-intensive task and presents a bi-level decision-making architecture in a grid computing environment. In the proposed architecture, the higher decision-making module has the responsibility to select a partition of resources for each of the tasks. The lower decision-making module uses an Integer Linear Programming based algorithm to actually assign resources from this selected partition to a particular task from the given set of tasks. This paper analyzes the performance of the proposed architecture under various workload conditions. The architecture can be extended to other types of tasks using the concepts presented.

Imran Ahmad, Shikharesh Majumdar
Towards an Open Grid Marketplace Framework for Resources Trade

A challenge of Grid computing is to provide automated support for the creation and exploitation of virtual organisations (VOs), involving individuals and different autonomous organizations, in which resources are pooled from potentially diverse origins. In the context of the presented work, virtual organizations trade grid resources and services according to economic models in electronic marketplaces. Thus, in this paper we propose GRIMP (Grid Marketplace), a generic framework that provides services to support the spontaneous creation of grid resource markets on demand. We motivate the need for such a framework, and present our design approach as well as the implementation and execution models.

Nejla Amara-Hachmi, Xavier Vilajosana, Ruby Krishnaswamy, Leandro Navarro, Joan Manuel Marques
A Hybrid Algorithm for Scheduling Workflow Applications in Grid Environments (ICPDP)

In this paper, based on a thorough analysis of different policies for DAG scheduling, an improved algorithm, ICPDP (Improved Critical Path using Descendant Prediction), is introduced. The algorithm performs well with respect to the total scheduling time, the schedule length and load balancing. In addition, it provides efficient resource utilization by minimizing the idle time on the processing elements. The algorithm has a quadratic polynomial time complexity. Experimental results are provided to support the performance evaluation of the algorithm and to compare it with other scheduling strategies. The ICPDP algorithm, as well as the other analyzed algorithms, has been integrated in the DIOGENES project and has been tested by using MonAlisa farms and ApMon, a MonAlisa extension.
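For reference, the sketch below shows only the classic critical-path (longest-path) computation over a task DAG that list schedulers of this kind build on; ICPDP's descendant prediction and processor assignment are not reproduced here.

```python
# Classic critical-path (longest-path) computation over a task DAG in
# topological order -- the building block that list schedulers refine.
from graphlib import TopologicalSorter

def critical_path_lengths(cost, edges):
    """cost: task -> weight; edges: (pred, succ) pairs. Returns, for each
    task, the longest path length ending at it (own cost included)."""
    preds = {t: [] for t in cost}
    for u, v in edges:
        preds[v].append(u)
    longest = {}
    for t in TopologicalSorter(preds).static_order():   # predecessors first
        longest[t] = cost[t] + max((longest[p] for p in preds[t]), default=0)
    return longest

cost = {"A": 2, "B": 3, "C": 1, "D": 4}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
print(critical_path_lengths(cost, edges)["D"])   # 9  (critical path A -> B -> D)
```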

Bogdan Simion, Catalin Leordeanu, Florin Pop, Valentin Cristea
Contention-Free Communication Scheduling for Group Communication in Data Parallelism

Group communication significantly influences the performance of data parallel applications. It is often required in two situations: one is array redistribution from phase to phase; the other is array remapping after loop partitioning. Nevertheless, an important factor that influences the efficiency of group communication is often neglected: a larger communication idle time may occur when there is node contention and a difference among message lengths during one particular communication step. This paper develops an efficient scheduling strategy using the compile-time information provided by array subscripts, the array distribution pattern and the array access period. Our strategy not only avoids inter-processor contention, but also minimizes the real communication cost in each communication step. Our experimental results show that our strategy performs better than the traditional implementation of MPI_Alltoallv, alltoall-based scheduling, and greedy scheduling.
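The classic way to avoid node contention in such group communication is a permutation-based schedule in which, at step k, processor i sends to processor (i + k) mod P, so that every step is a perfect permutation. The sketch below shows only that baseline; the message-length balancing described above is not included.

```python
# Classic contention-free all-to-all schedule: in step k, processor i sends to
# (i + k) % P, so no node sends or receives more than one message per step.
def contention_free_schedule(P):
    return [[(i, (i + k) % P) for i in range(P)] for k in range(1, P)]

for step, pairs in enumerate(contention_free_schedule(4), start=1):
    senders = {s for s, _ in pairs}
    receivers = {r for _, r in pairs}
    assert len(senders) == len(receivers) == 4   # each step is a permutation
    print(f"step {step}: {pairs}")
```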

Jue Wang, Changjun Hu, Jianjiang Li
SNMP-Based Monitoring Agents and Heuristic Scheduling for Large-Scale Grids

This paper presents both SNMP-based resource monitoring and heuristic resource scheduling systems targeted at managing large-scale Grids. The approach involves two phases: resource monitoring and resource scheduling. The resource monitoring (and discovery) phase is supported by the SNMP-based Balanced Load Monitoring Agents for Resource Scheduling (SBLOMARS). This resource monitoring and discovery approach differs from current distributed monitoring systems in three main areas. Firstly, it reaches a high level of generality by integrating SNMP technology, thus offering an alternative solution for handling heterogeneous operating platforms. Secondly, it solves the flexibility problem by implementing complex dynamic software structures, which are used to monitor anything from simple personal computers to robust multi-processor systems or clusters with multiple hard disks and storage partitions. Finally, the scalability problem is addressed by distributing the monitoring system into a set of sub-monitoring instances, each specific to one kind of computational resource to monitor (processor, memory, software, network and storage). The resource scheduling phase is supported by the Balanced Load Multi-Constrain Resource Scheduler (BLOMERS). This resource scheduler is implemented based on a Genetic Algorithm, as an alternative to solve the inherent NP-hard problem of resource scheduling in large-scale Grids. We show some graphical and textual snapshots of resource availability reports as well as a scheduling scenario on the Grid5000 platform. We have obtained a scalable scheduler with an extraordinarily well-balanced load across all nodes participating in the Grid.

Edgar Magaña, Laurent Lefevre, Masum Hasan, Joan Serrat
HARC: The Highly-Available Resource Co-allocator

HARC—the Highly-Available Resource Co-allocator—is an open-source system for reserving multiple resources in a coordinated fashion. HARC can handle different types of resource, and has been used to reserve time on supercomputers across a US-wide testbed, together with dedicated lightpaths connecting the machines. At HARC's core is a distributed set of processes called Acceptors, which provide a co-allocation service. HARC functions normally provided that a majority of the Acceptors are working; this replication gives HARC its high availability. The Paxos Commit protocol ensures that consistency across all Acceptors is maintained. This paper gives an overview of HARC, and explains both how it works and how it is used. We show that HARC's design makes it easy for the community to contribute new components for co-allocating different types of resource, while the stability of the overall system is maintained.

Jon MacLaren

Middleware

Assessing a Distributed Market Infrastructure for Economics-Based Service Selection

Service selection is an important issue for market-oriented Grid infrastructures. However, few results have been published on the use and evaluation of market models in deployed prototypes, making it difficult to assess their capabilities. In this paper we study the integration of an extended version of Zero Intelligence Plus (ZIP) agents into a middleware for economics-based selection of Grid services. The advantage of these agents compared to alternatives is their fairly simple messaging protocol and negotiation strategy. By deploying the middleware on several machines and running experiments, we observed that services are proportionally assigned to competing traders, as should be the case in a fair market. Furthermore, by varying the environmental conditions we show that the agents are able to respond to varying environmental constraints by adapting their market prices.

René Brunner, Isaac Chao, Pablo Chacin, Felix Freitag, Leandro Navarro, Oscar Ardaiz, Liviu Joita, Omer F. Rana
Grid Problem Solving Environment for Stereology Based Modeling

The paper is concerned with the task of building a problem solving environment (PSE) for stereology-based modeling applications. Such an application involves tools for model creation, stereology-based model verification and model visualization. The application domain has complex and demanding technological requirements, including computationally intensive processing, operating platform heterogeneity and support for scientific collaboration. The natural solution is to take advantage of existing grid infrastructure to tap the computational resources required by the application domain. As the existing scientific grid production infrastructures do not satisfy all the requirements, we had to undertake the challenge of integrating multiple middleware solutions to enable the interoperability required by the PSE. Our results showcase the maturity of available grid solutions, as they can be adapted to support complex and platform-dependent tasks.

Július Parulek, Marek Ciglan, Branislav Šimo, Miloš Šrámek, Ladislav Hluchý, Ivan Zahradník
Managing Dynamic Virtual Organizations to Get Effective Cooperation in Collaborative Grid Environments

This paper presents how to manage Virtual Organizations to enable efficient collaboration and/or cooperation as the result of a flexible and parametrical model. The CAM (Collaborative/Cooperative Awareness Management) model promotes collaboration around resource-sharing infrastructures, endorsing interaction by means of a set of rules. This model focuses on responding to specific demanding circumstances at a given moment, while optimizing resource communication and behavioural agility towards a common goal: the establishment of collaborative dynamic virtual organizations. This paper also describes how CAM works in some specific examples and scenarios, and how the CAM Rules-Based Management Application (based on Web Services and named WS-CAM) has been designed and validated to encourage resources to be involved in collaborative performances, efficiently tackling demanding situations without hindering the individual purposes of each of these resources.

Pilar Herrero, José Luis Bosque, Manuel Salvadores, María S. Pérez

Data Analysis

Sidera: A Cluster-Based Server for Online Analytical Processing

Online Analytical Processing (OLAP) has become a primary component of today's pervasive Decision Support systems. The rich multi-dimensional analysis that OLAP provides allows corporate decision makers to more fully assess and evaluate organizational progress than ever before. However, as the data repositories upon which OLAP is based become larger and larger, single CPU OLAP servers are often stretched to, or even beyond, their limits. In this paper, we present a comprehensive architectural model for a fully parallelized OLAP server. Our multi-node platform actually consists of a series of largely independent sibling servers that are “glued” together with a lightweight MPI-based Parallel Service Interface (PSI). Physically, we target the commodity-oriented, “shared nothing” Linux cluster, a model that provides an extremely cost-effective alternative to the “shared everything” commercial platforms often used in high-end database environments. Experimental results demonstrate both the viability and robustness of the design.

Todd Eavis, George Dimitrov, Ivan Dimitrov, David Cueva, Alex Lopez, Ahmad Taleb
Parallel Implementation of a Neural Net Training Application in a Heterogeneous Grid Environment

The emergence of Grid technology provides an unrivalled opportunity for large-scale high performance computing applications in several scientific communities, for instance high-energy physics, astrophysics, meteorology, and computational medicine. One of the high-energy applications, suitable for execution in a Grid environment due to its high requirements in data processing, is the implementation of an artificial neural net for searching for the Higgs boson. Therefore, the aim of this work is to parallelize and evaluate the performance and the scalability of the kernel of a training algorithm of a multilayer perceptron artificial neural net for analysing data from the Large Electron Positron Collider at CERN. To carry out the training of the net there is a wide variety of iterative methods to converge towards the optimum values of the weights of the net. In our case the hybrid linear-BFGS method is used, which is based on the criterion of gradient descent. For the training of the net, a first parallel implementation based on a master-slave architecture was developed. In this scenario the slave nodes process the patterns and give an output with which the error value is calculated. The node acting as master collects the partial results, sums them and, with this information, generates a linear equation system, which it then solves, giving rise to the new weights that are distributed among the slaves for the next iteration. This first parallelization does not confer great scalability and creates a bottleneck as the size of the neural net increases, since the master process saturates when trying to solve this large system of equations. For this reason a second parallelization is needed, where the slave nodes solve the system of equations in a distributed way, avoiding the above bottleneck. This solution has been developed and is evaluated in this work. The work has been developed using the MPI message passing library in its MPICH-G2 distribution in a heterogeneous Grid environment. In the performance evaluation, the aim is to check whether the parallel algorithm is suitable and scalable when executed in a heterogeneous Grid environment. The results obtained in different Grid environments are also compared with the result obtained on a shared-memory supercomputer.

Rafael Menéndez de Llano, José Luis Bosque

Scheduling and Management (Short Papers)

Generalized Load Sharing for Distributed Operating Systems

In this paper we propose a method for job migration policies that considers effective usage of global memory in addition to CPU load sharing in distributed systems. The objective is to reduce the number of page faults caused by unbalanced memory allocation among jobs on distributed nodes, which improves the overall performance of a distributed system. The proposed method, which uses a high-performance and high-throughput approach with a remote execution strategy, performs best for both CPU-bound and memory-bound jobs in homogeneous as well as heterogeneous networks in a distributed system.

A. Satheesh, S. Bama
An Application-Level Service Control Mechanism for QoS-Based Grid Scheduling

In market-based service-oriented grids, scheduling service execution should account for both user- and provider-dependent Quality-of-Service (QoS) requirements. In this scenario we propose a mechanism to allow for flexible provision of grid services, i.e. to allow providers to dynamically adapt the execution of services according to both the changing conditions of the environment in which they operate and the requirements of service users. The mechanism is based on handling program continuations to provide application-level primitives for controlling the suspension and resumption of service execution at run-time. These primitives can also be accessed by consumer programs as web services. This approach makes the proposed control mechanism a basic programming layer on which to build a flexible and easily programmable middleware to experiment with different scheduling policies in service-oriented scenarios.
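As a rough analogy for the continuation-based primitives described above, Python generators give a service explicit suspension points from which a provider-side controller can suspend and later resume execution. The API shape below is purely illustrative.

```python
# Generators as a stand-in for continuations: the service yields at explicit
# checkpoints, and the controller can park it and resume from exactly there.
def long_running_service(data):
    partial = 0
    for i, item in enumerate(data):
        partial += item
        yield ("checkpoint", i, partial)     # suspension point
    yield ("done", len(data), partial)

svc = long_running_service([5, 10, 20])
print(next(svc))        # ('checkpoint', 0, 5)   -- runs until first checkpoint
print(next(svc))        # ('checkpoint', 1, 15)  -- resume continues in place
# ... the controller could park `svc` here and resume it much later ...
print(next(svc))        # ('checkpoint', 2, 35)
print(next(svc))        # ('done', 3, 35)
```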

Claudia Di Napoli, Maurizio Giordano
Fine Grained Access Control with Trust and Reputation Management for Globus

We propose an integrated architecture, extending a framework for fine grained access control of Grid computational services, with an inference engine managing reputation and trust management credentials. Also, we present the implementation of the proposed architecture, with preliminary performance figures.

M. Colombo, F. Martinelli, P. Mori, M. Petrocchi, A. Vaccarelli
Vega: A Service-Oriented Grid Workflow Management System

Because of the nature of the Grid, Grid application systems built on traditional software development techniques can only interoperate with Grid services in an ad hoc manner that requires substantial human intervention. In this paper, we introduce Vega, a pure service-oriented Grid workflow system which consists of a set of loosely coupled services cooperating with each other to solve problems. In Vega, the execution flow of its services is isolated from their interactions, and these interactions are explicitly modelled and can be dynamically interpreted at run-time.

R. Tolosana-Calasanz, J. A. Bañares, P. Álvarez, J. Ezpeleta
GADA 2007 PC Co-chairs’ Message

This volume contains the papers presented at GADA 2007, the International Symposium on Grid Computing, High-Performance and Distributed Applications. The purpose of the GADA series of conferences, held in the framework of the OnTheMove Federated Conferences (OTM), is to bring together researchers, developers, professionals and students in order to advance research and development in the areas of grid computing and distributed systems and applications. This year's conference was held in Vilamoura, Algarve, Portugal, during November 29–30.

Pilar Herrero, Daniel Katz, María S. Pérez, Domenico Talia

Information Security (IS) 2007 International Symposium

Frontmatter

Keynote

Cryptography: Past, Present and Future

For most of the era of electronic communication, encryption, the technique of protecting communications by scrambling them, was largely a government preserve. Before modern electronics, encryption was too expensive for widespread business use. Most development was secret, carried out by the government, and reserved for government use. Cryptography was treated as a weapon under the export-control laws. Encryption systems could not be exported for commercial purposes, even to close allies and trading partners.

During the 1980s and 1990s, cryptography emerged from its former obscurity and became an important aspect of commercial communications. The rise of the personal computer and the Internet changed encryption from an exotic military-only technology to one critical for Internet commerce. Despite this, governments, especially that of the U.S., were slow to accept the new reality. Industry efforts to develop and use cryptography were thwarted by export-control regulations, which emerged as the dominant government influence on the development and deployment of encryption technology. By the late 1990s, the U.S. government, which had made repeated attempts to continue its domination of the field, held a stance that was barely tenable in the rest of the world. Influences ranging from the rise of open-source software to European indignation at evidence that the U.S. was spying on their communications came together to force a change.

The new regulations distinguish government customers from commercial ones, and retail from customized technology. As a result, cryptography can now be exported with minimal government interference for most commercial and many government applications, to all countries except those regarded as supporters of terrorism.

Whitfield Diffie

Access Control and Authentication

E-Passport: Cracking Basic Access Control Keys

Since the introduction of the Machine Readable Travel Document (MRTD), also known as the e-passport, for human identification at border control, debates have been raised about security and privacy concerns. In this paper, we present the first hardware implementation for cracking Basic Access Control (BAC) keys of the e-passport issuing schemes in Germany and the Netherlands. Our implementation was designed for the reprogrammable key search machine COPACOBANA and achieves a key search speed of 2^28 BAC keys per second. This is a speed-up factor of more than 200 compared to previous results and allows for a runtime in the order of seconds in realistic scenarios.
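A back-of-the-envelope calculation at the quoted search rate: at 2^28 keys per second, a search space of 2^b keys takes 2^(b-28) seconds. The entropy values below are illustrative assumptions only; the effective BAC key entropy depends on how a country encodes the passport number, birth date and expiry date in the MRZ.

```python
# Runtime at the quoted 2^28 BAC keys per second; entropy values are
# illustrative assumptions, not figures from the paper.
rate = 2 ** 28                     # keys per second (from the abstract)
for bits in (30, 35, 40):          # assumed effective key-space sizes
    seconds = 2 ** bits / rate
    print(f"{bits}-bit search space: ~{seconds:.0f} s")
# 30-bit: ~4 s, 35-bit: ~128 s, 40-bit: ~4096 s
```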

Yifei Liu, Timo Kasper, Kerstin Lemke-Rust, Christof Paar
Managing Risks in RBAC Employed Distributed Environments

Role Based Access Control (RBAC) has been introduced in an effort to facilitate authorization in database systems. It introduces roles as a new layer in between users and permissions. This not only provides a well maintained access granting mechanism, but also alleviates the burden of managing multiple users. While providing comprehensive access control, current RBAC models and systems do not take into consideration the possible risks that can be incurred through role misuse. In distributed environments a large number of users is a very common case, and a considerable number of them are first-time users. This fact magnifies the need to measure risk before and after granting access. We investigate the means of managing risks in RBAC-employed distributed environments and introduce a novel probability-based risk model. For each role, we use information about user credentials, current user queries, the role history log and expected utility to calculate the overall risk. By executing data mining on query logs, our scheme generates normal query clusters. It then assigns different risk levels to individual queries, depending on how far they are from the normal clusters. We employ three types of granularity to represent queries in our architecture. We present experimental results on real data sets and compare the performance of the three granularity levels.

Ebru Celikel, Murat Kantarcioglu, Bhavani Thuraisingham, Elisa Bertino
STARBAC: Spatiotemporal Role Based Access Control

Role Based Access Control (RBAC) has emerged as an important access control paradigm in computer security. However, the access decisions that can be taken in a system implementing RBAC do not take into account many relevant factors such as user location, system location and system time. We propose a spatiotemporal RBAC model (STARBAC) which reasons in the spatial and temporal domains in tandem. STARBAC control commands enable or disable roles based on spatiotemporal conditions. The new model is able to specify a number of different types of important access requirements not expressible in existing variations of the RBAC model such as GEO-RBAC and TRBAC. The specification language we present here is powerful enough to allow logical connectives like AND (∧) and OR (∨) over spatiotemporal conditions.
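A minimal sketch of evaluating a role-enabling condition built from spatial and temporal predicates combined with AND and OR, as mentioned above; the predicate names and rule syntax are assumptions, not STARBAC's actual specification language.

```python
# Illustrative evaluation of a spatiotemporal enabling condition with the
# logical connectives AND and OR. Predicates and syntax are assumptions.
from datetime import time

def in_zone(ctx, zone):            # spatial predicate
    return ctx["location"] == zone

def in_hours(ctx, start, end):     # temporal predicate
    return start <= ctx["time"] <= end

def AND(*conds):
    return lambda ctx: all(c(ctx) for c in conds)

def OR(*conds):
    return lambda ctx: any(c(ctx) for c in conds)

# enable role "doctor_on_duty" if (in hospital AND working hours) OR in the ER
enable_doctor = OR(
    AND(lambda c: in_zone(c, "hospital"), lambda c: in_hours(c, time(8), time(20))),
    lambda c: in_zone(c, "emergency_room"),
)
print(enable_doctor({"location": "hospital", "time": time(14, 30)}))   # True
print(enable_doctor({"location": "home", "time": time(14, 30)}))       # False
```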

Subhendu Aich, Shamik Sural, A. K. Majumdar
Authentication Architecture for eHealth Professionals

This paper describes the design and implementation of a PKI-based eHealth authentication architecture. This architecture was developed to authenticate eHealth Professionals accessing the RTS (Rede Telemática da Saúde), a regional platform for sharing clinical data among a set of affiliated health institutions. The architecture had to accommodate specific RTS requirements, namely the security of Professionals' credentials, the mobility of Professionals, and the scalability to accommodate new health institutions. The adopted solution uses short-lived certificates and cross-certification agreements between the RTS and eHealth institutions for authenticating Professionals accessing the RTS. These certificates also carry the Professional's role at their home institution for role-based authorization. Trust agreements between health institutions and the RTS are necessary in order to make the certificates recognized by the RTS. The implementation was based on Windows technology and, as a general policy, we avoided the development of specific code; instead, we used and configured available technology and services.

Helder Gomes, João Paulo Cunha, André Zúquete

Intrusion Detection

On RSN-Oriented Wireless Intrusion Detection

Robust Security Network (RSN), epitomised by the IEEE 802.11i amendment, promises what it stands for: robust and effective protection for mission-critical Wireless Local Area Networks (WLAN). However, despite the fact that 802.11i overhauls IEEE 802.11 security, several weaknesses still remain. In this context, the complementary assistance of Wireless Intrusion Detection Systems (WIDS) in dealing with existing and new threats is greatly appreciated. In this paper we focus on 802.11i intrusion detection, discuss what is missing, what the possibilities are, and experimentally explore ways to make them intertwine and co-work. Our experiments, employing well-known open source attack tools and custom-made software, reveal that most 802.11i-specific attacks can be effectively recognised, either directly or indirectly. We also consider and discuss Distributed Wireless Intrusion Detection (DIDS), which seems to fit best in RSN networks.

Alexandros Tsakountakis, Georgios Kambourakis, Stefanos Gritzalis
A Hybrid, Stateful and Cross-Protocol Intrusion Detection System for Converged Applications

Although sharing the same physical infrastructure with data networks makes convergence attractive, it also makes Voice over Internet Protocol (VoIP) networks and applications inherit all the security weaknesses of the IP protocol. In addition, VoIP converged networks come with their own set of security concerns. Voice traffic on converged networks is packet switched and vulnerable to interception with the same techniques used to sniff other traffic on a LAN or WAN. Denial of Service (DoS) attacks are one of the most critical threats to VoIP due to the disruption of service and loss of revenue they cause. VoIP systems are expected to provide the same level of security provided by traditional PSTN networks, even though more functionality and intelligence are distributed to the endpoints and more protocols are involved to provide better service. All these factors make new designs and techniques in intrusion detection highly needed. In this paper we propose a novel host-based intrusion detection architecture for converged VoIP applications. Our architecture uses the Communicating Extended Finite State Machines formal model to provide both stateful and cross-protocol detection. In addition, it combines signature-based and specification-based detection techniques, alongside combining protocol syntax and semantics anomaly detection. A variety of attacks are implemented on our test bed, and the intrusion detection prototype shows promising efficiency. The accuracy of the prototype detection is discussed and analyzed.

Bazara I. A. Barry, H. Anthony Chan
Toward Sound-Assisted Intrusion Detection Systems

Network intrusion detection has generally been dealt with using sophisticated software and statistical analysis, although sometimes it has to be done by administrators, either by detecting the intruders in real time or by reviewing network logs, making this a tedious and time-consuming task. To support this, intrusion detection analysis has been carried out using visual, auditory or tactile sensory information in computer interfaces. However, little is known about how best to integrate the sensory channels for analyzing intrusion detection alarms. In the past, we proposed a set of ideas outlining the benefits of enhancing intrusion detection alarms with multimodal interfaces. In this paper, we present a simplified sound-assisted attack mitigation system enhanced with auditory channels. Results indicate that the resulting intrusion detection system effectively generates distinctive sounds for a series of simple attack scenarios consisting of denial-of-service and port scanning.

Lei Qi, Miguel Vargas Martin, Bill Kapralos, Mark Green, Miguel García-Ruiz

System and Services Security

End-to-End Header Protection in Signed S/MIME

S/MIME has been widely used to provide end-to-end authentication, integrity and non-repudiation. S/MIME has the significant drawback that headers are unauthenticated. DKIM protects specified headers, but only between the sending server and the receiver. These shortcomings lead to possible impersonation attacks and profiling of the email communication, and encourage spam and phishing activities. In this paper we propose an approach to extend S/MIME to support end-to-end integrity of email headers. This approach is fully compatible with S/MIME. Under some reasonable assumptions our approach can help reduce spam efficiently.

Lijun Liao, Jörg Schwenk
Estimation of Behavior of Scanners Based on ISDAS Distributed Sensors

Given independent multiple access logs, we develop a mathematical model to identify the number of malicious hosts in the current Internet. In our model, the number of malicious hosts is formalized as a function taking two inputs, namely the duration of observation and the number of sensors. Under the assumption that malicious hosts with statically assigned global addresses perform random port scans to independent sensors uniformly distributed over the address space, our model gives the asymptotic number of malicious source addresses in two ways. Firstly, it gives the cumulative number of unique source addresses in terms of the duration of observation. Secondly, it estimates the cumulative number of unique source addresses in terms of the number of sensors.

To evaluate the proposed method, we apply the mathematical model to actual data packets observed by ISDAS distributed sensors over a one-year duration from September 2004, and check the accuracy of identification of the number of malicious hosts.
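The following is a simple occupancy-style model of the kind of saturating curve the abstract describes, stated purely as an assumed illustration rather than the paper's formula: if M scanning hosts each hit a given sensor with probability p per day, then pooling s sensors raises the per-day hit probability to 1 − (1 − p)^s, and the expected number of distinct sources observed after t days is M(1 − (1 − (1 − (1 − p)^s))^t).

```python
# Assumed occupancy-style model (not the paper's formula): expected number of
# distinct malicious sources seen after `days` of observation with `sensors`
# pooled sensors, if each of M hosts hits one sensor w.p. p per day.
def expected_unique_sources(M, p, days, sensors=1):
    p_any = 1 - (1 - p) ** sensors           # hit at least one sensor per day
    return M * (1 - (1 - p_any) ** days)     # saturates towards M

print(round(expected_unique_sources(100_000, 0.001, 365)))       # one sensor, one year
print(round(expected_unique_sources(100_000, 0.001, 365, 16)))   # sixteen sensors
```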

Hiroaki Kikuchi, Masato Terada, Naoya Fukuno, Norihisa Doi
A Multi-core Security Architecture Based on EFI

This paper presents a unique multi-core security architecture based on EFI. This architecture combines secure EFI environment with insecure OS so that it supports secure and reliable bootstrap, hardware partition, encryption service, as well as real-time security monitoring and inspection. With this architecture, secure EFI environment provides users with a management console to authenticate, monitor and audit insecure OS. Here, an insecure OS is a general purpose OS such as Linux or Windows in which a user can perform ordinary jobs without obvious limitation and performance degradation. This architecture also has a unique capability to protect authentication rules and secure information such as encrypted data even if the security ability of an OS is compromised. A prototype was designed and implemented. Experiment and test results show great performance merits for this new architecture.

Xizhe Zhang, Yong Xie, Xuejia Lai, Shensheng Zhang, Zijian Deng

Network Security

Intelligent Home Network Authentication: Home Device Authentication Using Device Certification

The intelligent home network environment is a pervasive environment in which invisible computers, linked to each other through the network, are always available to the user. As home network services become popular, interest in home network security is growing. Most work on home network security considers user authentication and authorization, while home device authentication has received almost no attention. In this paper, we describe home device authentication, which is a basic and essential element of home network security. We propose home device authentication, the registration of home device certificates, and a method for issuing home device certificates. Our home device certificate profile is based on the X.509v3 certificate. Our device authentication concept can offer home network service users both convenience and security.

Deok-Gyu Lee, Yun-kyung Lee, Jong-wook Han, Jong Hyuk Park, Im-Yeong Lee
Bayesian Analysis of Secure P2P Sharing Protocols

Ad hoc and peer-to-peer (P2P) computing paradigms pose a number of security challenges. The deployment of classic security protocols to provide services such as node authentication, content integrity or access control presents several difficulties, most of them due to the decentralized nature of these environments and the lack of central authorities. Even though some solutions have already been proposed, a usual problem is how to formally reason about their security properties. In this work, we show how Game Theory, particularly Bayesian games, can be a useful tool for analyzing a P2P security scheme in a formal manner. We illustrate our approach with a secure content distribution protocol, showing how nodes can dynamically adapt their strategies to highly transient communities. In our model, some security aspects rest on the formal proof of the robustness of the distribution protocol, while other properties stem from notions such as rationality, cooperative security, beliefs, or best-response strategies.

Esther Palomar, Almudena Alcaide, Juan M. Estevez-Tapiador, Julio C. Hernandez-Castro
Network Coding Protocols for Secret Key Distribution

Recent contributions have uncovered the potential of network coding, i.e. algebraic mixing of multiple information flows in a network, to provide enhanced security in packet oriented wireless networks. We focus on exploiting network coding for secret key distribution in a sensor network with a mobile node. Our main contribution is a set of extensions for a simple XOR based scheme, which is shown to enable pairwise keys, cluster keys, key revocation and mobile node authentication, while providing an extra line of defense with respect to attacks on the mobile node. Performance evaluation in terms of security metrics and resource utilization is provided, as well as a basic implementation of the proposed scheme. We deem this class of network coding protocols to be particularly well suited for highly constrained dynamic systems such as wireless sensor networks.
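The basic XOR trick that such schemes build on is easy to state: a mobile node that knows two pre-loaded keys can broadcast their XOR, and a sensor holding either key recovers the other, while the coded packet alone reveals neither. The sketch below shows only this building block, not the paper's full protocol (pairwise and cluster keys, revocation, and mobile-node authentication are omitted).

```python
# The basic XOR building block behind XOR-based key distribution.
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k_a = os.urandom(16)               # pre-loaded in sensor node A
k_b = os.urandom(16)               # pre-loaded in sensor node B
coded = xor(k_a, k_b)              # what the mobile node broadcasts

# Node A, which already holds k_a, recovers k_b (and vice versa); an
# eavesdropper who sees only the coded packet learns neither key.
assert xor(coded, k_a) == k_b
assert xor(coded, k_b) == k_a
print("pairwise key established")
```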

Paulo F. Oliveira, João Barros
3-Party Approach for Fast Handover in EAP-Based Wireless Networks

In this paper we present a solution for reducing the time spent on providing network access in mobile networks which involve an authentication process based on the Extensible Authentication Protocol. The goal is to provide fast handover and smooth transition by reducing the impact of authentication processes when the mobile user changes authenticator. We propose and describe an architecture based on a secure 3-party key distribution protocol which reduces the number of round trips during the authentication phase, and verify its security properties with a formal tool.

Rafa Marin, Pedro J. Fernandez, Antonio F. Gomez

Malicious Code and Code Security

SWorD – A Simple Worm Detection Scheme

Detection of fast-spreading Internet worms is a problem for which no adequate defenses exist. In this paper we present a Simple Worm Detection scheme (SWorD). SWorD is designed as a statistical detection method for detecting and automatically filtering fast-spreading TCP-based worms. SWorD is a simple two-tier counting algorithm designed to be deployed on the network edge. The first tier is a lightweight traffic filter while the second tier is more selective and rarely invoked. We present results using network traces from both a small and a large network to demonstrate SWorD's performance. Our results show that SWorD accurately detects over 75% of all infected hosts within six seconds, making it an attractive solution for the worm detection problem.
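A toy illustration of a two-tier counting detector at the network edge, in the spirit of the scheme described above: a cheap first tier counts distinct outbound destinations per host in a window, and only hosts above a threshold reach a more selective second tier. Thresholds and the second-tier test are illustrative assumptions, not SWorD's actual parameters.

```python
# Toy two-tier counter: first tier is a cheap per-host destination counter,
# second tier is a more selective (and rarely invoked) check. Illustrative only.
from collections import defaultdict

FIRST_TIER_THRESHOLD = 3           # distinct destinations per window (assumed)

def first_tier(flows):
    """flows: iterable of (src, dst) pairs seen in one time window."""
    dests = defaultdict(set)
    for src, dst in flows:
        dests[src].add(dst)
    return [s for s, d in dests.items() if len(d) > FIRST_TIER_THRESHOLD]

def second_tier(suspect, flows):
    # placeholder for a more selective check (e.g. failed-connection ratio)
    attempts = {dst for src, dst in flows if src == suspect}
    return len(attempts) > 2 * FIRST_TIER_THRESHOLD

window = [("10.0.0.5", f"192.168.1.{i}") for i in range(20)] + [("10.0.0.9", "192.168.1.1")]
suspects = first_tier(window)
print([s for s in suspects if second_tier(s, window)])   # ['10.0.0.5']
```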

Matthew Dunlop, Carrie Gates, Cynthia Wong, Chenxi Wang
Prevention of Cross-Site Scripting Attacks on Current Web Applications

Security is becoming one of the major concerns for web applications and other Internet-based services, which are becoming pervasive in all kinds of business models and organizations. Web applications must therefore include, in addition to the expected value offered to their users, reliable mechanisms to ensure their security. In this paper, we focus on the specific problem of preventing cross-site scripting attacks against web applications. We present a study of this kind of attack, and survey current approaches for their prevention. The advantages and limitations of each proposal are discussed, and an alternative solution is introduced. Our proposal is based on the use of X.509 certificates, and XACML for the expression of authorization policies. By using our solution, developers and/or administrators of a given web application can specifically express its security requirements from the server side, and require the proper enforcement of such requirements on a compliant client. This strategy is seamlessly integrated into generic web applications by relying on SSL and secure redirect calls.

Joaquin Garcia-Alfaro, Guillermo Navarro-Arribas
Compiler Assisted Elliptic Curve Cryptography

Although cryptographic software implementation is often performed by expert programmers, the range of performance- and security-driven options, as well as more mundane software engineering issues, still makes it a challenge. The use of domain-specific language and compiler techniques to assist in the description and optimisation of cryptographic software is an interesting research challenge. Our results, which focus on Elliptic Curve Cryptography (ECC), show that a suitable language allows description of ECC-based software in a manner close to the original mathematics; the corresponding compiler allows automatic production of an executable whose performance is competitive with that of a hand-optimised implementation. Our work is set within the context of CACE, an ongoing EU-funded project on this general topic.

M. Barbosa, A. Moss, D. Page

Trust and Information Management

Trust Management Model and Architecture for Context-Aware Service Platforms

The entities participating in a context-aware service platform need to establish and manage trust relationships in order to assert different trust aspects including identity provisioning, privacy enforcement, and context information provisioning. Current trust management models address these trust aspects individually, when in fact they are dependent on each other. In this paper we identify and analyze the trust relationships in a context-aware service platform and propose an integrated trust management model that supports quantification of trust for different trust aspects. Our model addresses a set of trust aspects that is relevant for our target context-aware service platform and is extensible with other trust aspects. We propose to calculate a resulting trust value for context-aware services, which considers the dependencies between the different trust aspects, and aims to support users in the selection of more trustworthy services. In this calculation we target two types of user goals: one with high priority on privacy enforcement (privacy concerned) and one with high priority on service adaptation (service concerned). Based on our trust model we have designed a distributed trust management architecture and implemented a proof-of-concept prototype.
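One simple way to picture the kind of goal-dependent aggregation described above is a weighted combination of per-aspect trust values with a crude dependency cap; the weights, aspect names and cap rule below are illustrative assumptions, not the paper's model.

```python
# Illustrative goal-dependent aggregation of per-aspect trust values.
# Weights, aspect names and the dependency cap are assumptions only.
ASPECTS = ("identity_provisioning", "privacy_enforcement", "context_provisioning")

WEIGHTS = {
    "privacy_concerned": {"identity_provisioning": 0.2, "privacy_enforcement": 0.6,
                          "context_provisioning": 0.2},
    "service_concerned": {"identity_provisioning": 0.2, "privacy_enforcement": 0.2,
                          "context_provisioning": 0.6},
}

def service_trust(aspect_trust, goal):
    base = sum(WEIGHTS[goal][a] * aspect_trust[a] for a in ASPECTS)
    # crude dependency rule: no service is trusted more than the identity
    # provisioning it relies on, so cap the aggregated value accordingly
    return min(base, aspect_trust["identity_provisioning"])

t = {"identity_provisioning": 0.9, "privacy_enforcement": 0.4, "context_provisioning": 0.8}
print(round(service_trust(t, "privacy_concerned"), 2))   # 0.58
print(round(service_trust(t, "service_concerned"), 2))   # 0.74
```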

Ricardo Neisse, Maarten Wegdam, Marten van Sinderen, Gabriele Lenzini
Mobile Agent Protection in E-Business Application: A Dynamic Adaptability Based Approach

The applications of mobile agent technology are various and include electronic commerce, personal assistance, parallel processing ... The use of the mobile agent paradigm provides several advantages. Unfortunately, it has also introduced some problems, and security represents an important issue. Current research efforts in the area of mobile agent security follow two aspects: (i) protection of the hosts from malicious mobile agents, and (ii) protection of the mobile agent from malevolent hosts. This paper focuses on the second point. It deals with the protection of the mobile agent from eavesdropping attacks. The proposed approach is based on a dynamic adaptability policy supported by a reflexive architecture. The idea relies on the fact that the mobile agent behaves differently and in an unforeseeable manner during its life cycle. This ability complicates analysis attempts and thus protects it. In order to show the feasibility of the proposed security strategy, we illustrate it through an e-business application, implemented in the Java language using the Jade platform.

Salima Hacini, Zizette Boufaïda, Haoua Cheribi
Business Oriented Information Security Management – A Layered Approach

Information Security Management has become a top management priority due to a rapidly increasing economic dependency on information and its underlying information and communication technologies. While several efforts have been undertaken to set up physical, technical and organizational concepts to secure the information infrastructure, economic aspects have been widely neglected despite increasing management interest. This paper presents a layered model for managing information security with a strong economic focus by introducing a comprehensive concept which specifically links business and information security goals.

Philipp Klempt, Hannes Schmidpeter, Sebastian Sowa, Lampros Tsinas
IS 2007 PC Co-chairs’ Message

On behalf of the Program Committee of the 2nd International Symposium on Information Security (IS 2007), it was our great pleasure to welcome the participants to IS 2007, held in conjunction with OnTheMove Federated Conferences (OTM 2007), during November 25-30, 2007, in Vilamoura, Portugal. In recent years, significant advances in information security have been made throughout the world. The objective of the symposium was to promote information security related research and development activities and to encourage communication between researchers and engineers throughout the world in this area.

Mário M. Freire, Simão Melo de Sousa, Vitor Santos, Jong Hyuk Park
Backmatter
Metadata
Title
On the Move to Meaningful Internet Systems 2007: CoopIS, DOA, ODBASE, GADA, and IS
Edited by
Robert Meersman
Zahir Tari
Copyright year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-76843-2
Print ISBN
978-3-540-76835-7
DOI
https://doi.org/10.1007/978-3-540-76843-2
