
About this Book

This volume constitutes the refereed proceedings of two workshops: the International Cross-Domain Conference and Workshop on Availability, Reliability and Security, CD-ARES 2014, and the 4th International Workshop on Security and Cognitive Informatics for Homeland Defense, SeCIHD 2014, co-located with the International Conference on Availability, Reliability and Security, ARES 2014, held in Fribourg, Switzerland, in September 2014. The 23 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers deal with knowledge management, software security, mobile and social computing, enterprise information systems, homeland security and information processing.



Cross-Domain Conference and Workshop on Multidisciplinary Research and Practice for Information Systems (CD-ARES 2014)

Knowledge Management

Argumentation-Based Group Decision Support for Collectivist Communities

In collectivist communities, decisions are taken by groups of people who prefer to consider the opinions of their in-group. For them it is important to reach group consensus by focusing on the group's preferences and goals. Such decision processes can be supported by multi-criteria decision analysis, which identifies sets of objectives, representing the subjective values used in decision making, to better generate recommendations. Recently, several attempts have been made to explain and suggest alternatives in decision-making problems by using arguments. Argumentation theory is the process of bringing together arguments so that conclusions can be justified and explained. Each potential decision usually has arguments for or against it, of various strengths. For collectivist communities, the non-monotonicity of argumentation theory is useful as it supports an adaptive decision-making style. The fact that the opinions of group members can be evaluated and replaced, if they are found lacking via a group opinion strategy, fits well with collectivist decision-making. This paper proposes a framework that allows a group of users, belonging to a collectivist and mostly rural community, to share their opinions when making decisions such as buying goods in bulk, by incorporating their cultural beliefs in the system design.
Marijke Coetzee

A Knowledge Integration Approach for Safety-Critical Software Development and Operation Based on the Method Architecture

It is necessary to integrate the practical bodies of knowledge on software development and operation in order to deploy development and operation methods for assuring safety. In this paper, an approach based on the method architecture is proposed to develop a knowledge integration method for describing various software-related bodies of knowledge and the safety case for assuring software life cycle and operation processes.
Shuichiro Yamamoto

Metrics-Based Incremental Determinization of Finite Automata

Some application domains, including monitoring of active systems in artificial intelligence and model-based mutation testing in software engineering, require determinization of finite automata to be performed incrementally. To this end, an algorithm called Incremental Subset Construction (ISC) was proposed a few years ago. However, this algorithm was recently discovered to be incorrect on some problem instances. The incorrect behavior of ISC originates when the redirection of a transition causes a portion of the automaton to be disconnected from the initial state. This misbehavior is disturbing in two ways: portions of the resulting automaton are disconnected and, as such, useless; moreover, a considerable amount of computation is possibly wasted on processing these disconnected parts. To make ISC sound, a metrics-based technique is proposed in this paper, in which the distance between states is exploited to guarantee the connectedness of the automaton. Experimental results show that, besides being effective, the proposed technique is also efficient.
Sergiu I. Balan, Gianfranco Lamperti, Michele Scandale
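
The incremental algorithm (ISC) is specific to the paper, but the classical batch subset construction it extends can be sketched as follows; the automaton encoding (a dictionary mapping state/symbol pairs to successor sets) is an illustrative assumption, not the authors' representation:

```python
from collections import deque

def determinize(alphabet, delta, initial):
    """Classical subset construction: build the reachable DFA states
    (frozensets of NFA states) and transitions from an NFA.
    delta maps (state, symbol) -> set of successor states."""
    start = frozenset([initial])
    dfa_delta = {}
    queue = deque([start])
    seen = {start}
    while queue:
        subset = queue.popleft()
        for a in alphabet:
            target = frozenset(s for q in subset for s in delta.get((q, a), ()))
            if not target:
                continue
            dfa_delta[(subset, a)] = target
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen, dfa_delta

# toy NFA: from state 0, symbol 'a' goes nondeterministically to {0, 1}
nfa = {(0, 'a'): {0, 1}, (1, 'b'): {2}}
states, trans = determinize({'a', 'b'}, nfa, 0)
```

Because only subsets reachable from the start state ever enter the queue, the resulting DFA is connected by construction — the very property the metrics-based fix restores for the incremental variant.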

Software Security

Towards Developing Secure Software Using Problem-Oriented Security Patterns

Security, as one essential quality requirement, has to be addressed during the software development process. Quality requirements such as security drive the architecture of a software system, while design decisions such as security patterns at the architecture level might in turn significantly constrain the achievement of quality requirements. Thus, to obtain sound architectures and correct requirements, knowledge gained in the solution space, for example from security patterns, should be reflected in requirements engineering. In this paper, we propose an iterative method that systematically takes into account the concurrent development of requirements and architecture descriptions. It reuses security patterns for refining and restructuring requirement models by applying problem-oriented security patterns, which adapt existing security patterns so that they can be used in problem-oriented requirements engineering. The proposed method bridges the gap between security problems and architectural security solutions.
Azadeh Alebrahim, Maritta Heisel

Visual Analytics for Detecting Anomalous Activity in Mobile Money Transfer Services

Mobile money transfer services (MMTS) are currently being deployed in many markets across the world and are widely used for domestic and international remittances. However, they can be used for money laundering and other illegal financial operations. The paper considers an interactive multi-view approach that allows describing metaphorically the behavior of MMTS subscribers according to their transaction activities. The suggested visual representation of MMTS users' behavior, based on the RadViz visualization technique, helps to identify groups with similar behavior as well as outliers. We describe several case studies corresponding to money laundering and behavioral fraud. They are used to assess the efficiency of the proposed approach, and we present and discuss the results of the experiments.
Evgenia Novikova, Igor Kotenko
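
The abstract does not give the projection itself, but the standard RadViz mapping it builds on is simple: each feature is assigned an anchor on the unit circle and a record is placed at the weighted centroid of the anchors. A minimal sketch (feature values assumed pre-normalized to [0, 1]):

```python
import math

def radviz(row):
    """Standard RadViz projection: feature i gets an anchor at angle
    2*pi*i/d on the unit circle; the record lands at the centroid of the
    anchors weighted by its (normalized) feature values."""
    d = len(row)
    anchors = [(math.cos(2 * math.pi * i / d), math.sin(2 * math.pi * i / d))
               for i in range(d)]
    total = sum(row)
    if total == 0:
        return (0.0, 0.0)
    x = sum(v * ax for v, (ax, _) in zip(row, anchors)) / total
    y = sum(v * ay for v, (_, ay) in zip(row, anchors)) / total
    return (x, y)

# a subscriber whose activity is concentrated in one feature is pulled
# onto that feature's anchor, so similar behaviors cluster together
print(radviz([1.0, 0.0, 0.0, 0.0]))
```

Records with similar transaction profiles land close together, which is why outliers (e.g. laundering patterns) become visually separable.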

A Review of Security Requirements Engineering Methods with Respect to Risk Analysis and Model-Driven Engineering

Security Requirements Engineering (SRE) is one of the most important aspects for improving the quality and reducing the cost of secure information systems in the early stages of the development lifecycle. However, obtaining such requirements is non-trivial. Risk Analysis (RA) is another domain that deals with eliciting security requirements. Therefore, we review SRE methods in order to analyse which ones are compatible with RA processes. Moreover, the transition from these early security requirements to security policies at later stages of the lifecycle is generally non-automatic, informal and incomplete. To deal with such issues, model-driven engineering (MDE) uses formal models and automatic model transformations; we therefore also review which SRE methods are compatible with MDE approaches. Consequently, our review is based on criteria derived partially from existing surveys, further enriched and specialized to evaluate the compatibility of SRE methods with the disciplines of RA and MDE. It summarizes the evidence on this issue so as to improve understanding and to facilitate evaluating and selecting SRE methods.
Denisse Muñante, Vanea Chiprianov, Laurent Gallon, Philippe Aniorté

Adaptive User-Centered Security

One future challenge in informatics is the integration of humans into an infrastructure of data-centric IT services. A critical activity of this infrastructure is trustworthy information exchange, which reduces the threats arising from misuse of (personal) information. Privacy by Design, the present methodology for developing privacy-preserving and secure IT systems, aims to reduce security vulnerabilities already in the early requirements analysis phase of software development. Incident reports show, however, that vulnerabilities stem not only from the implementation of a model but also from the gap between the rigorous world view of a threat and security model and the real view of a run-time environment with its dependencies. Dependencies threaten the reliability of information and, in the case of personal information, privacy as well. With the aim of improving security and privacy at run-time, this work proposes to extend Privacy by Design by adapting an IT system not only to inevitable security vulnerabilities but in particular to its users' views of an information exchange and its IT support, with their different, possibly opposing security interests.
Sven Wohlgemuth

Mobile and Social Computing

Mobile Computing is not Always Advantageous: Lessons Learned from a Real-World Case Study in a Hospital

The use of mobile computing has expanded dramatically in recent years, and trends indicate that "the future is mobile". Nowadays, mobile computing plays an increasingly important role in the biomedical domain, and particularly in hospitals. The benefits of using mobile devices in hospitals are no longer disputed, and many applications for medical care are already available. Many studies have shown that mobile technologies can bring various benefits for enhancing information management in the hospital. But is mobility a solution for every problem?
In this paper, we demonstrate that mobility is not always an advantage. On the basis of a field study at the pediatric surgery department of a large university hospital, carried out within a two-year mobile computing project, we have learned that mobile devices have indeed many disadvantages, particularly in stressful and hectic situations, and we conclude that mobile computing is not always advantageous.
Andreas Holzinger, Bettina Sommerauer, Peter Spitzer, Simon Juric, Borut Zalik, Matjaz Debevc, Chantal Lidynia, André Calero Valdez, Carsten Roecker, Martina Ziefle

Towards Interactive Visualization of Longitudinal Data to Support Knowledge Discovery on Multi-touch Tablet Computers

A major challenge in modern data-centric medicine is the increasing amount of time-dependent data, which requires efficient, user-friendly solutions for dealing with such data. To create an effective and efficient knowledge discovery process, it is important to support common data manipulation tasks with quick, responsive and intuitive interaction methods. In this paper we describe some methods for interactive longitudinal data visualization, with a focus on mobile multi-touch devices as the interaction medium, based on our design and development experiences. We argue that for longitudinal data this device category offers remarkable additional interaction benefits compared to standard point-and-click desktop devices. An important advantage of multi-touch devices arises when interacting with particularly large longitudinal data sets: complex, coupled interactions, such as zooming into a region while scrolling around, are more easily achieved with a multi-touch device than with a regular mouse-based interaction device.
Andreas Holzinger, Michael Schwarz, Bernhard Ofner, Fleur Jeanquartier, Andre Calero-Valdez, Carsten Roecker, Martina Ziefle

Semantic-Aware Mashups for Personal Resources in SemanticLIFE and SocialLIFE

SemanticLIFE is a Semantic Desktop system that deals with personal lifetime data. However, SemanticLIFE is limited to local storage, which is an isolated data repository. In recent years, people have tended to share their resources, which are not only stored locally on their personal computers but also hosted on social networking sites (SNSs). We propose and use the term 'SocialLIFE' to denote one's lifetime information on SNSs, in which personal resources are one's activities, interests, and related connections. In this paper, we also propose a mashup language and a semantic-based mashup framework. The final goal of this research is to provide a semantic-based way of bridging the gap between SemanticLIFE and SocialLIFE in order to integrate and reuse personal resources of existing applications such as the information resources of Semantic Desktops and SNSs. The proposed mashup system also aims to support non-expert users in creating data mashups based on semantic-aware mashup dataflows.
Sao-Khue Vo, Amin Anjomshoaa, A. Min Tjoa

4th International Workshop on Security and Cognitive Informatics for Homeland Defense (SeCIHD 2014)

Trust Extension Protocol for Authentication in Networks Oriented to Management (TEPANOM)

The future Internet of Things is being deployed massively, already involving deployments with thousands of nodes, which present a new dimension of capacity for monitoring solutions such as smart cities, home automation, and continuous healthcare. This new dimension also presents new challenges related to scalability, security and management, which need to be addressed in order to make Internet of Things-based solutions feasible. This work presents a Trust Extension Protocol for Authentication in Networks Oriented to Management (TEPANOM). This protocol allows, on the one hand, identity verification and authentication in the system and, on the other hand, the bootstrapping, configuration and trust extension of the deployment and management domains to a new device. Thereby, TEPANOM defines a scalable network management solution for the Internet of Things that addresses the security requirements and allows easy, transparent support for management, which are highly desirable and necessary features for the success of solutions based on the Internet of Things. The proposed protocol has been instantiated for the use case of a fire alarm management system and successfully evaluated with tools from the Automated Validation of Internet Security Protocols and Applications (AVISPA) framework.
Antonio J. Jara

Building an Initialization Cipher Block with Two-Dimensional Operation and Random Parameters

In recent years, parallel computing capabilities have become more powerful than before. Consequently, some block cipher standards, such as DES, which used to protect important electronic messages, have been cracked in recent years. Due to the rapid development of hardware processing speeds, 3DES and AES may also someday be broken by brute-force attacks. A common characteristic of these block cipher standards is that each time a standard is invoked, the same parent key is used to generate subkeys, which are then utilized in the standard's encryption rounds to encrypt data. In fact, the variability of the key values is quite limited. Generally, producing random parameters to encrypt data is an effective method for improving the security of ciphertext. But how to ensure the security of using and delivering these random parameters, and how to avoid information leakage, have been a challenge. So in this paper, we propose a novel random parameter protection approach, called the Initialization Cipher Block Method (ICBM for short), which protects random parameters by using a two-dimensional operation and employs random parameters to change the value of a fixed parent key for block ciphering, thus lowering the security risk of a block cipher algorithm. Security analysis demonstrates that the ICBM effectively improves the security level of a protected system, which is particularly valuable when it is applied to governmental document delivery systems in a homeland security context.
Yi-Li Huang, Fang-Yie Leu, Ilsun You, Jing-Hao Yang

Crypto-Biometric Models for Information Secrecy

This paper presents some advances in crypto-biometric procedures used for the encryption and division of secret data, as well as modern approaches to the strategic management of divided information. Computer techniques for secret information sharing aim to secure information against disclosure to unauthorized persons. The paper presents algorithms dedicated to information division and sharing on the basis of biometric or personal features. Computer techniques for classified information sharing should also be useful in the process of shared information generation and distribution. For this purpose, a new approach to information management based on cognitive systems is presented.
Marek R. Ogiela, Lidia Ogiela, Urszula Ogiela

One-Time Biometrics for Online Banking and Electronic Payment Authentication

Online banking and electronic payment systems on the Internet are becoming increasingly advanced. On the machine level, transactions take place between client and server hosts through a secure channel protected with SSL/TLS. User authentication is typically based on two or more factors. Nevertheless, the development of various malware and social engineering attacks transforms the user's PC into an untrusted device, thereby making user authentication vulnerable. This paper investigates how user authentication with biometrics can be made more robust in the online banking context by using a specific device called OffPAD. This context requires that authentication is realized by the bank and not only by the user (or by the personal device), contrary to standard banking systems. More precisely, a new protocol for the generation of one-time passwords from biometric data is presented, ensuring the security and privacy of the entire transaction. Experimental results show excellent performance with regard to false positives. The security analysis of our protocol also illustrates the benefits in terms of strengthened security.
Aude Plateaux, Patrick Lacharme, Audun Jøsang, Christophe Rosenberger
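
The paper's protocol is not reproduced in the abstract; as a rough illustration of the general idea of one-time passwords bound to biometric data, one could mix a biometric-derived key into a TOTP-style derivation. Everything here (the key derivation, the template bytes, the 30-second window) is an assumption for the sketch, not the OffPAD design:

```python
import hashlib
import hmac
import struct
import time

def biometric_otp(template: bytes, secret: bytes, window: int = 30) -> str:
    """Hypothetical sketch: derive a 6-digit one-time password by HMAC-ing
    a time counter with a key bound to a stable biometric template.
    Dynamic truncation follows the HOTP scheme of RFC 4226."""
    key = hashlib.sha256(secret + template).digest()   # biometric-bound key
    counter = int(time.time()) // window               # time-based counter
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

otp = biometric_otp(b"minutiae-template-bytes", b"device-secret")
print(otp)
```

The real challenge the paper addresses, and that this sketch glosses over, is that biometric readings are noisy, so a stable key must be extracted before any such derivation can work.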

Expert Knowledge Based Design and Verification of Secure Systems with Embedded Devices

The sweeping growth in the number of embedded devices, together with their extensive spread, poses entirely new design challenges for the protection of embedded systems against a wide range of security threats. The specificity of embedded devices implies that combined protection mechanisms require efficient resource consumption by their software/hardware modules. At the same time, the design complexity of modern embedded devices characterized by a proper security level and acceptable resource consumption is aggravated by the low structuring and formalization of security knowledge. The paper proposes an approach to eliciting security knowledge for subsequent use in automated design and verification tools for secure systems with embedded devices.
Vasily Desnitsky, Igor Kotenko

PrivacyFrost2: An Efficient Data Anonymization Tool Based on Scoring Functions

In this paper, we propose an anonymization scheme for generating a k-anonymous and l-diverse (or t-close) table, which uses three scoring functions, and we show the evaluation results for two different data sets. Our scheme is based on both top-down and bottom-up approaches for full-domain and partial-domain generalization, and the three different scoring functions automatically incorporate the requirements into the generated table. The generated table meets users’ requirements and can be employed in services provided by users without any modification or evaluation.
Shinsaku Kiyomoto, Yutaka Miyake
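
The scoring functions and generalization lattice are specific to the tool, but the k-anonymity property the tool enforces is easy to state in code: every combination of quasi-identifier values must occur in at least k records. A minimal sketch (table layout and column names are illustrative):

```python
from collections import Counter

def is_k_anonymous(rows, quasi_ids, k):
    """True iff every quasi-identifier combination occurs >= k times,
    i.e. each record hides in a group of at least k records."""
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(count >= k for count in groups.values())

# a partially generalized toy table ('*' marks suppressed digits)
table = [
    {"zip": "470**", "age": "2*", "disease": "flu"},
    {"zip": "470**", "age": "2*", "disease": "cold"},
    {"zip": "470**", "age": "3*", "disease": "flu"},
]
print(is_k_anonymous(table, ["zip", "age"], 2))  # the age "3*" group has 1 row
```

l-diversity and t-closeness add constraints on the sensitive column ("disease" here) within each such group; the tool searches the generalization space until the user's chosen combination of properties holds.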

Detection of Malicious Web Pages Using System Calls Sequences

Web sites are often used for diffusing malware; an increasing number of attacks are performed by delivering malicious code in web pages: drive-by download, malvertisement, rogueware and phishing are just the most common examples. In this scenario, JavaScript plays an important role, as it allows code to be inserted into the web page and executed on the client machine, allowing the attacker to perform a plethora of actions necessary to successfully accomplish an attack. Existing techniques for detecting malicious JavaScript suffer from limitations such as: recognizing only known attacks, being tailored only to specific attacks, or being ineffective when appropriate evasion techniques are implemented by attackers. In this paper we propose to use system calls to detect malicious JavaScript. The main advantage is that capturing the system calls allows a description of the attack at a very high level of abstraction. On the one hand, this limits the evasion techniques which could succeed, and, on the other hand, it produces a very high detection accuracy (96%), as our experiments demonstrate.
Gerardo Canfora, Eric Medvet, Francesco Mercaldo, Corrado Aaron Visaggio
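
The abstract does not specify the feature encoding, but a common way to turn a system-call trace into classifier input, sketched here as an assumption rather than the authors' exact method, is n-gram frequency counting:

```python
from collections import Counter

def ngram_features(syscalls, n=2):
    """Turn a system-call trace into n-gram frequency features: short
    subsequences of calls capture behavior at a high level of abstraction,
    which is hard for JavaScript-level obfuscation to disguise."""
    grams = [tuple(syscalls[i:i + n]) for i in range(len(syscalls) - n + 1)]
    return Counter(grams)

# an illustrative trace recorded while rendering a page
trace = ["open", "read", "mmap", "open", "read", "write"]
feats = ngram_features(trace, n=2)
print(feats[("open", "read")])  # this bigram occurs twice
```

A classifier is then trained on such vectors from benign and malicious pages; the evasion resistance comes from the fact that whatever the script looks like, its effects must eventually surface as system calls.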

Risk Reduction Overview

A Visualization Method for Risk Management
The Risk Reduction Overview (RRO) method presents a comprehensible overview of the coherence of risks, measures and residual risks. The method is designed to support communication between different stakeholders in complex risk management. Seven reasons why risk management in IT security involves many uncertainties and fast-changing factors are addressed: four for IT security in general and three for large organizations specifically. The RRO visualization has proven valuable for discussing, optimizing, evaluating, and auditing a design or a change in a complex environment. The method has been used, evaluated, and improved over the last six years in large government and military organizations. Seven areas in design and decision making are identified in which an RRO is found to be beneficial. Despite the widely accepted need for risk management, we believe this is the first practical method that delivers a comprehensive overview that improves communication between different stakeholders.
Hellen Nanda Janine Havinga, Olivier Diederik Theobald Sessink

Towards Analysis of Sophisticated Attacks, with Conditional Probability, Genetic Algorithm and a Crime Function

In this short article, a proposal to simulate a sophisticated attack on a technical infrastructure is discussed. Attacks on (critical) infrastructures can be modeled with attack trees, but regular attack trees have limitations in the case of a sophisticated attack such as an advanced persistent threat. Furthermore, attacks can also be simulated to understand the type of attack and subsequently develop targeted countermeasures. Both a normal and a sophisticated attack are typically carried out in three phases. In the first phase (I), extensive information is gathered about the target object. In the second phase (II), the existing information is verified with a target object scan. In the third phase (III), the actual attack takes place. A normal attack tree cannot explain this kind of attack behavior. We therefore extend the normal attack tree with conditional probability according to Bayes, traversing a certain path, step by step, from a leaf to the root. The learning ability that typically precedes an attack (phase II) is simulated using a genetic algorithm. To determine the attack, we use threat trees and threat actors. Threat actors are weighted by a function called criminal energy, and as a first step three types of threat actor are proposed. The vulnerabilities have been identified, as examples, for a laboratory network.
Wolfgang Boehmer
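
The chaining of conditional probabilities along a tree path described above can be sketched in a few lines; the phase names and probability values below are illustrative assumptions, not figures from the article:

```python
def path_probability(steps):
    """Multiply conditional success probabilities along one attack path:
    P(attack) = P(s1) * P(s2 | s1) * P(s3 | s1, s2) * ...
    Each entry is (step name, probability of success given prior steps)."""
    p = 1.0
    for _name, cond_prob in steps:
        p *= cond_prob
    return p

# the three phases described in the abstract, with made-up probabilities
phases = [("information gathering", 0.9),
          ("target object scan", 0.7),
          ("actual attack", 0.4)]
print(round(path_probability(phases), 3))  # 0.252
```

In the full model, a genetic algorithm would adjust the phase-II probability as the simulated attacker "learns", and the criminal-energy function would weight which threat actor attempts which path.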

A High-Speed Network Content Filtering System

Current software-based content filtering systems are too computationally intensive for large-scale packet payload inspection and cannot meet the performance requirements of modern networks. Thus, hardware architectures are desired to speed up the detection process. In this paper, a hardware-based Conjoint Network Content Filtering System (CNCFS) is proposed to solve the problem. In CNCFS, a TCAM-based algorithm named Linking Shared Multi-Match (LSMM) is implemented, which can greatly speed up large-scale multi-pattern multi-matching. This system can also be used in high-speed mobile networks, which need to secure the fast handover of mobile users. The results of a performance evaluation show that our solution can provide 5 Gbps wire-speed processing capability.
Guohong Zhao, Shuhui Chen, Baokang Zhao, Ilsun You, Jinshu Su, Wanrong Yu

Feature Grouping for Intrusion Detection System Based on Hierarchical Clustering

Intrusion detection is very important for addressing an increasing number of security threats. With new types of attack appearing continually, traditional approaches for detecting hazardous content face a severe challenge. In this work, a new feature grouping method is proposed to select features for intrusion detection. The method is based on agglomerative hierarchical clustering and is tested against the KDD CUP 99 dataset. Agglomerative hierarchical clustering is used to construct a hierarchical tree, combined with mutual information theory. Groups are created from the hierarchical tree for a given number of groups, and within each group the feature with the largest mutual information with the class label is selected. The performance evaluation results show that better classification performance can be attained from the features selected in this way.
Jingping Song, Zhiliang Zhu, Chris Price
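
The per-group selection step described above, keeping the feature with the largest mutual information with the class label, can be sketched directly; the feature groups here are given as input, standing in for the output of the hierarchical clustering:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in nats for two discrete value sequences of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def pick_per_group(groups, labels):
    """From each group of candidate features (dict: name -> values),
    keep the feature with the largest mutual information with the label."""
    return [max(group, key=lambda name: mutual_information(group[name], labels))
            for group in groups]

labels = [0, 0, 1, 1]
groups = [{"f1": [0, 0, 1, 1],     # perfectly informative about the label
           "f2": [0, 1, 0, 1]}]    # statistically independent of the label
print(pick_per_group(groups, labels))  # ['f1']
```

Clustering correlated features first and then keeping one representative per group reduces redundancy, which is the point of grouping before selection.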

Towards a Key Consuming Detection in QKD-VoIP Systems

Quantum Key Distribution (QKD) technology, based on the laws of quantum physics, can generate unconditionally secure keys between two communicating parties. QKD is nearly a commercial technology and is becoming available to the public. In existing QKD networks and commercial QKD systems, a classical network is an essential part of the implementation of QKD protocols. With secure keys and a one-time-pad encryption scheme, we can protect the security of various network applications. But the public classical channel in a QKD network may suffer potential key-consuming attacks. In this paper, we focus on how to detect such potential attacks against security applications in the QKD network. In particular, we propose a Dynamic Key Consuming Detection scheme (DKode) for QKD-VoIP systems, which encrypt VoIP streams with secure keys from QKD systems.
Guohong Zhao, Wanrong Yu, Baokang Zhao, Chunqing Wu
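
The one-time-pad encryption mentioned above is the reason key consumption matters: every payload byte burns a key byte. A minimal sketch (the QKD key is simulated with an OS random source here):

```python
import secrets

def otp_encrypt(payload: bytes, key: bytes) -> bytes:
    """One-time pad: XOR each payload byte with a fresh key byte. With a
    truly random, never-reused key (e.g. from QKD) this is information-
    theoretically secure, but key material is consumed as fast as data flows,
    which is what a key-consuming attack exploits."""
    assert len(key) >= len(payload), "OTP needs at least as much key as data"
    return bytes(p ^ k for p, k in zip(payload, key))

frame = b"voip-frame"
key = secrets.token_bytes(len(frame))   # stands in for QKD-generated key bits
cipher = otp_encrypt(frame, key)
assert otp_encrypt(cipher, key) == frame  # decryption is the same XOR
```

An attacker who can inject or inflate traffic on the classical channel forces the endpoints to drain their QKD key buffer, which is the anomaly a detection scheme like DKode would watch for.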

A Structured P2P Based Web Services Registry with Access and Control

In cloud computing, massive numbers of functions and resources are encapsulated as Web services. Traditional web service registry systems, which normally use a central architecture, cannot meet the requirements of cloud computing. A web service registry system based on a structured P2P system with secure access and control is implemented. Multiple Universal Description, Discovery and Integration (UDDI) nodes are organized by a P2P-based scheduling and communication mechanism. The registration and discovery of Web services are redesigned so that the new system provides services like a single UDDI server. The experimental results show that the capacity can be extended dynamically and supports large-scale access.
He Qian, Zhao Baokang, Long Yunjian, Su Jinshu, Ilsun You

Amplification DDoS Attacks: Emerging Threats and Defense Strategies

There are many servers on the Internet that have already been used, or that are vulnerable and can potentially be used, to launch DDoS attacks. Even though awareness increases and organizations begin to lock down those systems, there are plenty of other protocols that can be exploited instead. One example is the Simple Network Management Protocol (SNMP), a common UDP protocol used for network management. Several types of network devices actually ship with SNMP "on" by default, and a request sent to an SNMP server returns a response that is larger than the query that came in.
The main aim of this paper is to investigate the increasing prevalence and destructive power of amplification-based distributed denial of service (DDoS) attacks in order to present a solution based on a profiling methodology. The paper encompasses three aspects: amplification DDoS attacks and the main ports used; the profiling methodology as a means of identifying and shaping the threat; and, finally, a proposed solution that considers both strategic and technical aspects.
Antonio Colella, Clara Maria Colombini
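
The destructive power of reflection attacks is usually quantified by the bandwidth amplification factor, the ratio of reflected response size to spoofed request size. The sizes below are illustrative, not measurements from the paper:

```python
def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification factor: bytes the reflector sends toward the
    victim per byte the attacker spends on the spoofed request."""
    return response_bytes / request_bytes

# illustrative numbers: a small SNMP GetBulk query can trigger a response
# hundreds of times larger (actual sizes depend on the device's MIB)
print(amplification_factor(60, 39000))  # 650.0
```

Profiling reflectors by the factors they yield on each open port is one way to rank which services (DNS, NTP, SNMP, ...) pose the greatest amplification risk.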
