
2015 | Book

Risks and Security of Internet and Systems

9th International Conference, CRiSIS 2014, Trento, Italy, August 27-29, 2014, Revised Selected Papers

About this book

This book constitutes the thoroughly refereed post-conference proceedings of the Ninth International Conference on Risks and Security of Internet Systems, CRiSIS 2014, held in Trento, Italy, in August 2014. The 13 full papers and 6 short papers presented were selected from 48 submissions. They explore risks and security issues in Internet applications, networks and systems, covering topics such as trust, security risks and threats, intrusion detection and prevention, access control and security modeling.

Table of Contents

Frontmatter
Detecting Anomalies in Printed Intelligence Factory Network
Abstract
Network security monitoring in ICS (SCADA) networks presents both opportunities and corresponding challenges. Anomaly detection using machine learning has traditionally performed sub-optimally when moved out of laboratory environments and into more open networks. We have proposed using machine learning for anomaly detection in ICS networks when certain prerequisites, such as predictability, are met.
We report validation results for a previously introduced machine learning module for the Bro NSM, using captures from an operational ICS network, including the number of false positives and the detection capability. Parts of the packet capture files used include reconnaissance activity.
The results point to adequate initial capability. The system is functional, usable and ready for further development. The easily modified and configured module represents a proof-of-concept implementation of the introduced event-driven, machine-learning-based anomaly detection concept for a single event and algorithm.
Matti Mantere, Mirko Sailio, Sami Noponen
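The Bro NSM module itself is not included in these pages; as a rough, self-contained illustration of event-driven anomaly detection under the predictability assumption the abstract relies on, the following sketch learns a baseline of connection events during a training window and flags events that fall outside it. The class name, event fields and threshold are invented for illustration.

```python
from collections import Counter

class ConnectionAnomalyDetector:
    """Toy event-driven detector: learns which (src, dst, dport) tuples
    occur during a training window and flags events outside the baseline."""

    def __init__(self, min_count=3):
        self.baseline = Counter()
        self.min_count = min_count   # tuples seen fewer times stay "anomalous"
        self.training = True

    def observe(self, src, dst, dport):
        key = (src, dst, dport)
        if self.training:
            self.baseline[key] += 1
            return None                               # no verdict while learning
        return self.baseline[key] < self.min_count    # True => anomaly

    def finish_training(self):
        self.training = False

# Usage on hypothetical ICS connection events
det = ConnectionAnomalyDetector()
for evt in [("10.0.0.5", "10.0.0.9", 502)] * 5:       # routine Modbus traffic
    det.observe(*evt)
det.finish_training()
print(det.observe("10.0.0.5", "10.0.0.9", 502))       # False: within baseline
print(det.observe("10.0.0.77", "10.0.0.9", 22))       # True: unseen SSH connection
```

In a predictable ICS network such a whitelist-style baseline is viable precisely because the set of legitimate connections rarely changes.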
Context-Awareness Using Anomaly-Based Detectors for Smart Grid Domains
Abstract
Anomaly-based detection applied in strongly interdependent systems, like Smart Grids, has become one of the most challenging research areas in recent years. Early detection of anomalies, so as to prevent unexpected faults or stealthy threats, is attracting a great deal of attention from the scientific community because it offers potential solutions for context-awareness. These solutions can also help explain the conditions leading up to a given situation and help determine the degree of its severity. However, not all the existing approaches in the literature are equally effective in covering the needs of a particular scenario. It is necessary to explore the control requirements of the domains that comprise a Smart Grid and to identify, and even select, approaches according to these requirements and the intrinsic conditions of the application context, such as technological heterogeneity and complexity. Therefore, this paper analyses the functional features of existing anomaly-based approaches so as to adapt them to the aforementioned conditions. The result of this investigation is a guideline for the construction of preventive solutions that will help improve context-awareness in the control of Smart Grid domains in the near future.
Cristina Alcaraz, Lorena Cazorla, Gerardo Fernandez
Automated Detection of Logical Errors in Programs
Abstract
Static and dynamic program analysis tools mostly focus on the detection of a priori defined defect patterns and security vulnerabilities. Automated detection of logical errors, due to a faulty implementation of an application’s functionality, is relatively uncharted territory. Automation can be based on profiling the intended behavior behind the source code. In this paper, we present a new code profiling method that combines the crosschecking of dynamic program invariants with symbolic execution, an information flow analysis, and the use of fuzzy logic. Our goal is to detect logical errors and exploitable vulnerabilities. The theoretical underpinnings and the practical implementation of our approach are discussed. We test the APP_LogGIC tool that implements the proposed analysis on two real-world applications. The results show that profiling the intended program behavior is feasible in diverse applications. We discuss the heuristics used to overcome the problems of state space explosion and of large data sets. Code metrics and test results are provided to demonstrate the effectiveness of the approach.
George Stergiopoulos, Panagiotis Katsaros, Dimitris Gritzalis
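APP_LogGIC is not reproduced here; the sketch below only illustrates one ingredient the abstract names, fusing two uncertain indicators (how reliable a detected invariant violation is, and how severe the associated information flow is) with simple fuzzy-style min/max rules. The scales, rule set and scores are invented.

```python
def fuzzy_rank(reliability, severity):
    """Combine two scores in [0, 1] into a qualitative verdict using
    simple min/max fuzzy rules (illustrative only, not APP_LogGIC's rules)."""
    low_r, high_r = 1.0 - reliability, reliability
    low_s, high_s = 1.0 - severity, severity
    # Rule activations: AND = min, OR = max
    critical = min(high_r, high_s)            # reliable violation on a sensitive flow
    suspicious = max(min(high_r, low_s), min(low_r, high_s))
    noise = min(low_r, low_s)
    verdict = max((critical, "critical"), (suspicious, "suspicious"), (noise, "noise"))
    return verdict[1], verdict[0]

# A violated invariant detected on a tainted (user-controlled) variable:
print(fuzzy_rank(reliability=0.9, severity=0.8))   # ('critical', 0.8)
# A violation on data that never reaches a sensitive sink:
print(fuzzy_rank(reliability=0.7, severity=0.1))   # ('suspicious', 0.7)
```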
Evaluation of Dynamic Instantiation in CPRM-Based Systems
Abstract
Context-based Parametric Relationship Models (CPRMs) reduce the complexity of working with varying numbers of parameters and dependencies by dynamically adding particular contexts to the final scheme when required. In this paper, the cost of including new information in a CPRM is analysed, considering the information in the parametric trees defined for the parameters in the CPRM-based system. Some strategies for mitigating the cost of the instantiation process are proposed.
Ana Nieto
Privacy Issues in Geosocial Networks
Abstract
A GeoSocial Network (GSN) is a social network enhanced with the capability to associate user data and content with location. This content-location link is getting stronger due to the swift development of GSNs and mobile technologies. Indeed, the gathered location information generates a huge amount of publicly-available location data, information that was always considered private or at least known only by friends or family. Hence, a serious privacy threat is revealed: being tracked in real time, or having location history disclosed to everyone, is a privacy invasion that needs addressing before too much control is lost. In this paper we are interested in several questions such as: How much are we at risk? Are we vigilant enough to face this risk? Are existing privacy-protection techniques sufficient to let us relax? And if so, which technique is more efficient? Are we legally protected? This paper explores these and other related questions.
Zakaria Sahnoune, Cheu Yien Yep, Esma Aïmeur
SocialSpy: Browsing (Supposedly) Hidden Information in Online Social Networks
Abstract
Online Social Networks are becoming the most important “places” where people share information about their lives. With the increasing concern that users have about privacy, most social networks offer ways to control the privacy of the user. Unfortunately, we believe that current privacy settings are not as effective as users might think.
In this paper, we highlight this problem focusing on one of the most popular social networks, Facebook. In particular, we show how easy it is to retrieve information that a user might have set as (and hence thought of as) “private”. As a case study, we focus on retrieving the list of friends of users who set this information to “hidden” (from non-friends). We propose four different strategies to achieve this goal, and we evaluate them. The results of our thorough experiments show the feasibility of our strategies as well as their effectiveness: our approach is able to retrieve a significant percentage of the names of the “hidden” friends: some 25% on average, and more than 70% for some users.
Andrea Burattin, Giuseppe Cascavilla, Mauro Conti
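The paper’s four strategies are not spelled out in the abstract; as a generic, hypothetical illustration of how hidden links can be inferred from public traces, the sketch below ranks accounts by their publicly visible interactions with a target profile. The interaction types, weights and data are invented.

```python
from collections import Counter

def rank_candidate_friends(public_interactions, top_n=10):
    """Rank accounts by how often they publicly interact with the target
    (likes, comments, tags).  Frequent interaction is a weak but useful
    signal that a hidden friendship link exists."""
    scores = Counter()
    for actor, kind in public_interactions:
        weight = {"comment": 3, "tag": 5, "like": 1}.get(kind, 1)
        scores[actor] += weight
    return scores.most_common(top_n)

# Hypothetical crawl of the target's public timeline
interactions = [("alice", "comment"), ("bob", "like"), ("alice", "tag"),
                ("carol", "like"), ("alice", "like"), ("bob", "comment")]
print(rank_candidate_friends(interactions))
# [('alice', 9), ('bob', 4), ('carol', 1)]
```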
Latent Semantic Analysis for Privacy Preserving Peer Feedback
Abstract
Today’s e-learning systems enable students to communicate with peers (or co-learners) to ask for or provide feedback, leading to more efficient learning. Unfortunately, this new option comes with significantly increased risks to the privacy of the feedback requester as well as the peers involved in the feedback process. In fact, peers may unintentionally disclose personal information, which may expose them to great threats such as cyber-bullying, which in turn may create an unfavorable learning environment leading individuals to abandon learning. In this paper, we propose an approach to minimize data self-disclosure and privacy risks in e-learning contexts. It consists first of mining peers’ feedback to remove negative comments (reducing bullying and harassment) based on a machine learning classifier and natural language processing techniques. Second, it consists of stripping sentences that potentially reveal personal information, based on Latent Semantic Analysis (LSA), in order to protect learners from self-disclosure risks.
Mouna Selmi, Hicham Hage, Esma Aïmeur
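A minimal LSA sketch with scikit-learn, not the authors’ pipeline: feedback sentences whose latent-space similarity to known self-disclosure examples exceeds a threshold would be stripped. The seed sentences, component count and threshold are arbitrary.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

disclosure_seeds = ["I live alone on Main Street", "my phone number is private",
                    "I am being treated for depression"]
feedback = ["your proof of theorem 2 is missing a base case",
            "I also live on Main Street, come by anytime",
            "try rewriting the loop with recursion"]

# Build a latent semantic space over seeds + feedback, then compare in that space.
corpus = disclosure_seeds + feedback
tfidf = TfidfVectorizer().fit_transform(corpus)
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

seed_vecs, feedback_vecs = lsa[:len(disclosure_seeds)], lsa[len(disclosure_seeds):]
for sentence, vec in zip(feedback, feedback_vecs):
    score = cosine_similarity([vec], seed_vecs).max()
    action = "STRIP" if score > 0.8 else "keep"   # threshold chosen arbitrarily
    print(f"{action:5s} ({score:.2f}) {sentence}")
```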
Attacking Suggest Boxes in Web Applications Over HTTPS Using Side-Channel Stochastic Algorithms
Abstract
Web applications are subject to several types of attacks. In particular, side-channel attacks consist in performing a statistical analysis of the web traffic to gain sensitive information about a client. In this paper, we investigate how side-channel leaks can be used on search engines such as Google or Bing to retrieve the client’s search query. In contrast to previous works, due to payload randomization and compression, it is not always possible to uniquely map a search query to a web traffic signature, and hence stochastic algorithms must be used. They yield, for the French language, an exact recovery of the search word in more than 30% of the cases. Finally, we present some methods to mitigate such side-channel leaks.
Alexander Schaub, Emmanuel Schneider, Alexandros Hollender, Vinicius Calasans, Laurent Jolie, Robin Touillon, Annelie Heuser, Sylvain Guilley, Olivier Rioul
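The paper’s stochastic algorithms are not detailed in the abstract; the toy below only shows the underlying matching idea, scoring candidate queries by how well their profiled per-keystroke response sizes explain an observed (noisy) trace under a simple Gaussian noise model. The size profiles and noise level are invented.

```python
import math

# Hypothetical per-keystroke response sizes (bytes) recorded for candidate
# words during a profiling phase; real traces vary with compression.
profiles = {"bonjour": [220, 254, 241, 263, 250, 248, 259],
            "bonheur": [220, 254, 241, 249, 266, 252, 244],
            "bateau":  [220, 231, 260, 247, 255, 239]}

def log_likelihood(observed, profile, sigma=8.0):
    """Score a candidate word under a Gaussian noise model around its
    profiled sizes; -inf if the keystroke counts differ (constant terms
    dropped since only the ranking matters)."""
    if len(observed) != len(profile):
        return float("-inf")
    return -sum((o - p) ** 2 for o, p in zip(observed, profile)) / (2 * sigma ** 2)

observed = [221, 255, 240, 262, 252, 247, 258]      # sniffed HTTPS record sizes
best = max(profiles, key=lambda w: log_likelihood(observed, profiles[w]))
print(best)   # 'bonjour' matches the observed trace most closely
```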
Location-Aware RBAC Based on Spatial Feature Models and Realistic Positioning
Abstract
The location of a mobile user is valuable information when deriving access control decisions. Hence, several location-aware extensions to role-based access control (RBAC) exist in the literature. However, these approaches do not consider positioning errors, which leads to unexpected security breaches when the user’s ground truth differs from the reported location. Further, most approaches simply define a polygon as the authorized zone and authorize access when the reported position lies inside it. To overcome these limitations, this paper presents a risk-optimal approach to RBAC. Position estimates are represented as probability distributions instead of points. Location constraints are assigned to RBAC elements and include cost functions for false positive and false negative decisions as well as feature models, which replace the traditionally used polygons. Feature models describe, for each location, the likelihood that a specific feature can be observed. The evaluation shows that such risk-optimal RBAC outperforms risk-ignoring, polygon-based approaches. However, this risk-optimality comes at the expense of a runtime that increases sharply with the number of applied location constraints.
Philipp Marcus, Lorenz Schauer, Claudia Linnhoff-Popien
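A minimal sketch of the risk-optimal decision rule suggested by the abstract: access is granted when the expected cost of granting, computed from the probability mass of the position estimate inside the authorized zone and the false positive/false negative cost functions, does not exceed the expected cost of denying. The numbers are illustrative.

```python
def risk_optimal_decision(p_inside, cost_false_grant, cost_false_deny):
    """Grant access iff the expected cost of granting is no higher than the
    expected cost of denying, given the probability mass of the position
    estimate that falls inside the authorized zone."""
    expected_cost_grant = (1.0 - p_inside) * cost_false_grant   # user is actually outside
    expected_cost_deny = p_inside * cost_false_deny             # user is actually inside
    return expected_cost_grant <= expected_cost_deny

# WiFi positioning puts 70% of the probability mass inside the server room,
# but a wrongful grant is ten times as costly as a wrongful denial:
print(risk_optimal_decision(p_inside=0.7, cost_false_grant=100, cost_false_deny=10))  # False
# Same estimate, symmetric costs:
print(risk_optimal_decision(p_inside=0.7, cost_false_grant=10, cost_false_deny=10))   # True
```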
Inter-technology Conflict Analysis for Communication Protection Policies
Abstract
Usually, network administrators implement a protection policy by refining a set of (abstract) communication security requirements into configuration settings for the security controls that will provide the required protection. The refinement consists in evaluating the available technologies that can enforce the policy at node and network level, selecting the most suitable ones, and possibly making fine adjustments, such as aggregating several individual channels into a single tunnel. The refinement process is a sensitive task that can lead to incorrect or suboptimal implementations, which in turn affect the overall security, decrease the network throughput and increase maintenance costs. Several techniques exist in the literature that can identify anomalies (i.e., potential incompatibilities and redundancies) among policy implementations. However, these techniques usually focus only on a single security technology (e.g., IPsec) and overlook the effects of multiple overlapping protection techniques. This paper presents a novel classification of communication protection policy anomalies and a formal model that is able to detect anomalies among policy implementations relying on technologies that work at different network layers. The result of our analysis gives administrators precise insight into the various alternative implementations, their relations and the possibility of resolving anomalies, thus increasing the overall security and performance of a network.
Cataldo Basile, Daniele Canavese, Antonio Lioy, Fulvio Valenza
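The paper’s anomaly classification and formal model are not reproduced here; the sketch below checks for just one plausible anomaly type, a redundancy between protection rules at different layers, using an invented rule representation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProtectionRule:
    technology: str      # e.g. "IPsec", "TLS"
    layer: int           # OSI-style layer the control operates at
    src: str
    dst: str
    confidentiality: bool

def redundant(inner: ProtectionRule, outer: ProtectionRule) -> bool:
    """Flag an inter-technology redundancy: a higher-layer channel whose
    confidentiality is already guaranteed by a lower-layer tunnel covering
    the same endpoints (a simplification of the anomaly classes)."""
    return (inner.layer > outer.layer
            and inner.src == outer.src and inner.dst == outer.dst
            and inner.confidentiality and outer.confidentiality)

tls = ProtectionRule("TLS", 5, "10.0.1.2", "10.0.9.8", confidentiality=True)
ipsec = ProtectionRule("IPsec", 3, "10.0.1.2", "10.0.9.8", confidentiality=True)
print(redundant(tls, ipsec))   # True: double encryption of the same traffic
```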
Two-Level Automated Approach for Defending Against Obfuscated Zero-Day Attacks
Abstract
A zero-day attack is one that exploits a vulnerability for which no patch is readily available and of which the developer or vendor may or may not be aware. Zero-day exploits are expensive and powerful attack tools that are hard to defend against. Since the vulnerability is not known in advance, there is no reliable way to guard against zero-day attacks before they happen. Attackers take advantage of the unknown nature of zero-day exploits and use them in conjunction with highly sophisticated and targeted attacks to achieve stealthiness with respect to standard intrusion detection techniques. This paper presents a novel combination of anomaly-, behavior- and signature-based techniques for detecting such zero-day attacks. The proposed approach detects obfuscated zero-day attacks with a two-level evaluation, generates a new signature automatically and updates other sensors by using push technology via a global hotfix feature.
Ratinder Kaur, Maninder Singh
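A schematic sketch of a two-level pipeline in the spirit of the abstract, not the authors’ system: a crude anomaly score gates a (stubbed) behavioural check, and confirmed samples yield a hash signature that is added to a shared store, standing in for the push-based hotfix distribution. All heuristics are placeholders.

```python
import hashlib

KNOWN_SIGNATURES = set()          # stands in for the sensors' shared signature store

def anomaly_score(payload: bytes) -> float:
    """Level 1 (illustrative): fraction of non-printable bytes as a crude
    proxy for obfuscated or packed content."""
    return sum(b < 9 or b > 126 for b in payload) / max(len(payload), 1)

def behaves_maliciously(payload: bytes) -> bool:
    """Level 2 (stub): a real system would detonate the sample in a sandbox
    and observe its behaviour; here we just look for a marker string."""
    return b"exec" in payload

def inspect(payload: bytes):
    digest = hashlib.sha256(payload).hexdigest()
    if digest in KNOWN_SIGNATURES:
        return "blocked (known signature)"
    if anomaly_score(payload) > 0.3 and behaves_maliciously(payload):
        KNOWN_SIGNATURES.add(digest)          # new signature, shared with other sensors
        return "blocked (new signature generated)"
    return "allowed"

print(inspect(b"GET /index.html HTTP/1.1"))
print(inspect(bytes([0x90] * 40) + b"exec\x00\xff\xfe"))
print(inspect(bytes([0x90] * 40) + b"exec\x00\xff\xfe"))   # now caught by signature
```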
Practical Attacks on Virtual Worlds
Abstract
Virtual Worlds (VWs) are immensely popular online environments where users interact in real time via digital beings (avatars). However, a number of security issues affect VWs, and they are vulnerable to a range of attacks on their infrastructure and communications channels. Their powerful architecture can also be used to mount attacks against live Real World servers by using malicious VW objects. Researching these attacks in commercial VWs would not be acceptable, as it would be contrary to the terms and conditions which govern acceptable behaviour in a particular VW. So in this paper, attacks were conducted and analysed in a laboratory-based test bed VW implementation developed specifically for the research, with custom-built attack and analysis tools; commercial VWs were used for data gathering only. Results of these experiments are presented, and appropriate countermeasures are proposed which could reduce the likelihood of the attacks succeeding in live VWs.
Graham Hili, Sheila Cobourne, Keith Mayes, Konstantinos Markantonakis
TabsGuard: A Hybrid Approach to Detect and Prevent Tabnabbing Attacks
Abstract
Phishing is one of the most prevalent types of modern attacks, costing significant financial losses to enterprises and users each day. Despite the emergence of various anti-phishing tools, not only has there been a dramatic increase in the number of phishing attacks, but more sophisticated forms of these attacks have also come into existence. One of these forms is the tabnabbing attack. Tabnabbing takes advantage of the user’s trust in and inattention to the open tabs in the browser and changes the appearance of an already open malicious page to that of a trusted website. Existing tabnabbing detection and prevention techniques block scripts that are likely to perform malicious actions or violate the browser security policy. However, most of these techniques cannot effectively prevent the script-free variant of the tabnabbing attack. In this paper, we introduce TabsGuard, an approach that combines heuristics and a machine-learning technique to keep track of the major changes made to the layout of a webpage whenever a tab loses its focus. TabsGuard is developed as a browser extension and evaluated against the top 1,000 trusted websites from Alexa. The results of our evaluation show a significant improvement over existing techniques. Finally, TabsGuard can be deployed as an extension service, providing a viable means of protecting against tabnabbing attacks.
Hana Fahim Hashemi, Mohammad Zulkernine, Komminist Weldemariam
Towards a Full Support of Obligations in XACML
Abstract
Policy-based systems rely on the separation of concerns, by implementing independently a software system and its associated security policy.
XACML (eXtensible Access Control Markup Language) proposes a conceptual architecture and a policy language to reflect this ideal design of policy-based systems. However, while rights are well captured by authorizations, duties, also called obligations, are not well managed by the XACML architecture. The current version of XACML lacks (1) a well-defined syntax to express obligations and (2) a unified model to handle decision making with respect to obligation states and the history of obligation fulfillment/violation. In this work, we propose an extension of the XACML reference model that integrates obligation states into the decision-making process. We have extended the XACML language and architecture for better obligation support and have shown how obligations are managed in our proposed extended XACML architecture: OB-XACML.
Donia El Kateb, Yehia ElRakaiby, Tejeddine Mouelhi, Iram Rubab, Yves Le Traon
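OB-XACML’s syntax and architecture are not shown in the abstract; the toy decision point below only illustrates the general idea of letting obligation states and fulfillment history influence decisions. The states, API and policy are invented.

```python
from enum import Enum

class ObligationState(Enum):
    PENDING = "pending"
    FULFILLED = "fulfilled"
    VIOLATED = "violated"

class ObligationAwarePDP:
    """Toy decision point: a request is permitted only if the subject has no
    violated obligations on record (a simplification of feeding obligation
    state and history back into decision making)."""

    def __init__(self):
        self.obligations = {}          # (subject, obligation_id) -> ObligationState

    def attach(self, subject, obligation_id):
        self.obligations[(subject, obligation_id)] = ObligationState.PENDING

    def report(self, subject, obligation_id, fulfilled: bool):
        state = ObligationState.FULFILLED if fulfilled else ObligationState.VIOLATED
        self.obligations[(subject, obligation_id)] = state

    def decide(self, subject, resource, action):
        violated = any(s == ObligationState.VIOLATED
                       for (subj, _), s in self.obligations.items() if subj == subject)
        return "Deny" if violated else "Permit"   # base policy assumed permissive

pdp = ObligationAwarePDP()
pdp.attach("alice", "delete-report-after-read")
print(pdp.decide("alice", "report.pdf", "read"))      # Permit (obligation still pending)
pdp.report("alice", "delete-report-after-read", fulfilled=False)
print(pdp.decide("alice", "report.pdf", "read"))      # Deny (history of violation)
```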
Managing Heterogeneous Access Control Models Cross-Organization
Abstract
Business process collaboration has gained a lot of attention due to the great need for integrating the business processes of different organizations. A key issue in securing this collaboration is the use of an access control model. However, the diversity of access control models makes cross-organization collaboration more complex, especially when each organization refuses to change its security policies, prefers to preserve its access control model and needs to protect its information assets. To address this problem, we propose a flexible architecture based on the Attribute Based Access Control (ABAC) model that accommodates heterogeneous access control models across organizations and relies on a collaboration contract specified between these organizations. To validate our approach we have used web services technology and have implemented a prototype based on the open source WSO2 platform.
Samira Haguouche, Zahi Jarir
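A minimal sketch of contract-driven, attribute-based evaluation in the spirit of the abstract (not the WSO2-based prototype): requests carry attributes, a collaboration contract is a list of attribute conditions with effects, and evaluation is default-deny. The attribute names are invented.

```python
def evaluate(request, contract_rules):
    """Return a rule's effect if all of its attribute conditions are
    satisfied by the request; otherwise fall through to default-deny."""
    for rule in contract_rules:
        if all(request.get(attr) == value for attr, value in rule["conditions"].items()):
            return rule["effect"]
    return "Deny"   # default-deny across organizations

# A contract clause agreed between org A and org B (invented attributes):
contract = [{"conditions": {"subject.org": "B", "subject.role": "auditor",
                            "resource.type": "invoice", "action": "read"},
             "effect": "Permit"}]

request = {"subject.org": "B", "subject.role": "auditor",
           "resource.type": "invoice", "action": "read"}
print(evaluate(request, contract))                           # Permit
print(evaluate({**request, "action": "write"}, contract))    # Deny
```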
ISER: A Platform for Security Interoperability of Multi-source Systems
Abstract
Multi-source systems have become a crucial infrastructure for the organization of modern information systems. This distributed environment enables the different participants to collaborate, exchange data and interact with one another in order to achieve a global goal. However, security issues such as the malicious use of resources, disclosure of data or misbehaving services can arise during this collaboration.
In this paper, a new platform is proposed that ensures secure interoperability between multi-source systems. It is based on the choice, integration and update of three existing tools in order to (1) provide secure virtualization of guest systems, (2) create, model and manage secure interoperability between systems, (3) verify the security policies and (4) monitor system behavior. A case study is presented to illustrate the application of the platform.
Khalifa Toumi, Fabien Autrel, Ana Cavalli, Sammy Haddad
Key Extraction Attack Using Statistical Analysis of Memory Dump Data
Abstract
During the execution of a program, the keys for encryption algorithms are in the random access memory (RAM) of the machine. Technically, it is easy to extract the keys from a dumped image of the memory. However, not many examples of such key extraction exist, especially during program execution. In this paper, we present a key extraction technique and confirm its effectiveness by implementing the Process Peeping Tool (PPT) – an analysis tool – that can dump the memory during the execution of a target program and help the attacker deduce the encryption keys through statistical analysis of the memory contents. Utilising this tool, we evaluate the security of two sample programs, which are built on top of the well-known OpenSSL library. Our experiments show that we can extract both the private key of the RSA asymmetric cipher and the secret key of the AES block cipher.
Yuto Nakano, Anirban Basu, Shinsaku Kiyomoto, Yutaka Miyake
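PPT itself is not described in detail in the abstract; a common statistical heuristic for locating key material in a dump is a sliding-window entropy scan, sketched below. The window size, step and threshold are illustrative.

```python
import math, os
from collections import Counter

def shannon_entropy(window: bytes) -> float:
    counts = Counter(window)
    total = len(window)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def candidate_key_offsets(dump: bytes, window=32, threshold=4.5):
    """Slide a window over a memory dump and report offsets whose byte
    entropy is high enough to look like key material rather than code/text."""
    return [i for i in range(0, len(dump) - window, 16)
            if shannon_entropy(dump[i:i + window]) >= threshold]

# Fake "dump": mostly ASCII padding with a 32-byte random blob (the "key") inside.
dump = b"A" * 4096 + os.urandom(32) + b"B" * 4096
print(candidate_key_offsets(dump))   # typically [4096]: the random blob stands out
```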
How Robust is the Internet? – Insights from Graph Analysis
Abstract
The importance of the Internet as today’s communication and information medium can hardly be overstated. Reduced Internet reliability can lead to significant financial losses for businesses and economies. But how robust is the Internet with respect to failures, accidents, and malicious attacks? We investigate this question from the perspective of graph analysis. First, we develop a graph model of the Internet at the level of Autonomous Systems based on empirical data. Then, a global assessment of Internet robustness is conducted with respect to several failure and attack modes. Our results indicate that even today the Internet could be very vulnerable to smart attack strategies.
Annika Baumann, Benjamin Fabian
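The empirical AS-level dataset is not part of the abstract; the sketch below reproduces the style of analysis on a scale-free stand-in graph, comparing random node failures with a degree-targeted ("smart") attack by the size of the surviving giant component (requires networkx).

```python
import random
import networkx as nx

def giant_fraction(g):
    """Fraction of surviving nodes that lie in the largest connected component."""
    return max(len(c) for c in nx.connected_components(g)) / g.number_of_nodes()

def attack(g, fraction, targeted):
    g = g.copy()
    k = int(fraction * g.number_of_nodes())
    if targeted:   # remove the highest-degree nodes first ("smart" attack)
        victims = sorted(g.nodes, key=lambda n: g.degree(n), reverse=True)[:k]
    else:          # random failures
        victims = random.sample(list(g.nodes), k)
    g.remove_nodes_from(victims)
    return giant_fraction(g)

# Scale-free stand-in for the AS graph (the real topology is built from BGP data).
g = nx.barabasi_albert_graph(2000, 2, seed=42)
print("random failures, 5% removed:", round(attack(g, 0.05, targeted=False), 2))
print("targeted attack, 5% removed:", round(attack(g, 0.05, targeted=True), 2))
```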
Regularity Based Decentralized Social Networks
Abstract
Centralized online social networks (OSNs) have drawbacks, chief among which are the risks posed to the security and privacy of the information maintained by them, and the loss of control over the information contributed by their members. Attempts to create decentralized OSNs (DOSNs) enable each member of an OSN to keep its own data under its control, instead of surrendering it to a central place, and to provide its own access-control policy. However, they are unable to subject the membership of a DOSN, and the interaction between its members, to any global policy. We adopt this decentralization, complementing it with a means for scalably specifying and enforcing regularities over the membership of a community and over the interaction between its members.
Zhe Wang, Naftaly H. Minsky
Online Privacy: Risks, Challenges, and New Trends
Abstract
Being on the Internet implies constantly sharing information, personal or not. Nowadays, preserving privacy is not an easy feat: technology is growing too fast, leaving legislation far behind and the level of security awareness is insufficient. Websites and Internet services are collecting personal data with or without the knowledge or consent of users. Not only does new technology readily provide an abundance of methods for organizations to gather and store information, people are also willingly sharing data with increasing frequency, exposing their intimate lives on social media websites. Online data brokers, search engines, data aggregators, geolocation services and many other actors on the web are monetizing our online presence for their own various purposes. Similarly, current technologies including digital devices such as smartphones, tablets, cloud computing/SaaS, big data, BYOD are posing serious problems for individuals and businesses alike. Data loss is now a common event and the consequences are exceedingly damaging. Although there are means at our disposal to limit or at least acknowledge how and what we’re sharing, most do not avail themselves of these tools and so the current situation remains unacceptable. Many privacy enhancing technologies (PETs) have been available for some time, but are not effective enough to prevent re-identification and identity theft.
Esma Aïmeur
Data Anonymization
Abstract
Database privacy means different things depending on the context. Here we deal with protecting the privacy of data subjects/respondents by anonymizing their data records: the scenario is a data collector who wants to release useful information while preserving the privacy of data subjects/respondents. We consider the various types of data releases, analyze their privacy implications and review the statistical disclosure control techniques in use.
Josep Domingo-Ferrer, Jordi Soria-Comas
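The chapter surveys many statistical disclosure control techniques; as one small concrete example, the sketch below computes the k of a k-anonymous release over a chosen set of quasi-identifiers. The records and generalization levels are invented.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the k of a table: the size of the smallest group of records
    sharing the same combination of quasi-identifier values."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

records = [
    {"age": "30-39", "zip": "080**", "disease": "flu"},
    {"age": "30-39", "zip": "080**", "disease": "cancer"},
    {"age": "40-49", "zip": "081**", "disease": "flu"},
    {"age": "40-49", "zip": "081**", "disease": "asthma"},
]
print(k_anonymity(records, ["age", "zip"]))   # 2: every (age, zip) group has >= 2 records
```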
Security of the Android Operating System
Abstract
Modern smartphones have become an everyday part of our lives. Checking email, browsing the Internet, taking photographs and navigating are all carried out with the help of smartphones. This is possible because mobile phones have been equipped with many useful functions.
Yury Zhauniarovich
Backmatter
Metadata
Title
Risks and Security of Internet and Systems
Edited by
Javier Lopez
Indrajit Ray
Bruno Crispo
Copyright Year
2015
Electronic ISBN
978-3-319-17127-2
Print ISBN
978-3-319-17126-5
DOI
https://doi.org/10.1007/978-3-319-17127-2