
2012 | Book

Networked Digital Technologies

4th International Conference, NDT 2012, Dubai, UAE, April 24-26, 2012. Proceedings, Part I


About this Book

This two-volume-set (CCIS 293 and CCIS 294) constitutes the refereed proceedings of the International Conference on Networked Digital Technologies, NDT 2012, held in Dubai, UAE, in April 2012. The 96 papers presented in the two volumes were carefully reviewed and selected from 228 submissions. The papers are organized in topical sections on collaborative systems for e-sciences; context-aware processing and ubiquitous systems; data and network mining; grid and cloud computing; information and data management; intelligent agent-based systems; internet modeling and design; mobile, ad hoc and sensor network management; peer-to-peer social networks; quality of service for networked systems; semantic Web and ontologies; security and access control; signal processing and computer vision for networked systems; social networks; Web services.

Table of Contents

Frontmatter

Collaborative Systems for E-Sciences

Developing Casual Learning Games Using the Apache Pivot IIA Capabilities

Game-based learning is becoming a popular academic necessity for 21st-century education. However, facilitators still face many challenges and obstacles in fully incorporating games into the educational process and in delivering the games to students efficiently. This paper introduces Apache Pivot as a simplified environment for developing casual learning games. Two steps were taken to demonstrate the simplicity of using the Pivot environment for building educational games. The first involves the development of a generic dashboard, which the second step then uses to generate specific educational games such as a learning crossword.

Jinan Fiaidhi, Michael D. Rioux, Sabah Mohammed, Tai hoon Kim
Remote Robotic Laboratory Experiment Platform Based on Tele-programming

Remote laboratory experimentation is a technique used in modern engineering laboratories to help academic researchers and students perform laboratory experiments remotely over the Internet. Remote control of experiments is gaining importance in training and education. However, remote real-time training on instrument programming still has some unresolved problems, such as error management. In this paper, a platform for training students in remote robotic experiment control through tele-programming is presented. Programming sessions can be carried out by the trainee at several levels of control, with built-in error management to avoid system freezing or malfunction. We also present a real experiment: programming the navigation control of a mobile robot in the presence of obstacles using fuzzy control.

Chadi Riman, Eric Monacelli, Imad Mougharbel, Ali El-Hajj
Understanding Simple Stories through Concepts Extraction and Multimedia Elements

Teaching children with intellectual disabilities is a challenging task. Instructors use different methodologies and techniques to introduce the concepts of lessons to the children, motivate them and keep them engaged. These methodologies include reading texts, showing pictures and images, touching items, taking children to sites to see and understand, and even using the senses of taste and smell (e.g., hot, cold). The objective of this work is to develop a system that can help children with special needs improve their understanding of simple stories in the animals-and-foods domain through multimedia technology. We use formal concept analysis and a simple ontology to extract the keywords representing characters, actions and objects from the story text and link them with the corresponding multimedia elements (i.e., images, sounds and clips). These elements are retrieved by querying Google's database using the Google APIs. The instructors then validate the obtained results and select what is appropriate for the children in the classroom. The system allows the instructor to input the story text and get the multimedia story as output.

Masoud Udi Mwinyi, Sahar Ahmad Ismail, Jihad M. Alja’am, Ali M. Jaoua

Context-Aware Processing and Ubiquitous Systems

Extended UML for the Development of Context-Aware Applications

In a pervasive environment, systems and applications are influenced by several factors such as mobility, heterogeneity and distribution. A new application should therefore be able to adapt its services to changes in the context of use and to satisfy users' preferences. In this work we present a UML extension for representing and modeling context, because UML does not adequately support all aspects of the context of use. This extension is defined through UML's extensibility mechanisms and is presented as a set of new tools for the Unified Modeling Language. The proposed extension is based on UML notation and permits a specific graphical representation of a contextual situation. It also facilitates the extraction and modeling of all elements that can influence the user's current situation. Our proposal consists in creating stereotypes, described by several tagged values and constraints, that can be applied to the contextual model classes. We then use a class diagram to describe the different types of context and their relationships. A case study in the medical domain is presented, in which we propose a new contextual model including all the new stereotypes, built with the StarUML modeling platform.

Mohamed Salah Benselim, Hassina Seridi-Bouchelaghem
Semantic Aware Implication and Satisfiability for Conjunctive Queries in Semantic Caching

Finding satisfiability and implication results among queries is fundamental to several problems in databases, especially distributed databases. The known complexity of deciding the satisfiability of a term S is O(|S|³). Similarly, the complexity of deciding whether S implies T (S → T) is O(|S|³ + K), where |S| is the number of distinct predicate attributes in S and K is the number of predicate terms in T (S and T are conjunctive select-project-join (PSJ) queries). We show that with the introduction of Cross-Attribute Knowledge (CRA) these complexities are reduced to O(|S − CRA|³) for satisfiability and O(|S − CRA|³ + K) for implication.

Muhammad Azeem Abbas, Muhammad Abdul Qadir, Munir Ahmad, Tariq Ali
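The satisfiability and implication tests above operate on conjunctive predicate terms; a minimal sketch for the simple case of closed range predicates per attribute (ignoring the paper's cross-attribute knowledge optimization, which is not reproduced here) might look like:

```python
def satisfiable(q):
    """A conjunction of range predicates {attr: (lo, hi)} is
    satisfiable iff every interval is non-empty."""
    return all(lo <= hi for lo, hi in q.values())

def implies(s, t):
    """S -> T holds iff every predicate of T is entailed by S's
    tighter (or equal) interval on that attribute; an
    unsatisfiable S vacuously implies anything."""
    if not satisfiable(s):
        return True
    for attr, (tlo, thi) in t.items():
        if attr not in s:
            return False  # S does not constrain this attribute
        slo, shi = s[attr]
        if not (tlo <= slo and shi <= thi):
            return False
    return True
```

For example, `implies({"age": (30, 40)}, {"age": (20, 50)})` holds because the interval [30, 40] is contained in [20, 50], while the converse fails.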

Data and Network Mining

A Hierarchical Routing Protocols for Self-organizing Networks

Studies and technological developments on applying sensor networks built from low-cost, low-power wireless sensors to USNs are being actively carried out. Wireless sensor networks are changing from static network environments to active networks that must handle varied environments and rapid change, and in such environments the ability to collect and deliver data between sensor nodes is very important. Therefore, an autonomous, efficient network must be designed to link the sensor nodes together. This paper suggests routing protocols that can efficiently self-organize in the event of changes in sensor networks due to node obstruction and environmental factors, and demonstrates their practicality and efficiency. The research also measured energy consumption per node, showing energy-use reductions of 68.1% for GTR, 65.6% for ComHRP and 4.4% for CTR, and a comparison of the average number of hops per node for route establishment between all nodes showed a 21.44% efficiency gain.

Hoon Kwon, Ho-young Kwak, Sang-Joon Lee, Sung-Joon Lee
A Practical Method for Evaluating the Reliability of Telecommunication Network

The reliability of a network is defined as the probability that the system will perform its intended function without failure over a given period of time. Computing network reliability is an NP-hard problem, which requires efficient evaluation techniques. This paper presents a network reliability evaluation algorithm using Binary Decision Diagrams (BDDs). The solution considers the 2-terminal reliability measure and proceeds by first enumerating the minimal path sets, from which a BDD is generated. The algorithm has been implemented in Java and MatLab and tested on a real radio telecommunication network. The results of this application show that the program requires neither a large amount of memory nor excessive running time.

Mohamed-Larbi Rebaiaia, Daoud Ait-Kadi, Denis Page
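The 2-terminal measure computed via BDDs can be cross-checked on small networks by inclusion-exclusion over the minimal path sets; this brute-force sketch (exponential in the number of paths, which is precisely the blow-up BDDs avoid) assumes independent link failures:

```python
from itertools import combinations

def two_terminal_reliability(paths, p):
    """2-terminal reliability from minimal path sets by
    inclusion-exclusion: the terminals are connected iff at least
    one minimal path has all of its links up.
    paths: list of sets of link names; p: dict link -> up-probability."""
    total = 0.0
    for k in range(1, len(paths) + 1):
        for combo in combinations(paths, k):
            links = set().union(*combo)  # event: all these links up
            prob = 1.0
            for link in links:
                prob *= p[link]
            total += (-1) ** (k + 1) * prob
    return total
```

For two disjoint two-link paths with every link up with probability 0.9, this gives 0.81 + 0.81 − 0.9⁴ = 0.9639, matching 1 − (1 − 0.81)².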
Combining Classifiers for Spam Detection

Nowadays, e-mail has become a fast and economical way to exchange information. However, unsolicited or junk e-mail, also known as spam, quickly became a major problem on the Internet, and keeping users away from it has become an important research area. Spam filtering is used to prevent access to undesirable e-mails. In this paper we propose a spam detection system called "3CA&1NB", which uses machine learning to detect spam. "3CA&1NB" combines three cellular automata and one naïve Bayes classifier. We discuss how combining learning-based methods can improve detection performance. Our preliminary results show that the system detects spam effectively.

Fatiha Barigou, Naouel Barigou, Baghdad Atmani
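The paper's specific 3CA&1NB combination is not reproduced here, but the general idea of combining base learners can be sketched as a simple majority vote over arbitrary classifier functions (a hypothetical interface; the actual combination scheme may differ):

```python
from collections import Counter

def majority_vote(classifiers, message):
    """Combine base classifiers by majority vote over their labels;
    on a tie, fall back to the first classifier's label (an
    arbitrary tie-breaking choice)."""
    votes = [clf(message) for clf in classifiers]
    counts = Counter(votes).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return votes[0]
    return counts[0][0]
```

With three base classifiers voting "spam", "spam", "ham", the combiner returns "spam".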
Exploring a New Small-World Network for Real-World Applications

Emergent methods for self-organizing a new type of Small-World (SW) network with a smaller average path length than that of conventional small-world networks are presented. One method is inspired by an Ant-Colony Optimization (ACO) algorithm, and the other is based on a weighted Monte-Carlo generation method for random graphs. The network architecture common to both methods is a multi-star network, which yields a large clustering coefficient and the shortest average path length among conventional complex networks such as the Watts-Strogatz and Barabási-Albert models, according to both theoretical and experimental analysis of their properties. Given the advantageous properties of the multi-star network in real-world applications, it could be used to analyze human networks in social networking services such as Twitter and blogs. Another possible application is in logistics: for example, an airline network organized this way could become more efficient and convenient than the current one, with fewer transits and a shorter average cruising distance from any starting point to any destination on Earth.


Hidefumi Sawai
Fast Algorithm for Deep Packet Inspection

Efficiently matching multiple keywords against a stream of input is still an important task. It is essential for deep packet inspection, where multi-keyword matching has to be done at wire speed. While the Aho-Corasick algorithm is one of the most widely used algorithms, its efficiency depends on the implementation of the required data structures and the goto function. In this paper, an optimized implementation of the trie of the Aho-Corasick algorithm is presented. The key idea is to use a prime-number signature to reduce the number of lookups. The performance of the suggested algorithm, in space and time, is compared with different implementations in use today.

Salam Barbary, Hikmat Farhat, Khalil Challita
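For reference, a plain Aho-Corasick matcher (without the prime-number signature optimization the paper proposes) can be written compactly; this sketch builds the goto trie, failure links and output sets, then scans the text in a single pass:

```python
from collections import deque

def aho_corasick(patterns, text):
    """Multi-pattern matching; returns (end_index, pattern) pairs."""
    goto, out = [{}], [[]]          # trie edges and outputs per node
    for pat in patterns:            # build the goto trie
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto[node][ch] = len(goto)
                goto.append({})
                out.append([])
            node = goto[node][ch]
        out[node].append(pat)
    fail = [0] * len(goto)          # failure links, built by BFS
    q = deque(goto[0].values())
    while q:
        r = q.popleft()
        for ch, u in goto[r].items():
            q.append(u)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[u] = goto[f].get(ch, 0)
            out[u] += out[fail[u]]  # inherit matches from fail target
    matches, node = [], 0           # single pass over the text
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        for pat in out[node]:
            matches.append((i, pat))
    return matches
```

On the classic example, matching {"he", "she", "his", "hers"} against "ushers" reports "she" and "he" ending at index 3 and "hers" ending at index 5.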
1+N Orthogonal Encoding with Multiple Failure Tolerance (1+N OEMFT)

This paper proposes a mechanism for protecting data networks, called 1+N OEMFT, capable of protecting a set of connections against multiple-failure scenarios on the network links. The mechanism guarantees the protection of a connection provided that the destination is not completely isolated from the network by the failures that occur. The protection scheme uses the concept of 1+N protection proposed in [16], [10] and [19], but unlike those approaches the data of each connection is encoded without the XOR operation; instead, a matrix of orthogonal vectors is used, and the encoded data is clustered into a single message. Additionally, the scheme eliminates the convergence time imposed in [16], [10] and [19], allowing recovery as soon as the destination detects the fault.

José-Alejandro Niño-Mora, Yezid Donoso
The BASRAH System: A Method for Spoken Broadcast News Story Clustering

In the current study, the BASRAH system was used to calculate confidence measures (CMs) and then use them to designate individual words produced by an automatic speech recognition (ASR) system as either accepted or rejected. This information about a recognized word can be used to reduce the impact of ASR transcription errors on retrieval performance. The system can also process multilingual broadcasts, which is more challenging than dealing with a single language. The BASRAH system provides CMs for ASR output on large data sets based on word acoustic scores. In a case study, we successfully used the BASRAH system first to calculate CMs to clean up the transcription of spoken multilingual (English and Malay) broadcast news, and then to identify the boundaries of the broadcast news stories.

Zainab A. Khalaf Aleqili
Unsupervised Clustering Approach for Network Anomaly Detection

This paper describes the advantages of using the anomaly detection approach over the misuse detection technique in detecting unknown network intrusions or attacks. It also investigates the performance of various clustering algorithms when applied to anomaly detection. Five clustering algorithms are used: k-Means, improved k-Means, k-Medoids, EM clustering and distance-based outlier detection. Our experiment shows that misuse detection techniques, implemented with four different classifiers (naïve Bayes, rule induction, decision tree and nearest neighbour), failed on network traffic containing a large number of unknown intrusions: the highest accuracy was only 63.97% and the lowest false positive rate was 17.90%. On the other hand, the anomaly detection module showed promising results, with the distance-based outlier detection algorithm outperforming the others at an accuracy of 80.15%. The accuracy was 78.06% for EM clustering, 76.71% for k-Medoids, 65.40% for improved k-Means and 57.81% for k-Means. Unfortunately, our anomaly detection module produces a high false positive rate (more than 20%) for all of the clustering algorithms. Therefore, our future work will focus on reducing the false positive rate and improving accuracy using more advanced machine learning techniques.

Iwan Syarif, Adam Prugel-Bennett, Gary Wills
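One common formulation of distance-based outlier detection, flagging points whose distance to their k-th nearest neighbour exceeds a threshold, can be sketched as follows (the paper's exact variant and parameter choices are not specified here):

```python
import math

def knn_outliers(points, k, threshold):
    """Flag point i as anomalous when the distance to its k-th
    nearest neighbour exceeds `threshold` (O(n^2) brute force;
    index structures are used in practice)."""
    flagged = []
    for i, p in enumerate(points):
        dists = sorted(
            math.dist(p, q) for j, q in enumerate(points) if j != i
        )
        if dists[k - 1] > threshold:
            flagged.append(i)
    return flagged
```

A point far from a tight cluster gets flagged because even its nearest neighbours are distant, while cluster members have small k-NN distances.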
BNITE: Bayesian Networks-Based Intelligent Traffic Engineering for Energy-Aware NGN

Network Management Systems (NMS) are used to monitor the network and maintain its performance, with a prime focus on guaranteeing sustained QoS for services. However, another aspect that must be given due importance is the energy consumption of the network elements, especially during off-peak periods. This paper proposes and implements the idea of energy-aware network management, in which the NMS makes the network energy efficient by predictively turning network elements to sleep mode when they are underutilized. To this end, it designs and evaluates a Bayesian Networks (BN) based Intelligent Traffic Engineering (BNITE) solution, which provides intelligent decisions to the NMS so that it can adaptively alter the operational modes of the network elements with minimum compromise in network performance and QoS guarantees. Energy-aware traffic engineering algorithms are developed for both stand-alone (single router) and centralised (multiple routers) scenarios as a proof of concept. Simulated network experiments using NCTUns and Hugin Researcher demonstrate the feasibility and practicality of the proposed solution. Significant energy savings with minimal degradation in QoS metrics demonstrate the benefits of the BNITE solution for real-world networks such as the NGN.

Abul Bashar

Grid and Cloud Computing

Advance Planning and Reservation in a Grid System

Advance planning and reservation in a grid system allows applications to request resources from multiple scheduling systems at a specific future time and thus gain simultaneous access to sufficient resources for their execution. Existing advance reservation strategies reject an incoming reservation if the requested resources are not available at that exact time; the impact of advance reservations is therefore decreased resource utilization due to fragmentation. This paper proposes a novel advance planning and reservation strategy, First Come First Serve Ejecting-Based Dynamic Scheduling (FCFS-EDS), to increase resource utilization in a grid system. To achieve this, we introduce a new notion that maps a user job to virtual compute nodes (the logical view), which are subsequently mapped to actual compute nodes (the physical view) at execution time. A lemma ensures the success of such a mapping with increased resource utilization.

Rusydi Umar, Arun Agarwal, C. R. Rao
A Memoryless Trust Computing Mechanism for Cloud Computing

Trust management systems play an important role in identifying the quality of service in distributed systems. In cloud computing, trust systems can be used to identify service providers who would meet the requirements of customers. Several trust computing mechanisms based on various trust metrics have been proposed in the literature. Most of these systems compute trust scores incrementally from previous values. This is a major vulnerability that adversaries can exploit to attack the system, forcing the trust scores towards extreme values. In this paper, the authors present a memoryless trust computing mechanism that is immune to such attacks. The proposed mechanism does not depend on previous trust scores; hence it cannot be forced towards extreme values by repeated requests. Simulation experiments show that the trust scores computed using the proposed mechanism are more representative and more stable under attack than those of other systems.

Mohamed Firdhous, Osman Ghazali, Suhaidi Hassan
Cost-Aware Performance Modeling of Multi-tier Web Applications in the Cloud

Typical web applications employ a multi-tier architecture. Traditionally, a pool of physical servers is used to host web applications. To handle the dynamic workloads that characterize today's web applications, several authors have proposed schemes for dynamic resource provisioning; such schemes add servers during peak loads and remove them at other times. Advances in cloud computing technologies have created new opportunities for real-time dynamic provisioning. The elastic nature of cloud computing systems allows system administrators to quickly scale resources in response to unexpected load changes. In such systems, dynamic provisioning is not only concerned with meeting Service Level Agreements, but must also take monetary costs into account. In this paper, we exploit performance modeling in the context of cloud computing (Amazon EC2). Such performance models enable understanding of the trade-off between performance and cost, a cornerstone of developing dynamic provisioning performance management schemes.

Issam Al-Azzoni, Derrick Kondo
Cyber Security: Vulnerabilities and Solutions for Telemedicine over Cloud Architectures

The acceptance of the Internet as a ubiquitous mechanism for ICT activities has revolutionized e-applications. Cloud computing is a new discipline that exploits the virtual environment over the Internet for applications. Cloud service providers offer large infrastructure, computing power and software services configurable by the client at low upfront cost. But the vulnerability of the cyber domain has led to serious client security concerns regarding services and data management. Cyber security mechanisms have evolved over the past two decades to handle these problems at the data transport and operation levels. To enhance user confidence in cloud computing, end-to-end security, data integrity, proof of retrievability (POR) and third-party audit (TPA), along with forensic methods, have been proposed. The security solutions offered by the Trusted Platform Group (TPG) can be used for cloud computing through integration with Trusted Platform Modules (TPMs) embedded in client and server machines. The scheme addresses the physical security concerns posed by portable data access units such as mobiles, notebooks, iPads and laptops. In this paper we propose a cyber security solution for telemedicine over cloud architectures based on available resources.

Shaftab Ahmed, Azween Abdullah, M. Yasin Akhtar Raja
High-Level Abstraction Layers for Development and Deployment of Cloud Services

Cloud computing has emerged in recent years and, despite the illusion of unlimited resources and rich features, is still considerably complex to use. Not only is migration between clouds difficult, but the development and deployment of a new, complex service from scratch can also be quite a challenge with today's cloud computing. In this paper, new high-level abstraction layers for cloud computing are presented. The abstraction layers allow users to manage cloud resources from various clouds and simplify the process of developing and deploying services into those clouds. This approach also addresses the interoperability issue, thus improving the flexibility of cloud computing.

Binh Minh Nguyen, Viet Tran, Ladislav Hluchy
TPC-H Benchmark Analytics Scenarios and Performances on Hadoop Data Clouds

NoSQL systems rose alongside Internet companies, which faced challenges in dealing with data that traditional RDBMS solutions could not cope with. In order to handle the continuous growth of data, NoSQL alternatives feature dynamic horizontal scaling rather than vertical scaling. To date, few studies address OLAP benchmarking of NoSQL systems. This paper overviews NoSQL and adjacent technologies and evaluates Hadoop/Pig using the TPC-H benchmark through two different cloud scenarios. The first scenario assumes that data is stored on a data cloud and business questions are routed to the cloud for processing, while the second assumes pre-summarized data calculus in a first step and multidimensional analysis in a second step. Finally, the paper reports thorough performance tests on Hadoop for various data volumes, workloads and cluster sizes.

Rim Moussa

Information and Data Management

An Incremental Correction Algorithm for XML Documents and Single Type Tree Grammars

XML documents are an integral part of the contemporary Web. Unfortunately, a relatively high number of them are affected by well-formedness errors, structural invalidity or data inconsistencies. The purpose of this paper is to continue our previous work on a correction model for invalid XML documents with respect to schemata in the DTD and XML Schema languages. Contrary to other existing approaches, our model ensures that we are always able to find all minimal repairs. The contribution of this paper is the description and experimental evaluation of our new incremental algorithm, which is able to efficiently follow only prospective correction paths, even into recursive depths.

Martin Svoboda, Irena Mlýnková
Distributed RFID Shopping System

RFID technology has recently made significant advances in the domain of retail sales. This paper presents a novel approach to the use of RFID technology in this field. In contrast to the system architectures used in similar research projects, a distributed architecture and a suitable design are proposed. Users will be able to scan their purchased products by putting them in the shopping cart, view their current bill on the cart's touchscreen, and get directions in the shopping area. The motivation behind this approach is to give customers more flexibility and control over their shopping carts and to let them benefit from information about the products and aisles of the shopping space. Additionally, it enables the storage of customer transaction and location data, making it available for data mining purposes.

Amine Karmouche, Yassine Salih Alj
Efficient Detection of XML Integrity Constraints Violation

Knowledge of the integrity constraints (ICs) covered by XML data is an important aspect of efficient data processing. However, even when ICs are defined, it is common for the respective data to violate them; hence the need to detect these inconsistencies and subsequently repair them. This paper extends and refines recent approaches to repairing XML documents that violate a defined set of ICs, specifically so-called functional dependencies. The work proposes a repair algorithm incorporating a weight model and user interaction into the process of detecting and then applying appropriate repairs to inconsistent XML documents. Experimental results are included.

Michal Švirec, Irena Mlýnková
Encoding Spectral Parameters Using Cache Codebook

A new efficient approach to quantizing the line spectral frequencies (LSF) in a coder is proposed. Using the full-search algorithm for spectral parameter quantization causes high complexity and large hardware storage requirements. Attempts to reduce the complexity have been made by lowering the size of the LSF codebook, but this leads to a sub-optimal solution: the number of LSF vectors tested affects the performance of the speech coder. The cache codebook (CCB) technique enhances the search for the optimal quantized spectral information. In this technique the size of the main codebook is kept unchanged while the number of closest-match searches is reduced. Unlike the classical quantizer design, the CCB method involves one main codebook embedding four disjoint sub-codebooks; the content of the CCB at any time is an exact reproduction of one of the four sub-codebooks. The search for the best match to an input vector is limited to the LSF vectors of the CCB, and criteria are used to accept or reject this closest match. The CCB is updated whenever the decision is a rejection. The cache codebook was successfully embedded in a CELP coder to enhance the quantization of the spectral information. Comparative simulation results show that the codebook caching approach yields objective and subjective performance comparable to that of the optimal full-search technique on the same training and testing database.

Driss Guerchi, Siwar Rekik
Incrementally Optimized Decision Tree for Mining Imperfect Data Streams

The Very Fast Decision Tree (VFDT) is one of the most important classification algorithms for real-time data stream mining. However, imperfections in data streams, such as noise and imbalanced class distributions, do exist in real-world applications, and they jeopardize the performance of VFDT. Traditional sampling techniques and post-pruning may be impractical for a never-ending data stream. To deal with the adverse effects of imperfect data streams, we have devised an incremental optimization model that can be integrated into the decision tree model for data stream classification. Called the Incrementally Optimized Very Fast Decision Tree (I-OVFDT), it balances performance (prediction accuracy, tree size and learning time) and dynamically reduces error and tree size. Furthermore, two new Functional Tree Leaf strategies are developed for I-OVFDT that result in superior performance compared to VFDT and its variant algorithms. Our new model works especially well for imperfect data streams. I-OVFDT is an anytime algorithm that can be integrated into existing VFDT-extended algorithms based on the Hoeffding bound for node splitting. The experimental results show that I-OVFDT achieves higher accuracy and more compact tree size than other existing data stream classification methods.

Hang Yang, Simon Fong
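VFDT-family learners such as I-OVFDT decide node splits using the Hoeffding bound, which bounds the deviation ε between the true and sample mean of a variable with range R that holds with probability 1 − δ after n observations; a one-line sketch:

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)): with probability
    1 - delta the true mean lies within epsilon of the sample
    mean of n observations of a variable with range R."""
    return math.sqrt(value_range ** 2 * math.log(1.0 / delta) / (2 * n))
```

The bound shrinks as 1/√n, so quadrupling the number of observed examples halves ε; a split is made once the observed gain difference between the two best attributes exceeds ε.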
Knowledge Representation Using LSA and DRT Rules for Semantic Search of Documents

Search engines are supposed to return all documents relevant to the user. Existing search engines read text as a sequence of words without meaning; the search is then limited to finding documents that contain the same words as the user's query, which reduces the relevance of the returned documents.

In this work, we propose a semantic search engine that represents the user's information need by a relevance pattern.

Our approach is characterized by the use of Latent Semantic Analysis (LSA) to find semantically correlated words; semantic links are then created using Discourse Representation Theory (DRT), which makes it possible to translate a sentence from natural language into a logical representation. The links found are saved in an ontology.

Information retrieval is performed by comparing the knowledge extracted from each document to the relevance pattern of the user's request. This allows better identification of information needs and restricts the set of documents returned.

Sofiane Allioua, Zizette Boufaida
Leaker Identification in Multicast Communication

Multicast is a communication mode in which data is exchanged among multiple end systems. An important concern in multicast communications is the protection of copyrights. While preventing copyright violations seems very difficult, tracing copyright violators (leaker identification) is more feasible and can serve as a deterrent. To identify leakers in a multicast environment, every receiver should obtain a uniquely marked copy of the data; however, delivering unique copies from the sender to the receivers is inefficient. This paper investigates multicast leaker identification and introduces an efficient solution based on a binary search tree. We introduce the notion of a suspicious set that contains suspected end systems. When a leak is detected, all receivers are inserted into the suspicious set; using a binary search, the suspicious set is refined successively until it contains only the leakers. An analytical study is conducted to evaluate the proposed solution.

Emad Eldin Mohamed, Driss Guerchi
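The suspicious-set refinement can be illustrated with a toy halving search: each candidate group is tested by a leak oracle (here a plain function standing in for the real watermark-based detection) and split until only single leakers remain:

```python
def find_leakers(receivers, leaks):
    """Refine the suspicious set by recursive halving: keep only
    groups the oracle `leaks` reports as leaking, splitting them
    until singletons remain. A single leaker among n receivers is
    found with O(log n) oracle tests."""
    suspicious = [list(receivers)]
    result = []
    while suspicious:
        group = suspicious.pop()
        if not leaks(group):
            continue             # this group is clean, discard it
        if len(group) == 1:
            result.append(group[0])
            continue             # isolated a leaker
        mid = len(group) // 2
        suspicious.append(group[:mid])
        suspicious.append(group[mid:])
    return sorted(result)
```

Because both halves of a leaking group are retested, the same search also isolates multiple leakers, at the cost of roughly O(L log n) tests for L leakers.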
Ratio-Based Gradual Aggregation of Data

The majority of databases contain large amounts of data gathered over long intervals of time. In most cases, the data is aggregated so that it can be used for analysis and reporting. Another reason for data aggregation is to reduce data volume in order to avoid over-sized databases that can cause data management and storage issues. However, inflexible and ineffective means of data aggregation not only reduce the performance of database queries but also lead to erroneous reporting. This paper presents flexible and effective ratio-based methods for gradual data aggregation in databases. Gradual data aggregation reduces data volume by converting detailed data into multiple levels of summarized data as the data gets older. The paper also describes implementation strategies for the proposed methods based on standard database technology.

Nadeem Iftikhar
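The idea of keeping recent data detailed while averaging older data into coarser time buckets can be sketched as follows (the bucket sizes, age cutoffs and use of averaging here are illustrative, not the paper's ratio definitions):

```python
def gradually_aggregate(rows, now, levels):
    """Gradual aggregation sketch: each (timestamp, value) row is
    assigned a bucket size by its age (levels is a list of
    (min_age, bucket_size), coarsest first; newer data keeps full
    resolution), then values are averaged per bucket. Returns
    {(bucket_start, bucket_size): mean_value}."""
    def bucket_size(age):
        for min_age, size in levels:
            if age >= min_age:
                return size
        return 1  # newest data stays at full resolution
    buckets = {}
    for ts, val in rows:
        size = bucket_size(now - ts)
        start = (ts // size) * size
        buckets.setdefault((start, size), []).append(val)
    return {k: sum(v) / len(v) for k, v in buckets.items()}
```

For example, with a single level collapsing data older than 50 time units into buckets of 10, two old readings at timestamps 10 and 12 merge into one averaged bucket while a recent reading stays detailed.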

Intelligent Agent-Based systems

An Adaptive Arbitration Algorithm for Fair Bandwidth Allocation, Low Latency and Maximum CPU Utilization

Using an adaptive algorithm for fair bandwidth allocation, low latency and maximum CPU utilization has proved to be a promising approach to designing systems-on-chip for future applications. Adaptive arbitration is more advantageous than other conventional arbitration algorithms for several reasons, including fair bandwidth allocation among different masters, simple design and low cost overhead. This article provides a comprehensive picture of research and development in dynamic arbitration algorithms that adapt to the different traffic behavior of masters. Papers published in standard journals are reviewed, classified according to their objectives and presented with a general conclusion.

M. Nishat Akhtar, Othman Sidek
Aggressive and Intelligent Self-Defensive Network
Towards a New Generation of Semi-autonomous Networks

Aggressive and Intelligent Self-defensive Network (AISEN) is an open-source distributed solution that aims at deploying a semi-autonomous network, which enables internal attack deception through misguidance and illusion. In fact, instead of simply preventing or stopping the attack as traditional Intrusion Prevention Systems (IPS) do, AISEN drives attackers to attack decoy machines, which clone victim machines by mimicking their personalities (e.g. OS, services running). On top of that, AISEN uses rogue machines that clone idle production machines and are able to detect human-driven zero-day attacks not seen by an IPS. The solution uses real-time dynamic high-interaction honeypot generation, and a novel rerouting scheme that is both router and network architecture independent, along with a robust troubleshooting algorithm for sophisticated attacks. Information captured and data gathered from these decoy machines will give CERTs/CSIRTs and forensic experts critical data relevant to the sophistication of the attack, the vulnerabilities targeted, and some means of preventing it in the future. This project reviewed former designs and similar studies addressing the same issues and emphasizes the added value of this open-source solution in terms of flexibility, ease of use and upgrade, deployment, and customization.

Because AISEN seamlessly integrates with Security Information and Event Management (SIEM) software, it goes far beyond standard IPS/IDS alerts. It actually listens for suspicious activities and uncommon behavior (e.g. port scanning in a communication department network) to detect suspicious activities that a normal user would not do. AISEN is designed to enable potential integration with passive Strike-back modules that may be achieved in later work.

Ali Elouafiq, Ayoub Khobalatte, Wassim Benhallam
A 2-Dimensional Cellular Automata Pseudorandom Number Generator with Non-linear Neighborhood Relationship

Until recently, research on two-dimensional (2-D) cellular automata (CA) pseudorandom number generators (PRNGs) has been based on the von Neumann neighborhood with a linear neighborhood relationship. Although the linear neighborhood relationship yields excellent randomness quality, its cycle length is shorter than that of the non-linear neighborhood relationship. Cycle length is important for a cryptographically secure PRNG because the next sequence must be unpredictable.

This paper proposes a 2-D CA PRNG based on the von Neumann method with a non-linear neighborhood relationship. In the proposed scheme, five elements (i.e. self, top, bottom, left and right) and two control elements (i.e. c1 and c2) are used with combinations of the Boolean operators AND, XOR, or OR. The evolution function chooses one of the combinations XOR & AND and XOR & OR according to the two control elements. The number of rules in the proposed scheme is higher than in previous schemes. To compare the proposed scheme with previous schemes, the ENT and DIEHARD test suites are used in the experiments. In the experimental results, the randomness quality of the proposed PRNG was slightly less than or much the same as that of previous schemes. However, the proposed scheme can generate various CA rule patterns, and its number of rules is higher than in previous schemes. The correlation coefficient between the global states G(t) and G(t + 1) of the proposed scheme is reduced thanks to the non-linear neighborhood relationship.
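A von Neumann neighborhood update with control-element rule selection can be sketched as follows. The two rule bodies are a guessed illustration of an "XOR & AND" versus "XOR & OR" combination, not the paper's exact rule family:

```python
def ca_step(grid, c1, c2):
    """One evolution step of a 2-D binary CA on a toroidal grid using
    the von Neumann neighborhood (self, top, bottom, left, right).
    The control bits c1, c2 select between an XOR & AND and an
    XOR & OR combination -- an illustrative rule family only."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = grid[i][j]
            t, b = grid[(i - 1) % n][j], grid[(i + 1) % n][j]
            l, r = grid[i][(j - 1) % n], grid[i][(j + 1) % n]
            if c1 ^ c2:
                new[i][j] = s ^ t ^ b ^ (l & r)   # XOR & AND flavour
            else:
                new[i][j] = s ^ t ^ b ^ (l | r)   # XOR & OR flavour
    return new
```

Iterating `ca_step` and reading out rows of the grid yields the pseudorandom bit stream; the control bits double the reachable rule space relative to a single fixed combination.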

Sang-Ho Shin, Dae-Soo Kim, Kee-Young Yoo
FURG Smart Games: A Proposal for an Environment to Game Development with Software Reuse and Artificial Intelligence

This paper presents a proposal for a game development environment that uses software reuse and artificial intelligence. The environment comprises a framework that implements the State design pattern, an editing tool, and object-oriented source code generation based on finite state machines. The main goal of this environment is to facilitate the implementation of the decision-making layer for NPCs (Non-Player Characters) in games.
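The State pattern driving such an NPC decision layer can be sketched in a few lines; the states and events here (`Idle`, `Attack`, `"enemy_seen"`) are made up for illustration:

```python
class State:
    """State design pattern for an NPC decision-making layer."""
    def handle(self, npc, event):
        raise NotImplementedError

class Idle(State):
    def handle(self, npc, event):
        if event == "enemy_seen":
            npc.state = Attack()      # transition of the finite state machine

class Attack(State):
    def handle(self, npc, event):
        if event == "enemy_lost":
            npc.state = Idle()

class NPC:
    def __init__(self):
        self.state = Idle()
    def notify(self, event):
        self.state.handle(self, event)  # behavior delegated to current state
```

An editing tool like the one proposed would generate exactly this kind of class-per-state skeleton from a drawn state machine, leaving only the per-state behavior to fill in.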

Carlos Alberto B. C. W. Madsen, Giancarlo Lucca, Guilherme B. Daniel, Diana F. Adamatti
Scalable Content-Based Classification and Retrieval Framework for Dynamic Commercial Image Databases

Large-scale commercial image databases are getting increasingly common and popular, and nowadays several services over them are offered via the Internet. They are truly dynamic in nature: new images, categories and visual descriptors can be introduced at any time. In order to address this need, in this paper we propose a scalable content-based classification and retrieval framework using a novel collective network of (evolutionary) binary classifiers (CNBC) system to achieve high classification and content-based retrieval performance over commercial image repositories. The proposed CNBC framework is designed to cope with incomplete training (ground truth) data and/or low-level features extracted in a dynamically varying image database, and thus the system can be evolved incrementally to adapt to changes immediately. Such self-adaptation is achieved by adopting a “Divide and Conquer” type approach: allocating an individual network of binary classifiers (NBCs) to discriminate each image category and performing an evolutionary search to find the optimal binary classifier (BC) in each NBC. Furthermore, by means of this approach, a large set of low-level visual features can be effectively used within the CNBC, which in turn selects and combines them so as to achieve the highest discrimination among the individual classes. Experiments demonstrate the high classification accuracy and efficiency of the proposed framework over a large and dynamic commercial database using only low-level visual features.

Serkan Kiranyaz, Turker Ince, Moncef Gabbouj

Internet Modeling and Design

A Requirement Aware Method to Reduce Complexity in Selecting and Composing Functional-Block-Based Protocol Graphs

Future Internet research activities try to increase the flexibility of the Internet. A well known approach is to build protocol graphs by connecting functional blocks together. The protocol graph that should be used is the one most suitable to the application’s requirements. To find the most suitable graph, all possible protocol graphs must be evaluated. However, the number of possible protocol graphs increases exponentially as the number of functional blocks increases. This paper presents a method of representing the protocol graph search space as a set of search trees and then uses forward pruning to reduce the number of protocol graphs evaluated. We evaluate our proposed method by simulation.
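The search-tree-with-pruning idea can be sketched as a depth-first search that skips any functional block adding nothing still required. The block capability sets and the cost function here are hypothetical, and the real method's tree representation is richer than this:

```python
def best_protocol_graph(blocks, required, evaluate):
    """Search over compositions of functional blocks, forward-pruning
    branches whose next block contributes nothing still required.
    `blocks` maps block name -> set of provided properties;
    `evaluate` scores a complete chain (lower is better)."""
    best = [None, float("inf")]
    def dfs(chain, remaining, provided):
        if required <= provided:               # candidate protocol graph is complete
            if evaluate(chain) < best[1]:
                best[:] = [list(chain), evaluate(chain)]
            return
        for blk in sorted(remaining):
            caps = blocks[blk]
            if not caps & (required - provided):
                continue                       # forward pruning: skip useless branches
            chain.append(blk)
            dfs(chain, remaining - {blk}, provided | caps)
            chain.pop()
    dfs([], set(blocks), set())
    return best
```

The pruning test cuts every subtree rooted at a block that cannot move the composition closer to the application's requirements, which is what tames the exponential growth in practice.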

Daniel Günther, Nathan Kerr, Paul Müller
Modeling and Analyzing MAC Frame Aggregation Techniques in 802.11n Using Bi-dimensional Markovian Model

Increased expectations and demand for higher rates led to the development of new physical layer technologies in Wireless LANs. However, the current medium access control (MAC) needs to be improved to fully utilize higher physical-layer transmission rates. Several aggregation mechanisms have been recently proposed to improve the MAC layer performance of 802.11n. In this paper, we analyze some of the key aggregation mechanisms proposed. For analysis we adapted widely used Bianchi’s analytical model and applied it for various aggregation techniques. We also compare the analytical details of various strategies and provide a unified analytical framework for continued research in this direction.

Nazeeruddin Mohammad, Shahabuddin Muhammad
Towards a Successful Mobile Map Service: An Empirical Examination of Technology Acceptance Model

The present study conducted structural equation modeling (SEM) analyses on data collected from 1,011 participants in order to examine the role of the Technology Acceptance Model (TAM) in predicting mobile map service users’ attitudes toward the service. Results from the analyses indicated that perceived mobility and perceived locational accuracy significantly influenced user acceptance and intention to use mobile map services via portable computing devices. In particular, an increase in perceived mobility positively affected perceived usefulness, while perceived locational accuracy positively affected both perceived usefulness and perceived ease of use of mobile map services. In addition, the present study revealed stronger effects of attitude on the behavioral intention to use than of perceived usefulness, while the effect of perceived usefulness on attitude was stronger than that of perceived ease of use. Implications of notable findings and limitations are discussed.

Eunil Park, Ki Joon Kim, Dallae Jin, Angel P. del Pobil
VND-CS: A Variable Neighborhood Descent Algorithm for Core Selection Problem in Multicast Routing Protocol

The Core Selection (CS) problem consists of choosing an optimal multicast router in the network as the root of the shared-path multicast tree (SPT). The choice of this designated router (referred to as the “Rendezvous Point (RP)” in the PIM-SM protocol and the “core” in the CBT protocol) is the main problem in multicast tree construction; this choice influences the multicast routing tree structure, and therefore the performance of both the multicast session and the routing scheme. Determining the best position of the Rendezvous Point is an NP-complete problem, so it needs to be solved with a heuristic algorithm. In this paper we propose a new Core Selection algorithm, VND-CS, based on the Variable Neighborhood Descent algorithm and its systematic neighborhood changing. VND-CS selects the core router by considering both cost and delay functions. Simulation results show that good performance is achieved in terms of multicast cost, end-to-end delay and tree construction delay.

Youssef Baddi, Mohamed Dafir Ech-Cherif El Kettani

Mobile, Ad Hoc and Sensor Network Management

An Efficient Algorithm for Enumerating Minimal Pathsets in Communication Networks

The reliability of complex networks is a very sensitive issue which requires powerful methods for its evaluation. Many algorithms have been proposed to solve the network reliability problem, such as those based on minimal pathset and cutset approximation. The enumeration of minimal pathsets can be obtained easily with an ordinary path/cut determination algorithm, but when the network size is large, more efficient algorithms are needed. This paper presents an intuitive algorithm to find all minimal paths. The algorithm proceeds recursively using an efficient procedure that cleverly traverses the structure of the graph. Its complexity compares favorably with that of algorithms developed so far. The program is simple, compact, modular and easy to embed in any software that evaluates reliability, such as those based on the sum-of-disjoint-products approach. In our experimental tests, we enumerated several networks of varied complexity, with systematic comparison against established approaches from the literature.
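The core operation, recursively enumerating all minimal (simple) paths between a source and a terminal, can be sketched as a plain depth-first traversal; the paper's contribution lies in a more efficient procedure than this generic one:

```python
def minimal_paths(adj, src, dst):
    """Enumerate all minimal paths (simple paths) from src to dst by
    recursive depth-first traversal. `adj` maps a node to the list of
    its neighbors."""
    paths, path = [], [src]
    def dfs(u):
        if u == dst:
            paths.append(list(path))   # a complete minimal path
            return
        for v in adj.get(u, ()):
            if v not in path:          # never revisit a node: paths stay minimal
                path.append(v)
                dfs(v)
                path.pop()
    dfs(src)
    return paths
```

The resulting pathset list is exactly the input that sum-of-disjoint-products reliability methods consume.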

Mohamed-Larbi Rebaiaia, Daoud Ait-Kadi
An Efficient Emergency Message Dissemination Protocol in a Vehicular Ad Hoc Network

Dissemination of emergency messages is a critical research area which aims to avoid traffic fatalities. Due to the inherent characteristics of vehicular ad hoc networks and the urgency of the messages, most developed applications are based on broadcast messages. The most basic strategy is called flooding. This strategy is well studied in the context of mobile ad hoc networks and has been shown to cause contention, collisions and redundancy, well known as the ‘broadcast storm problem’. The majority of existing solutions focus on decreasing the number of relay nodes and discard the use of unicast messages. In this paper we study a new strategy that combines the use of unicast and broadcast modes at the same time. Simulation results show that the proposed protocol achieves low latency in delivering emergency warnings in spite of the use of unicast messages, and these results can be further enhanced by modifying the MAC layer protocol parameters.

Zouina Doukha, Samira Moussaoui, Noureddine Haouari, Mohamed E. A. Delhoum
An Energy Saving Data Dissemination Protocol for Wireless Sensor Networks

Recently, Mobile Agents have been used for efficient data dissemination in wireless sensor networks. In traditional Client/Server architectures, data from sources are transferred to a destination, while in Mobile Agent architectures a specific executable code passes through relevant sources to collect data. Mobile Agents can significantly reduce the cost of communication, especially over low-bandwidth links, by moving the processing function to the data rather than bringing the data to a central processor. This work proposes a Client/Server approach using Mobile Agents to aggregate data in a planar wireless sensor network architecture.

Dalila Iabbassen, Samira Moussaoui
An Ω-Based Leader Election Algorithm for Mobile Ad Hoc Networks

Leader election is a fundamental control problem in both wired and wireless systems. The classical statement of the leader election problem in distributed systems is to eventually elect a unique leader from a fixed set of nodes. However, in MANETs, many complications may arise due to frequent and unpredictable topological changes. This paper presents a leader election algorithm based on the Omega (Ω) failure detector, in which inter-node communication is allowed only among neighboring nodes, along with proofs of correctness. This algorithm ensures that every connected component of the mobile ad hoc network will eventually elect a unique leader, namely the node with the highest priority value among all nodes within that connected component. The algorithm can tolerate intermittent failures, such as link failures, network partitions, and merging of connected network components.
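The guarantee the algorithm converges to can be expressed compactly: every connected component agrees on its highest-priority member. The sketch below checks only that eventual state; the Ω-detector message exchange itself is not modelled:

```python
def elect_leaders(nodes, links, priority):
    """For each connected component, return the leader every member
    eventually elects: the node with the highest priority value."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    leader, seen = {}, set()
    for start in nodes:
        if start in seen:
            continue
        comp, stack = [], [start]     # flood-fill the connected component
        seen.add(start)
        while stack:
            u = stack.pop()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        head = max(comp, key=priority)
        for n in comp:                # every member agrees on the same leader
            leader[n] = head
    return leader
```

Partition and merge tolerance follows directly from this per-component formulation: when the link set changes, the components change, and re-running the election yields the new stable assignment.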

Leila Melit, Nadjib Badache
Cartography Gathering Driven by the OLSR Protocol

Nowadays, location awareness is becoming a ubiquitous requirement for many computing applications. The utilization of such knowledge in mobile ad hoc networks is primarily limited by the scarcity of their resources. In this paper, we propose two cartography gathering schemes (OLSR-SCGS and OLSR-ACGS) that make use of the seminal OLSR signaling in order to endow nodes participating in the MANET with location awareness. The first proposed scheme (OLSR-SCGS) is inspired by the operation of OLSR during the routing table calculation process. The simplicity of this scheme makes it subject to several limitations that we avoid in the second scheme (OLSR-ACGS). In the latter scheme, nodes become able to identify the freshest position among the available ones thanks to a dedicated time-stamping approach. Conducted simulations show the effectiveness of the two proposed approaches. Nevertheless, simulations portray the superiority of the second scheme (OLSR-ACGS) over the first (OLSR-SCGS) as the network dynamics increase. Through simulations, we show that the validity of the collected cartography is impacted by several factors such as the tolerated distance parameter, the frequencies of OLSR control messages and the network extent.

Mohamed Belhassen, Abdelfettah Belghith
Distributed Self-organized Trust Management for Mobile Ad Hoc Networks

Trust is a concept from the Social Sciences and can be defined as how much a node is willing to take the risk of trusting another one. The correct evaluation of trust is crucial for several security mechanisms for Mobile Ad Hoc Networks (MANETs). However, the implementation of an effective trust evaluation scheme is very difficult in such networks, due to their dynamic characteristics. This work presents a trust evaluation scheme for MANETs based on a self-organized virtual trust network. To estimate the trustworthiness of other nodes, nodes form trust chains based on behavior evidences maintained within the trust network. Nodes periodically exchange their trust networks with their neighbors, providing an efficient method to disseminate trust information across the network. The scheme is fully distributed and self-organized, not requiring any trusted third party. Simulation results show that the scheme is very efficient at gathering evidences to build the trust networks, and that it has very small communication and memory overhead. Moreover, it is the first trust evaluation scheme evaluated under bad-mouthing and newcomer attacks, and it maintains its effectiveness in such scenarios.
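Trust-chain evaluation over such a network can be sketched as follows. Multiplying direct trust values along a chain and keeping the best chain is one common aggregation choice, assumed here for illustration; the paper's exact formula may differ:

```python
def chain_trust(trust, src, dst):
    """Estimate src's trust in dst as the best product of direct trust
    values along any chain in the trust network.
    `trust[a][b]` in [0, 1] is a's direct trust in b."""
    best = {src: 1.0}
    frontier = [src]
    while frontier:
        u = frontier.pop()
        for v, t in trust.get(u, {}).items():
            cand = best[u] * t        # trust decays along the chain
            if cand > best.get(v, 0.0):
                best[v] = cand
                frontier.append(v)    # a better estimate may propagate further
    return best.get(dst, 0.0)
```

Because each node only needs its locally maintained trust map plus the maps exchanged with neighbors, this computation requires no trusted third party.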

Mehran Misaghi, Eduardo da Silva, Luiz Carlos P. Albini
Distributed Sensor Relay System for Near Real Time Observation, Control and Data Management on a Scientific Research Ship

Adding intelligence to deployed instruments in an oceanographic environment of restricted bandwidth helps to improve service quality and enables autonomous observation and data management. Delay tolerance and remote access often pose challenges to providing near real time observation. We have experimented with a distributed hybrid web enabled sensor system and conducted its deployment in a scientific oceanographic cruise. The purpose of the experiment was to study the feasibility and performance of narrow-band network relay communication in an oceanographic environment to assist near real time observation and sensor control. The restriction of resources, in particular the unreliable Internet satellite connection and lack of bandwidth prevent a centralized real-time system from working properly. Bandwidth tests were conducted and a delay tolerant networked relay has been introduced by de-coupling the functions to make it like a distributed system. Multiple nodes are setup across the ship, cloud Internet, and laboratory ashore to form a loosely coupled and balanced networked system. The system also aims to form a foundation platform for integrating higher level services to support oceanographic observation, data management with interoperability, such as OGC SWE services and IEEE 1451 smart sensor standards.

Barry Tao, Jon Campbell, Gwyn Griffiths
Energy Efficiency Mechanisms Using Mobile Node in Wireless Sensor Networks

A traditional network consists of gateway sensors which transmit data to the base stations. These nodes are considered bottlenecks in multihop networks as they transmit their own data as well as data from other nodes, and hence deplete their energy faster. One way to optimize energy efficiency in a WSN is to deploy a mobile base station which can collect data without the need for gateway nodes, thereby minimizing the multihop bottleneck. We compare these two variations of WSN: one consisting of the multihop approach with gateway nodes, and a proposed structure whereby a mobile base station collects data individually from each node following a double Fermat's spiral model.

Teddy Mantoro, Media A. Ayu, Haroon Shoukat Ali, Wendi Usino, Mohammed M. Kadhum
Hovering Information Based VANET Applications

In this paper, we propose using Hovering Information in vehicular applications to enhance their ability to deal with the dynamics of the environment. In particular, we have designed and developed an architecture for vehicular applications that employs Hovering Information, and described how hovering information can be utilized in a decision-making module built using fuzzy logic. The system based on Hovering Information takes advantage of both hovering information published by other nodes and local information collected from local sensors. The use of Hovering Information allows obtaining dynamically updated information from surrounding nodes during travel and allows information sharing within an ad hoc network without network infrastructure. Finally, a simulation of the fuzzy-logic decision-making module, which uses hovering information as an input to the fuzzy controller, is presented to demonstrate the importance of hovering information in real-time applications.

Muhammad Shoaib, Wang-Cheol Song
Improving the Lifetime of Wireless Sensor Networks Based on Routing Power Factors

In this paper the power efficiency of Wireless Sensor Networks (WSNs) is studied. Power consumption is the main challenge during the routing of data. The objective of this study is to address three routing parameters that affect the network lifetime: the initial power of the nodes, the residual power in the nodes and the routing period. Simulations of the effects of these parameters are presented for different network sizes. A contribution on how to improve network lifetime through consideration of power consumption factors during routing is added.
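The effect of a residual-power-aware routing choice on lifetime can be seen in a toy model (not the paper's simulator): each round one relay is chosen and pays a fixed transmission cost, and the network's useful life ends at the first depletion:

```python
def network_lifetime(initial, per_round_cost, strategy):
    """Count rounds until the chosen relay can no longer pay the
    per-round transmission cost. `initial` maps node -> initial power;
    `strategy` picks the relay from the residual-power map each round."""
    residual = dict(initial)
    rounds = 0
    while True:
        relay = strategy(residual)
        if residual[relay] < per_round_cost:
            return rounds             # first depletion ends the useful life
        residual[relay] -= per_round_cost
        rounds += 1
```

Even this crude model shows why residual power belongs among the routing factors: spreading the relay load across nodes with the most remaining energy can double the lifetime versus always routing through the same gateway.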

Abdullah Said Alkalbani, Teddy Mantoro, Abu Osman Md Tap
Redirect Link Failure Protocol Based on Dynamic Source Routing for MANET

Ad hoc networks are characterized by multi-hop wireless connectivity. Due to frequent topology changes in wireless ad hoc networks, designing an efficient and dynamic routing protocol is a very challenging task, and it has been an active area of research. This work is part of ongoing research on the link failure problem caused by node mobility in the Dynamic Source Routing (DSR) protocol. In this paper, we propose an extension of the DSR protocol called the Redirect Link Failure Protocol (DSR-RLFP) in order to solve link failures and avoid propagating the route error message back to the source upon link failure, in a different way from existing solutions. The RLFP protocol contains two models, namely a Link Failure Prediction Model and a Link Failure Solution Model. The main benefit of RLFP is to reduce the frequency of the route discovery process after a link failure happens, which saves network resources.

Naseer Ali Husieen, Osman Ghazali, Suhaidi Hassan, Mohammed M. Kadhum
Resource-Aware Distributed Clustering of Drifting Sensor Data Streams

Collecting data from sensor nodes is the ultimate goal of Wireless Sensor Networks. This is performed by transmitting the sensed measurements to some data collecting station. In sensor nodes, radio communication is the dominating consumer of the energy resources, which are usually limited. Summarizing the sensed data internally on sensor nodes and sending only the summaries will considerably save energy. Clustering is an established data mining technique for grouping objects based on similarity. For sensor networks, k-center clustering aims at grouping sensor measurements into k groups, each containing similar measurements. In this paper we propose a novel resource-aware k-center clustering algorithm called SenClu. Our algorithm immediately detects new trends in the drifting sensor data stream and follows them. SenClu uses a light-weight decaying technique that gives lower influence to old data. As sensor data are usually noisy, our algorithm is also outlier-aware. In thorough experiments on drifting synthetic and real-world data sets, we show that SenClu outperforms two state-of-the-art algorithms by producing higher clustering quality and following trends in the stream, while consuming nearly the same amount of energy.
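The decaying idea behind trend-following stream clustering can be sketched in one dimension. This is a light-weight illustration of exponential decay only, not the SenClu algorithm:

```python
import math

def decayed_centers(stream, k, radius, decay=0.1):
    """Maintain at most k weighted centers over a 1-D measurement
    stream, exponentially decaying old weight so new trends can
    displace stale clusters."""
    centers = []                              # entries: [position, weight]
    for x in stream:
        for c in centers:
            c[1] *= math.exp(-decay)          # old data loses influence each step
        near = [c for c in centers if abs(c[0] - x) <= radius]
        if near:
            min(near, key=lambda c: abs(c[0] - x))[1] += 1.0
        elif len(centers) < k:
            centers.append([x, 1.0])
        else:                                 # follow the new trend:
            weakest = min(centers, key=lambda c: c[1])
            weakest[0], weakest[1] = x, 1.0   # recycle the most faded center
    return centers
```

Because only the k center summaries ever need to be transmitted, the radio cost is independent of the number of raw measurements, which is where the energy saving comes from.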

Marwan Hassani, Thomas Seidl
Robust Wireless Sensor Networks with Compressed Sensing Theory

Wireless Sensor Networks (WSNs) consist of a large number of Wireless Nodes (WNs), each with sensing, processing, communication and power supply units to monitor real-world environmental information. WSNs are responsible for sensing, collecting, processing and transmitting information such as pressure, temperature, position, flow, vibration, force, humidity, pollutants and biomedical signals like heart rate and blood pressure. Ideal WSNs are networked to consume very limited power and are capable of fast data acquisition. The problems associated with WSNs are limited processing capability, low storage capacity, limited energy and global traffic. WSNs also have a finite lifetime dependent upon initial power supply capacity and duty cycle. Since WNs are usually battery driven, the primary limiting factor for the lifetime of a WN is the power supply; each WN must therefore be designed to manage its local supply of energy in order to maximize total network lifetime [5]. The life expectancy of a WSN for a given battery capacity can be enhanced by minimizing power consumption during the operation of the network. Compressed Sensing (CS) theory addresses this problem by reducing the sampling rate throughout the network, and combining CS theory with WSNs is an optimal solution for achieving networks with low sampling rates and low power consumption. Our simulation results show that the sampling rate can be reduced to 30% and power consumption to 40% without sacrificing performance by applying CS theory to WSNs. This paper presents a novel sampling approach applying compressive sensing methods to WSNs. First, an overview of compressed sensing is presented. Second, CS in WSNs is investigated. Third, simulation results on the sampling rate in WSNs are shown.

Mohammadreza Balouchestani, Kaamran Raahemifar, Sridhar Krishnan
RTIC: Reputation and Trust Evaluation Based on Fuzzy LogIC System for Secure Routing in Mobile Ad Hoc Networks

Ad hoc networks are vulnerable to many types of attacks. Their success will undoubtedly depend on the trust they bring to their users. To enhance the security of MANETs, it is important to rate the trustworthiness of other nodes without central authorities to build up a trust environment. This paper expands on relevant fuzzy logic concepts to propose an approach to establish quantifiable trust levels between the nodes of ad hoc networks. The proposed solution defines additional operators to fuzzy logic in order to find an end-to-end route which is free of malicious nodes with collaborative effort from the neighbors. In our scheme (RTIC) the path with the most trusted decision value is selected as a secure route from source to destination. In order to prove the applicability of the proposed solution, we demonstrate the performance of our model through NS-2 simulations.

Abdesselem Beghriche, Azeddine Bilami
Sink Mobile for Efficient Data Dissemination in Wireless Sensor Networks

Among the challenges posed by the problem of data dissemination in wireless sensor networks, one that has recently received considerable attention concerns the minimization of node energy consumption for increasing the overall network lifetime. Sensor devices are battery powered, so energy is the most precious resource of a wireless sensor network, since periodically replacing the battery of the nodes in large-scale deployments is infeasible. Disseminating the collected data peer to peer to a static control point consumes significant amounts of energy, especially for the sensor nodes close to the static sink. In this paper, we present a new sink mobility-based data dissemination protocol to minimize energy consumption. We define a new sink mobility scheme in which the sink periodically moves towards a destination chosen from the data dissemination frequencies calculated during the last period. The simulation results show that the proposed protocol reduces energy consumption and prolongs the network lifetime.
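One plausible reading of a frequency-driven sink move is relocating the sink to the frequency-weighted centroid of the sensor positions observed last period; this is an illustrative interpretation, and the paper's actual mobility rule may differ:

```python
def next_sink_position(positions, freq):
    """Frequency-weighted centroid of sensor positions: sensors that
    disseminated more data last period pull the sink closer.
    `positions[i]` is (x, y); `freq[i]` is node i's dissemination count."""
    total = float(sum(freq))
    x = sum(p[0] * f for p, f in zip(positions, freq)) / total
    y = sum(p[1] * f for p, f in zip(positions, freq)) / total
    return (x, y)
```

Moving the sink toward the heaviest traffic shortens the average forwarding distance for exactly the nodes that would otherwise drain fastest.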

M. Guerroumi, Nadjib Badache, S. Moussaoui
Backmatter
Metadata
Title
Networked Digital Technologies
Edited by
Rachid Benlamri
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-30507-8
Print ISBN
978-3-642-30506-1
DOI
https://doi.org/10.1007/978-3-642-30507-8
