
About this Book

This book constitutes the refereed proceedings of the 5th IFIP TC 5 International Conference on Computer Science and Its Applications, CIIA 2015, held in Saida, Algeria, in May 2015. The 56 revised papers presented were carefully reviewed and selected from 225 submissions. The papers are organized in the following four research tracks: computational intelligence; security and network technology; information technology; and software engineering.

Table of Contents

Frontmatter

Computational Intelligence: Meta-Heuristics

Frontmatter

Binary Bat Algorithm: On The Efficiency of Mapping Functions When Handling Binary Problems Using Continuous-variable-based Metaheuristics

Global optimisation plays a critical role in today's scientific and industrial fields. Optimisation problems are either continuous or combinatorial, depending on the nature of the parameters to optimise. Within the class of combinatorial problems lies the sub-category of binary optimisation problems. Due to the complex nature of optimisation problems, exhaustive search-based methods are no longer a good choice, so metaheuristics are increasingly adopted to solve them. Some were designed originally to handle binary problems, whereas others need an adaptation to acquire this capacity. One of the principal adaptation schemes is the use of a mapping function to decode real-valued solutions into binary-valued ones. The Antenna Positioning Problem (APP) is an NP-hard binary optimisation problem in cellular phone networks (2G, EDGE, GPRS, 3G, 3G+, LTE, 4G). In this paper, the efficiency of the principal mapping functions in the literature is investigated through the proposition of five binary variants of one of the most recent metaheuristics, the Bat Algorithm (BA). The proposed binary variants are evaluated on the APP, have been tested on a set of well-known benchmarks, and give promising results.
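The abstract does not reproduce the mapping functions themselves; the S-shaped (sigmoid) and V-shaped transfer functions sketched below are the two families most commonly used in the literature for this kind of binarization. This is a minimal Python illustration under that assumption; the names and sample values are illustrative, not the paper's.

```python
import math
import random

def sigmoid_transfer(x):
    """S-shaped transfer function: maps a real value to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped_transfer(x):
    """V-shaped transfer function: another common mapping in the literature."""
    return abs(math.tanh(x))

def binarize(position, transfer=sigmoid_transfer, rng=random.random):
    """Decode a continuous position vector into a binary solution.

    Each real-valued component becomes the probability of the corresponding
    bit being 1, then the bit is sampled.
    """
    return [1 if rng() < transfer(x) else 0 for x in position]

# Example: decode a continuous bat position into an antenna on/off vector.
continuous_position = [0.8, -1.2, 0.1, 2.5, -0.3]
print(binarize(continuous_position))
```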

Zakaria Abd El Moiz Dahi, Chaker Mezioud, Amer Draa

Relative Timed Model for Coordinated Multi Agent Systems

MAS engineering, concerned with models, methods and tools, is becoming very important; verifying the correctness of MAS is therefore the next challenge. We are interested in MAS where each participating agent has its own physical clock of varying frequency, while no global clock is available or desirable. Under such circumstances, models must be adapted. In this paper we attempt a novel approach to modelling MAS with respect to two characteristics: the concurrent aspect and the heterogeneity of agents (perceived as different time rates of agents' plan execution). Timed automata with action durations are used; for this purpose, they are extended to deal with relative time rates. Their semantics is abstracted by a novel equivalence relation leading to a region automaton for decidability assessment and proof.

Said Layadi, Jean-Michel Ilie, Ilham Kitouni, Djamel-Eddine Saidouni

Computational Intelligence: Object Recognition and Authentication

Frontmatter

A Novel Technique For Human Face Recognition Using Fractal Code and Bi-dimensional Subspace

Face recognition is considered one of the best biometric methods used for human identification and verification, because of features unique to each person and because of its importance in the security field. This paper proposes an algorithm for face recognition and classification using a system based on WPD, fractal codes and a two-dimensional subspace for feature extraction, with combined Learning Vector Quantization and a PNN classifier as the neural-network approach for classification. The fractal codes, which are determined by a fractal encoding method, are used as features in this system. Fractal image compression is a relatively recent technique based on representing an image by a contractive transform whose fixed point is close to the original image. Each fractal code consists of parameters such as the corresponding domain coordinates for each range block, a brightness offset, and an affine transformation. The proposed approach is tested on the ORL and FEI face databases. Experimental results on these databases demonstrate the effectiveness of the proposed approach for face recognition, with high accuracy compared with previous methods.

Benouis Mohamed, Benkkadour Mohamed Kamel, Tlmesani Redwan, Senouci Mohamed

Computational Intelligence: Image Processing

Frontmatter

A New Rotation-Invariant Approach for Texture Analysis

Image processing and pattern recognition are among the most important areas of research in computer science. Recently, several studies have been conducted and efficient approaches have been proposed to provide solutions to many real and industrial problems. Texture analysis is a fundamental field of image processing because all surfaces of objects are textured in nature. In this paper, we propose a novel texture analysis approach based on a recent feature extraction method called the neighbor-based binary pattern (NBP). The NBP method extracts the local micro-texture and is robust against rotation, which is a key problem in image processing. The proposed system extracts two reference NBP histograms from the texture in order to build a model of the texture. Finally, several models are constructed so that textures can be recognized even after rotation. Textured images from the Brodatz album database were used in the evaluation. Experimental studies illustrate that the proposed system obtains very encouraging results, robust to rotation, compared to a classical method.
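The exact NBP encoding is defined in the paper; as an illustration of the general idea it builds on — summarizing local micro-texture as a histogram of binary codes — here is a minimal sketch of the closely related local binary pattern (LBP), with illustrative parameters.

```python
import numpy as np

def lbp_histogram(img):
    """Illustrative local-binary-pattern histogram (8-neighbour version).

    Each pixel is encoded by thresholding its 8 neighbours against it,
    yielding an 8-bit code; the image is summarized by the code histogram.
    """
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.int32)
    center = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= ((neighbour >= center).astype(np.int32) << bit)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()  # normalized texture signature

texture = np.random.randint(0, 256, size=(64, 64))
print(lbp_histogram(texture)[:8])
```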

Izem Hamouchene, Saliha Aouat

Multi-CPU/Multi-GPU Based Framework for Multimedia Processing

Image and video processing algorithms are a necessary tool for various domains related to computer vision, such as medical applications, pattern recognition and real-time video processing. The performance of these algorithms has been severely hampered by their computational intensity, since the new video standards, especially those in high definition, require more resources and memory to complete their computations. In this paper, we propose a new framework for multimedia processing (single image, multiple images, multiple videos, video in real time) that exploits the full computing power of heterogeneous machines. The framework selects, first, the computing units (CPU and/or GPU) for processing and, second, the methods to be applied, depending on the type of media to process and the algorithm complexity. It exploits efficient scheduling strategies and significantly reduces data transfer times thanks to an efficient management of GPU memories and to the overlapping of data copies with kernel executions. Moreover, the framework includes several GPU-based image and video primitive functions, such as silhouette extraction, corner detection, contour extraction, and sparse and dense optical flow estimation. These primitives are exploited in different applications such as vertebra segmentation in X-ray and MR images, video indexing, and event detection and localization in multi-user scenarios. Experimental results obtained by applying the framework to different computer vision methods show a global speedup ranging from 5 to 100, by comparison with sequential CPU implementations.

Sidi Ahmed Mahmoudi, Pierre Manneback

Full-Reference Image Quality Assessment Measure Based on Color Distortion

The purpose of this paper is to introduce a new method for image quality assessment (IQA). The method adopted here is a full-reference measure. Color images corrupted with different kinds of distortions are assessed by applying a color distortion algorithm on each color component separately. The approach notably uses the YIQ color space in its computations. A gradient operator is introduced to compute the gradient image from the luminance channel. In this paper, we propose an alternative technique to evaluate image quality. The main difference between the proposed method and the gradient magnitude similarity deviation (GMSD) method is the use of the color components for the detection of distortion.

Experimental comparisons demonstrate the effectiveness of the proposed method.
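The abstract names GMSD as the baseline; a minimal sketch of the published GMSD computation (gradient magnitude similarity followed by standard-deviation pooling) on a single channel is shown below. Applying it per color component, as the paper proposes, would mean calling it on each channel of e.g. a YIQ image. The Prewitt operator and the constant c are the usual choices from the GMSD literature, not values taken from this paper.

```python
import numpy as np
from scipy import ndimage

def gmsd(ref, dist, c=170.0):
    """Gradient Magnitude Similarity Deviation for one image channel."""
    def grad_mag(img):
        gx = ndimage.prewitt(img.astype(float), axis=0)
        gy = ndimage.prewitt(img.astype(float), axis=1)
        return np.hypot(gx, gy)

    m_ref, m_dist = grad_mag(ref), grad_mag(dist)
    gms = (2 * m_ref * m_dist + c) / (m_ref**2 + m_dist**2 + c)
    return gms.std()  # 0 = identical; larger = more distorted

ref = np.random.rand(32, 32) * 255
noisy = ref + np.random.randn(32, 32) * 10
print(gmsd(ref, ref), gmsd(ref, noisy))
```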

Zianou Ahmed Seghir, Fella Hachouf

Computational Intelligence: Machine Learning

Frontmatter

Biomarker Discovery Based on Large-Scale Feature Selection and MapReduce

Large-scale feature selection is one of the most important fields in the big data domain and can solve real data problems, such as in bioinformatics, where it is necessary to process huge amounts of data. The efficiency of existing feature selection algorithms degrades significantly, when they do not become totally inapplicable, once data size exceeds hundreds of gigabytes, because most feature selection algorithms are designed for centralized computing architectures. Distributed computing techniques, such as MapReduce, can therefore be applied to handle very large data. Our approach scales an existing feature selection method, K-means clustering combined with Signal-to-Noise Ratio (SNR) ranking, with an optimization technique, Binary Particle Swarm Optimization (BPSO). The proposed method is divided into two stages. In the first stage, we use parallel K-means on MapReduce for clustering features; we then apply an iterative MapReduce job that implements parallel SNR ranking for each cluster and select the top-ranked feature of each cluster. The top-scored features from all clusters are gathered and a new feature subset is generated. In the second stage, the new feature subset is used as input to the proposed MapReduce-based BPSO, which provides an optimized feature subset. The proposed method is implemented in a distributed environment, and its efficiency is illustrated on practical problems such as biomarker discovery.
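As a rough illustration of the SNR ranking stage (MapReduce only distributes this same computation), a per-feature two-class SNR score and top-feature selection per cluster might look like the sketch below; the cluster assignment is a hypothetical stand-in for the parallel K-means output.

```python
import numpy as np

def snr_scores(X, y):
    """Signal-to-Noise Ratio score per feature for a two-class problem.

    SNR(f) = |mean_pos(f) - mean_neg(f)| / (std_pos(f) + std_neg(f));
    higher scores indicate features that better separate the classes.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    pos, neg = X[y == 1], X[y == 0]
    return np.abs(pos.mean(0) - neg.mean(0)) / (pos.std(0) + neg.std(0) + 1e-12)

# Toy example: rank features and keep the top one per (precomputed) cluster.
X = np.random.rand(100, 6)
y = np.random.randint(0, 2, 100)
clusters = {0: [0, 1, 2], 1: [3, 4, 5]}          # hypothetical k-means output
scores = snr_scores(X, y)
top_per_cluster = {c: max(fs, key=lambda f: scores[f]) for c, fs in clusters.items()}
print(top_per_cluster)
```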

Ahlam Kourid, Mohamed Batouche

Social Validation of Solutions in the Context of Online Communities

An Expertise-Based Learning Approach

Online communities are considered a new organizational structure that allows individuals and groups to collaborate and share their knowledge and experiences. Their members need technological support in order to facilitate their learning activities (e.g. during a problem-solving process). We address in this paper the problem of social validation, our aim being to support members of online communities of learners in validating proposed solutions. Our approach is based on the members' evaluations: we apply three machine learning techniques, namely a Genetic Algorithm, Artificial Neural Networks and the Naïve Bayes approach. The main objective is to determine a validity rating for a given solution. A preliminary experimentation of our approach within a community of learners whose main objective is to collaboratively learn the Java language shows that neural networks are the most suitable approach in this context.

Lydia Nahla Driff, Lamia Berkani, Ahmed Guessoum, Abdellah Bendjahel

Remotely Sensed Data Clustering Using K-Harmonic Means Algorithm and Cluster Validity Index

In this paper, we propose a new clustering method based on the combination of the K-harmonic means (KHM) clustering algorithm and a cluster validity index for remotely sensed data clustering. KHM is essentially insensitive to the initialization of the centers. In addition, a cluster validity index is introduced to determine the optimal number of clusters in the studied data. Four cluster validity indices were compared in this work, namely the DB index, XB index, PBMF index and WB-index, and a new index, the WXI, has been derived from them. Experimental results and a comparison with both the K-means (KM) and fuzzy C-means (FCM) algorithms confirm the effectiveness of the proposed methodology.
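For reference, the KHM objective replaces the minimum distance of k-means with a harmonic average over all centers, which is what makes it insensitive to initialization. A minimal sketch follows, with an illustrative power p.

```python
import numpy as np

def khm_objective(X, centers, p=3.5):
    """K-Harmonic Means objective (harmonic average of distances).

    KHM(X, C) = sum_i  k / sum_j (1 / ||x_i - c_j||^p); unlike k-means,
    every center contributes to every point.
    """
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero
    return (len(centers) / (1.0 / d**p).sum(axis=1)).sum()

X = np.random.rand(200, 4)              # e.g. pixels of a remotely sensed image
centers = X[np.random.choice(len(X), 3, replace=False)]
print(khm_objective(X, centers))
```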

Habib Mahi, Nezha Farhi, Kaouter Labed

Computational Intelligence: BioInformatics

Frontmatter

Comparison of Automatic Seed Generation Methods for Breast Tumor Detection Using Region Growing Technique

The seeded region growing algorithm has been successfully applied as a segmentation technique for medical images. This algorithm starts by selecting a seed point and grows the seed area by exploiting the fact that pixels which are close to each other have similar features. To improve the accuracy and effectiveness of region growing segmentation, some works automate the seed selection step. In this paper, we present a comparative study of two automatic seed selection methods for breast tumor detection using seeded region growing segmentation. The first method is based on a thresholding technique and the second on feature similarity. Each method is applied to two modalities of digital breast images. Our results show that the seed selection method based on thresholding outperforms the one based on feature similarity.
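A minimal sketch of the seeded region growing loop described above, assuming a single grayscale image, 4-connectivity and an illustrative homogeneity tolerance; a thresholding-based selector could, for instance, pick the brightest pixel as the seed.

```python
from collections import deque
import numpy as np

def region_growing(img, seed, tol=10):
    """Grow a region from `seed` by absorbing 4-connected neighbours whose
    intensity is within `tol` of the running region mean."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    total, count = img[seed], 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not grown[ny, nx]:
                if abs(img[ny, nx] - total / count) <= tol:
                    grown[ny, nx] = True
                    total, count = total + img[ny, nx], count + 1
                    queue.append((ny, nx))
    return grown

img = np.random.randint(0, 50, (64, 64)); img[20:30, 20:30] += 150
seed = np.unravel_index(img.argmax(), img.shape)   # brightest pixel as seed
print(region_growing(img, seed).sum())
```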

Ahlem Melouah

IHBA: An Improved Homogeneity-Based Algorithm for Data Classification

The Standard Homogeneity-Based (SHB) optimization algorithm is a metaheuristic built on a simultaneous balance between fitting and generalization of a given classification system. However, the SHB algorithm does not penalize the structure of a classification model, due to the way SHB's objective function is defined. Also, SHB uses only a genetic algorithm to tune its parameters, which may reduce SHB's degrees of freedom. In this paper we propose an Improved Homogeneity-Based Algorithm (IHBA) that takes into account the computational complexity of the data mining approach used. Additionally, we employ several metaheuristics to optimally find SHB's parameter values. In order to prove the feasibility of the proposed approach, we conducted a computational study on benchmark datasets obtained from the UCI repository. Experimental results confirm the theoretical analysis and show the effectiveness of the proposed IHBA method.

Fatima Bekaddour, Chikh Mohammed Amine

Multiple Guide Trees in a Tabu Search Algorithm for the Multiple Sequence Alignment Problem

Nowadays, Multiple Sequence Alignment (MSA) approaches do not always provide consistent solutions; alignments become increasingly difficult when treating low-similarity sequences. Tabu search is a very useful metaheuristic for solving optimization problems. For the alignment of multiple sequences, an NP-hard problem, we apply a tabu search algorithm improved by several neighborhood generation techniques using guide trees. The algorithm is tested on the BAliBASE benchmark database, and experiments showed encouraging results compared to the algorithms studied in this paper.

Tahar Mehenni

Information Technology: Text and Speech Processing

Frontmatter

Noise Robust Features Based on MVA Post-processing

In this paper we present an effective technique to improve the performance of automatic speech recognition (ASR) systems. The technique, called MVA, consists of mean subtraction, variance normalization and the application of temporal auto-regression moving-average (ARMA) filtering. We applied MVA as a post-processing stage to Mel-frequency cepstral coefficient (MFCC) features and Perceptual Linear Prediction (RASTA-PLP) features, to improve the ASR system.

We evaluate the MVA post-processing scheme on the Aurora 2 database, in the presence of various additive noises (subway, babble, exhibition hall, restaurant, street, airport, train station). Experimental results demonstrate that our method provides substantial improvements in recognition accuracy for the clean-training case. We complete the study by comparing MFCC and RASTA-PLP after MVA post-processing.
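A minimal sketch of the MVA chain on a (frames x coefficients) feature matrix, following the usual formulation from the MVA literature: per-utterance mean subtraction and variance normalization, then an order-M ARMA smoother along time. The filter order here is illustrative.

```python
import numpy as np

def mva(features, order=2):
    """MVA post-processing: mean subtraction, variance normalization,
    then ARMA smoothing along the time axis.

    The ARMA step follows the usual MVA recursion:
    y[t] = (y[t-M] + ... + y[t-1] + x[t] + ... + x[t+M]) / (2M + 1).
    """
    x = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-12)
    T, M = len(x), order
    y = x.copy()
    for t in range(M, T - M):
        y[t] = (y[t - M:t].sum(axis=0) + x[t:t + M + 1].sum(axis=0)) / (2 * M + 1)
    return y

mfcc = np.random.randn(200, 13)   # stand-in for real MFCC frames
print(mva(mfcc).shape)
```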

Mohamed Cherif Amara Korba, Djemil Messadeg, Houcine Bourouba, Rafik Djemili

Arabic Texts Categorization: Features Selection Based on the Extraction of Words’ Roots

One of the methods used to reduce the size of the term vocabulary in Arabic text categorization is to replace the different variants (forms) of words by their common root. Root extraction in Arabic is more difficult than in other languages, since Arabic has a very different and difficult structure: it is a very rich language with complex morphology. Many algorithms have been proposed in this field. Some of them are based on morphological rules and grammatical patterns, and thus are quite difficult and require deep linguistic knowledge. Others are statistical, so they are less difficult and based only on some calculations. In this paper we propose a new statistical algorithm which extracts the roots of Arabic words using the technique of character n-grams, without using any morphological rules or grammatical patterns.
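The paper's algorithm is not reproduced in the abstract; the sketch below only illustrates the general n-gram idea it builds on, matching a word to the candidate root with the highest character n-gram overlap (Dice similarity). The Latin-script toy lexicon is purely hypothetical; real experiments would use Arabic script.

```python
def ngrams(word, n=2):
    """Set of character n-grams of a word."""
    return {word[i:i + n] for i in range(len(word) - n + 1)}

def best_root(word, roots, n=2):
    """Pick the candidate root sharing the most character n-grams with the
    word, using Dice's similarity: 2*|common| / (|A| + |B|).

    Purely statistical: no morphological rules or grammatical patterns.
    """
    def dice(a, b):
        A, B = ngrams(a, n), ngrams(b, n)
        return 2 * len(A & B) / (len(A) + len(B) or 1)
    return max(roots, key=lambda r: dice(word, r))

# Hypothetical transliterated example.
print(best_root("almaktabat", ["ktb", "drs", "elm"]))
```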

Said Gadri, Abdelouahab Moussaoui

Restoration of Arabic Diacritics Using a Multilevel Statistical Model

Arabic texts are generally written without diacritics. This is the case, for instance, in newspapers and contemporary books, which makes automatic processing of Arabic texts more difficult. When diacritical signs are present, Arabic script provides more information about the meanings of words and their pronunciation. Vocalization of Arabic texts is a complex task which may involve morphological, syntactic and semantic text processing.

In this paper, we present a new approach to restoring Arabic diacritics using a statistical language model and dynamic programming. Our system is based on two models: a word bigram-based model, which is used first for vocalization, and a 4-gram character-based model, which is then used to handle the words that remain non-vocalized (OOV words). Moreover, smoothing methods are used to handle the problem of unseen words. The optimal vocalized word sequence is selected using the Viterbi algorithm from dynamic programming.

Our approach represents an important contribution to the improvement of the performance of automatic Arabic vocalization. We have compared our results with some of the most efficient up-to-date vocalization systems; the experimental results show the high quality of our approach.
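A minimal sketch of the Viterbi selection step over a word bigram model, as described above; the candidate lists and probabilities are fabricated toy values, and the character-level 4-gram fallback for OOV words is omitted.

```python
import math

def viterbi(words, candidates, bigram_p):
    """Pick the most probable vocalized sequence under a bigram model.

    `candidates[w]` lists the possible vocalized forms of the bare word `w`;
    `bigram_p(prev, cur)` returns a smoothed bigram probability.
    """
    trellis = [{v: (math.log(bigram_p("<s>", v)), ["<s>"])
                for v in candidates[words[0]]}]
    for w in words[1:]:
        layer = {}
        for v in candidates[w]:
            # Best predecessor: extend each previous state's score and path.
            layer[v] = max((s + math.log(bigram_p(pv, v)), path + [pv])
                           for pv, (s, path) in trellis[-1].items())
        trellis.append(layer)
    best_v, (score, path) = max(trellis[-1].items(), key=lambda kv: kv[1][0])
    return path[1:] + [best_v]   # drop the <s> marker

# Hypothetical two-word example with fabricated probabilities.
cands = {"ktb": ["kataba", "kutiba"], "drs": ["darsun", "darasa"]}
p = lambda a, b: {("kataba", "darsun"): 0.4}.get((a, b), 0.1)
print(viterbi(["ktb", "drs"], cands, p))
```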

Mohamed Seghir Hadj Ameur, Youcef Moulahoum, Ahmed Guessoum

A New Multi-layered Approach for Automatic Text Summaries Mono-Document Based on Social Spiders

In this paper, we propose a new multi-layer approach for automatic text summarization by extraction. The first layer uses two extraction techniques: phrase scoring, and a similarity measure that aims to eliminate redundant phrases without losing the theme of the text. The second layer optimizes the results of the previous layer with a metaheuristic based on social spiders; the objective function of the optimization maximizes the sum of similarities between phrases of the candidate summary, in order to keep the theme of the text, and minimizes the sum of scores in order to increase the summarization rate. This optimization also yields candidate summaries in which the order of the phrases changes compared to the original text. The third and final layer chooses the best summary from the candidate summaries generated by the optimization layer; for this we opted for simple-majority voting.

Mohamed Amine Boudia, Reda Mohamed Hamou, Abdelmalek Amine, Mohamed Elhadi Rahmani, Amine Rahmani

Building Domain Specific Sentiment Lexicons Combining Information from Many Sentiment Lexicons and a Domain Specific Corpus

Most approaches to sentiment analysis require a sentiment lexicon in order to automatically predict sentiment or opinion in a text. The lexicon is generated by selecting words and assigning scores to them, and the performance of the sentiment analysis depends on the quality of the assigned scores. This paper addresses an aspect of sentiment lexicon generation that has been overlooked so far, namely that the most appropriate score assigned to a word in the lexicon depends on the domain. The common practice, on the contrary, is that the same lexicon is used without adjustments across different domains, ignoring the fact that the scores are normally highly sensitive to the domain. Consequently, the same lexicon might perform well on one domain while performing poorly on another, unless some score adjustment is performed. In this paper, we advocate that a sentiment lexicon needs further adjustments in order to perform well in a specific domain. To cope with these domain-specific adjustments, we adopt a stochastic formulation of the sentiment score assignment problem instead of the classical deterministic formulation: viewing a sentiment score as a stochastic variable permits us to accommodate domain-specific adjustments. Experimental results demonstrate the feasibility of our approach and its superiority to generic lexicons without domain adjustments.

Hugo Hammer, Anis Yazidi, Aleksander Bai, Paal Engelstad

Improved Cuckoo Search Algorithm for Document Clustering

Efficient document clustering plays an important role in organizing and browsing information on the World Wide Web. K-means is among the most popular clustering algorithms, due to its simplicity and efficiency; however, it may be trapped in local minima, which leads to poor results. Recently, cuckoo search based clustering has proved to reach interesting results. On the other hand, its number of iterations can increase dramatically due to slow convergence. In this paper, we propose an improved cuckoo search clustering algorithm in order to overcome this weakness of conventional cuckoo search clustering: the global search procedure is enhanced by a local search method. Experimental tests on four text document datasets and one standard dataset extracted from well-known collections show the effectiveness and robustness of the proposed algorithm, which significantly improves clustering quality in terms of fitness function, F-measure and purity.
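The paper's local-search enhancement is its own contribution; the sketch below only illustrates the conventional global move of cuckoo search that it builds on, a Lévy-flight perturbation (via Mantegna's algorithm) of a candidate solution toward the current best.

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Lévy-flight step length via Mantegna's algorithm."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def new_nest(nest, best, alpha=0.01):
    """Global cuckoo-search move: perturb a nest relative to the best
    solution with Lévy-distributed steps; occasional long jumps help
    escape the local minima that trap k-means."""
    return [x + alpha * levy_step() * (x - b) for x, b in zip(nest, best)]

# Toy flattened centroid vector moved around the best nest found so far.
print(new_nest([0.2, 0.5, 0.9], [0.25, 0.4, 0.8]))
```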

Saida Ishak Boushaki, Nadjet Kamel, Omar Bendjeghaba

Information Technology: Requirement Engineering

Frontmatter

Supporting Legal Requirements in the Design of Public Processes

Nowadays, business processes have become a ubiquitous part of public institutions, and the success of an e-government system depends largely on their effectiveness. However, despite the large number of techniques and technologies that are successfully used in the private sector, these cannot be transferred directly to public institutions without taking into account the strongly hierarchical nature and the rigorous legal basis on which public processes rest. This work presents an approach allowing the consideration of legal requirements during the design of public processes. Its main particularity is that these requirements are encapsulated in a legal features model supporting a formal semantics, which prevents the violation of legal requirements and ensures that process evolution will remain in compliance with them.

Amina Cherouana, Latifa Mahdaoui

Requirement Analysis in Data Warehouses to Support External Information

In strategic decision-making, the decision maker needs to exploit the strategic information provided by decision support systems (DSS) and the strategic external information emanating from the enterprise business environment. The data warehouse (DW) is the main component of a data-driven DSS. In the field of DW design, many approaches exist, but they ignore external information and focus only on internal information coming from the operational sources; they do not provide any instrument to take external information into account. In this paper, our objective is to introduce the two models employed in our approach: the requirement model and the environment model. These models are the basis of our DW design approach that supports external information. To evaluate the requirement model, we illustrate with an example how to obtain external information useful for decision-making.

Mohamed Lamine Chouder, Rachid Chalal, Waffa Setra

Engineering the Requirements of Data Warehouses: A Comparative Study of Goal-Oriented Approaches

There is a consensus that the requirements analysis phase in a data warehouse (DW) development project is of critical importance. It amounts to applying requirements engineering (RE) activities to identify the useful decision-making information to be met by the DW. Many approaches have been proposed in this field. Our focus is on goal-oriented approaches, which are requirement-driven DW design approaches. We are interested in investigating to what extent these approaches conform to the RE process. Thus, theoretical foundations about RE are presented, including the classical RE process. After that, goal-oriented DW design approaches are described briefly, and evaluation criteria supporting a comparative study of these approaches are provided.

Waffa Setra, Rachid Chalal, Mohamed Lamine Chouder

Information Technology: OLAP and Web Services

Frontmatter

Research and Analysis of the Stream Materialized Aggregate List

The problem of low-latency processing of large amounts of data acquired in a continuously changing environment has led to the genesis of Stream Processing Systems (SPS). However, sometimes it is crucial to process both historical (archived) and current data, in order to obtain full knowledge about various phenomena. This is achieved in a Stream Data Warehouse (StrDW), where analytical operations on both historical and current data streams are performed. In this paper we focus on the Stream Materialized Aggregate List (StrMAL) – the stream repository tier of a StrDW. As a motivating example, a liquefied petrol storage and distribution system, containing continuous telemetric data acquisition, transmission and storage, will be presented as a possible application of the Stream Materialized Aggregate List.

Marcin Gorawski, Krzysztof Pasterak

SOLAP On-the-Fly Generalization Approach Based on Spatial Hierarchical Structures

On-the-fly generalization denotes the use of automated generalization techniques in real time. This process creates a temporary, generalized dataset exclusively for visualization, not for storage or other purposes, which makes it well suited to highly interactive applications such as online mapping, mobile mapping and SOLAP. The BLG tree is a spatial hierarchical structure widely used in cartographic map generalization, particularly in the context of web mapping. However, this structure is insufficient in the context of SOLAP applications, because it is mainly dedicated to processing geographic information (geometric features), while SOLAP applications manage a decision datum of primary importance: the measure. In this paper, we propose a new structure, the SOLAP BLG tree, adapted to the generalization process in the SOLAP context. Our generalization approach is based on this structure and uses the simplification operator, combining the topological aspect of geographical objects with the decisional aspect (the measure).

Our experiments were performed on a set of vector data related to the phenomenon of road risk.

Tahar Ziouel, Khalissa Amieur-Derbal, Kamel Boukhalfa

QoS-Aware Web Services Selection Based on Fuzzy Dominance

The selection of an appropriate web service for a particular task has become a difficult challenge due to the increasing number of web services offering similar functionalities. Quality of service (QoS) becomes crucial for selecting among functionally similar web services. However, it remains difficult to select an interesting web service from a large number of candidates with a good compromise between multiple QoS aspects. In this paper, we propose a novel concept based on a dominance degree to rank functionally similar services. We rank web services using a fuzzification of Pareto dominance called the Average-Fuzzy-Dominated-Score (AFDetS). We demonstrate the effectiveness of the AFDetS through a set of simulations using a real dataset.
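The precise AFDetS definition is in the paper; the sketch below illustrates the general pattern of fuzzifying Pareto dominance, with an assumed per-criterion fuzzy membership and illustrative QoS vectors (all criteria normalized so that larger is better).

```python
def fuzzy_dominance(a, b, eps=0.1):
    """Fuzzy degree to which service `a` dominates `b`: each criterion
    contributes a value in [0, 1] that grows with the advantage of `a`,
    instead of the crisp 0/1 Pareto test."""
    per_criterion = [min(1.0, max(0.0, 0.5 + (ai - bi) / (2 * eps)))
                     for ai, bi in zip(a, b)]
    return sum(per_criterion) / len(per_criterion)

def average_fuzzy_dominated_score(service, others):
    """Average degree to which `service` is dominated by the others;
    lower scores rank the service closer to the Pareto front."""
    return sum(fuzzy_dominance(o, service) for o in others) / len(others)

# Toy QoS vectors (e.g. normalized availability, throughput, 1 - latency).
services = {"s1": (0.9, 0.7, 0.8), "s2": (0.6, 0.9, 0.5), "s3": (0.4, 0.4, 0.4)}
scores = {n: average_fuzzy_dominated_score(q, [v for m, v in services.items() if m != n])
          for n, q in services.items()}
print(sorted(scores, key=scores.get))  # best (least dominated) first
```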

Amal Halfaoui, Fethallah Hadjila, Fedoua Didi

Information Technology: Recommender Systems and Web Services

Frontmatter

A Hybrid Model to Improve Filtering Systems

There is a continuous information overload on the Web. The problem treated here is how to obtain relevant information (documents, products, services, etc.) in time and without difficulty. Filtering systems, also called recommender systems, have been widely used to recommend relevant resources to users through similarity processes, for example at Amazon, MovieLens, CDNow, etc. The trend is to improve information filtering approaches to better meet users' expectations. In this work, we model a collaborative filtering system using the Friend Of A Friend (FOAF) formalism to represent users and the Dublin Core (DC) vocabulary to represent the resources ("items"). In addition, to ensure the interoperability and openness of this model, we adopt the Resource Description Framework (RDF) syntax to describe the various modules of the system. A hybrid function is introduced for the calculation of predictions. Empirical tests on various real datasets (Book-Crossing, FoafPub) showed satisfactory performance in terms of relevance and precision.
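The paper's hybrid prediction function is not spelled out in the abstract; a common linear blend of collaborative and content-based scores, with a hypothetical weight alpha, illustrates the general pattern.

```python
def hybrid_prediction(cf_pred, cb_pred, alpha=0.7):
    """Blend a collaborative-filtering prediction with a content-based one.

    The paper introduces its own hybrid function; this linear mix with an
    illustrative weight `alpha` only shows the general pattern.
    """
    return alpha * cf_pred + (1 - alpha) * cb_pred

# CF score from similar users' ratings, CB score from metadata overlap.
print(hybrid_prediction(cf_pred=4.2, cb_pred=3.5))
```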

Kharroubi Sahraoui, Dahmani Youcef, Nouali Omar

Towards a Recommendation System for the Learner from a Semantic Model of Knowledge in a Collaborative Environment

Collaboration is joint work between many people that generates a common task. A computing environment can foster collaboration among peers to exchange and share knowledge or skills in order to succeed in a common project. When users interact among themselves and with an environment, they provide a lot of information. This information is recorded and classified in a model of traces to be used to enhance collaborative learning. In this paper, we propose (1) the refinement of a semantic model of traces with indicators calculated according to Bayes' formula and (2) the exploitation of these indicators to provide recommendations to the learner to reinforce learning points with learners of his/her collaboration community identified as "experts".

Chahrazed Mediani, Marie-Hélène Abel, Mahieddine Djoudi

Toward a New Recommender System Based on Multi-criteria Hybrid Information Filtering

Communities of Practice of E-learning (CoPEs) are virtual spaces that facilitate learning and the acquisition of new knowledge by their members. To achieve these objectives, CoPE members exchange and share learning resources (online courses, URLs, articles, theses, etc.). The growing number of CoPE adherents increases the number of learning resources inserted into the memory of this learning space. As a consequence, access to relevant learning resources and collaboration between members who have similar needs become even more difficult; recommender systems are therefore required to facilitate such tasks. In this paper we propose a personalized recommendation approach dedicated to CoPEs that we call the Three Dimensions Hybrid Recommender System (3DHRS). The approach is hybrid: it uses collaborative filtering supported by content-based filtering to eliminate the cold start and new item problems. Furthermore, it considers three criteria, namely role, interest and evaluation, to efficiently solve the new user and sparsity issues. A prototype of the proposed system has been implemented and evaluated through the Moodle platform, as it hosts many communities of practice. Very promising results in terms of mean absolute error have been obtained.

Hanane Zitouni, Omar Nouali, Souham Meshoul

Information Technology: Ontologies

Frontmatter

A New Approach for Combining the Similarity Values in Ontology Alignment

Ontology alignment is the process of identifying semantic correspondences between the entities of different ontologies. It is proposed to enable semantic interoperability between various knowledge sources that are distributed and heterogeneous. Most existing ontology alignment systems are based on the calculation of similarities and often proceed by combining them. The work presented in this paper is an approach denoted PBW (Precision-Based Weighting) which estimates the weights to assign to matchers for aggregation. The approach measures the confidence accorded to a matcher by estimating its precision. Our experimental study was conducted on the Conference track of the OAEI 2012 evaluation campaign. We compared our approach with two methods considered among the best performing in recent years, namely those based on the concepts of harmony and local confidence trust, respectively. The results show the good performance of our approach: it is better, in terms of precision, than the existing methods with which it was compared.
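The aggregation step that PBW feeds can be illustrated as a precision-weighted average of the matchers' similarity values; the matcher names and weights below are illustrative, not the paper's.

```python
def aggregate_similarities(sim_scores, weights):
    """Weighted aggregation of similarity values from several matchers.

    `weights[m]` stands for the estimated precision of matcher `m`;
    the values used here are illustrative.
    """
    total = sum(weights.values())
    return sum(weights[m] * s for m, s in sim_scores.items()) / total

# Similarities of one entity pair according to three hypothetical matchers.
sims = {"string": 0.82, "structural": 0.40, "linguistic": 0.65}
precisions = {"string": 0.9, "structural": 0.5, "linguistic": 0.7}
print(aggregate_similarities(sims, precisions))
```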

Moussa Benaissa, Abderrahmane Khiat

Exact Reasoning over Imprecise Ontologies

A real world of objects (individuals) is represented by a set of assertions written with respect to the defined syntax and semantics of a description logic (a formal language). These assertions should be consistent with the ontology axioms described as a terminology of knowledge. The axioms and the assertions constitute an ontology about a particular domain. A real world is a possible world if all the assertions and axioms over its set of individuals are consistent. It is then possible to query the possible world with specific assertions (such as instance checking) to determine whether they are consistent with it or not. However, an ontology can contain vague concepts, meaning the knowledge about them is imprecise; query answering then becomes impossible, due to the open-world assumption, if the necessary information is incomplete (currently absent). A concept description can be very exact (crisp concept) or exact (fuzzy concept) if its knowledge is complete; otherwise it is inexact (vague concept) if its knowledge is incomplete. In this paper we propose a vagueness theory based on the definition of truth gaps as ontology assertions to express vague concepts in the Ontology Web Language (OWL 2), which is based on the description logic SROIQ(D), and an extension of the tableau algorithm for reasoning over imprecise ontologies.

Mustapha Bourahla

Defining Semantic Relationships to Capitalize Content of Multimedia Resources

Existing systems and architectures hardly provide any way to localize sub-parts of multimedia objects (e.g. sub-regions of images, persons, events…), which represent hidden semantics of resources. To simplify and automate the discovery of hidden connections between such resources, we describe and evaluate in this paper an algorithm for creating semantic relationships between multimedia news resources, producing a contextual schema (represented in RDF) as a result. The latter, which could eventually be used by any retrieval system, is integrated in our main multimodal retrieval system.

We have also proposed and introduced a special measure of accuracy, since the evaluation relies on users' intentions. An experimental evaluation of our algorithm is presented, showing encouraging results.

Mohamed Kharrat, Anis Jedidi, Faiez Gargouri

Security and Network Technologies: Security

Frontmatter

A Multi-agents Intrusion Detection System Using Ontology and Clustering Techniques

Nowadays, advances in technology have brought more sophisticated intrusions. Consequently, Intrusion Detection Systems (IDS) are quickly becoming a popular requirement in building a network security infrastructure. Most existing IDS are generally centralized and suffer from a number of drawbacks, e.g., high rates of false positives and low efficiency, especially when facing distributed attacks. This paper introduces a novel hybrid multi-agent IDS based on the intelligent combination of a clustering technique and an ontology model, called OCMAS-IDS. The latter integrates the desirable features provided by the multi-agent methodology with the benefits of semantic relations as well as the high accuracy of the data mining technique. The experiments carried out show the efficiency of our distributed IDS, which sharply outperforms other systems on real traffic and a set of simulated attacks.

Imen Brahmi, Hanen Brahmi, Sadok Ben Yahia

On Copulas-Based Classification Method for Intrusion Detection

The intent of this paper is to develop a nonparametric classification method using copulas to estimate the conditional probability that an element belongs to a connected class, while taking into account the dependence between the attributes of this element. This technique is suitable for different types of data, even those whose probability distribution is not Gaussian. To improve the effectiveness of the method, we apply it to a network intrusion detection problem where the prior classes are topologically connected.

Abdelkader Khobzaoui, Mhamed Mesfioui, Abderrahmane Yousfate, Boucif Amar Bensaber

On-Off Attacks Mitigation against Trust Systems in Wireless Sensor Networks

Trust and reputation systems have been regarded as a powerful tool to defend against insider attacks caused by captured nodes in wireless sensor networks (WSNs). However, trust systems are vulnerable to on-off attacks, in which malicious nodes opportunistically behave well or badly, compromising the network in the hope that the bad behavior will go undetected. Malicious nodes can thus remain trusted while behaving badly. In this paper, we propose O2Trust, an on-off attack mitigation scheme for trust systems in wireless sensor networks. O2Trust adopts a penalty policy based on the misbehavior history of each node in the network as a reliability factor that influences the calculation of the trust value. This punishment feature helps to detect malicious nodes that aim to launch intelligent attacks against trust establishment, and consequently the on-off attack is mitigated efficiently.

Nabila Labraoui, Mourad Gueroui, Larbi Sekhri

A Real-Time PE-Malware Detection System Based on CHI-Square Test and PE-File Features

Constructing an efficient malware detection system requires taking into consideration two important aspects: accuracy and detection time. However, finding an appropriate balance between these two characteristics remains, at this time, a very challenging problem. In this paper, we present a real-time PE (Portable Executable) malware detection system based on the analysis of the information stored in the PE Optional Header fields (PEF). Our system uses a combination of the Chi-square (χ²) score and the Phi (ϕ) coefficient as the feature selection method. We evaluated our system using the Rotation Forest classifier implemented in WEKA and reached more than 97% accuracy. Our system is able to categorize a file in 0.077 seconds, which makes it adequate for real-time detection of malware.
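Both selection statistics can be computed from a 2x2 contingency table of feature occurrence versus class; a minimal sketch with illustrative counts follows.

```python
import math

def chi2_phi(a, b, c, d):
    """Chi-square score and Phi coefficient from a 2x2 contingency table:

                        malware   benign
        feature present    a        b
        feature absent     c        d
    """
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0, 0.0
    chi2 = n * (a * d - b * c) ** 2 / denom
    phi = (a * d - b * c) / math.sqrt(denom)   # signed association in [-1, 1]
    return chi2, phi

# A header field frequent in malware and rare in benign files scores high.
print(chi2_phi(a=80, b=5, c=20, d=95))
```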

Mohamed Belaoued, Smaine Mazouzi

Security and Network Technologies: Wireless Sensor Networks

Frontmatter

Balanced and Safe Weighted Clustering Algorithm for Mobile Wireless Sensor Networks

The main concern of clustering approaches for mobile wireless sensor networks (WSNs) is to prolong the battery life of the individual sensors and the network lifetime. In this paper, we propose a balanced and safe weighted clustering algorithm, an extended version of our previous algorithm (ES-WCA) for mobile WSNs, using a combination of five metrics. Among these metrics lies the behavioral-level metric, which promotes a safe choice of cluster head in the sense that the latter will never be a malicious node. The goals of the proposed algorithm are to offer better performance in terms of the number of re-affiliations and to generate a reduced number of balanced and homogeneous clusters; coupled with suitable routing protocols, the algorithm aims to maintain a stable clustering structure. We implemented and simulated the proposed algorithm to demonstrate its performance.

Amine Dahane, Nasr-Eddine Berrached, Abdelhamid Loukil

Distributed Algorithm for Coverage and Connectivity in Wireless Sensor Networks

Even if several algorithms have been proposed in the literature to solve the coverage problem in Wireless Sensor Networks (WSNs), they still suffer from some weaknesses. This is the reason why we suggest in this paper a distributed protocol, called Single Phase Multiple Initiator (SPMI). Its aim is to find a Connected Cover Set (CCS) ensuring coverage and connectivity in a WSN. Our idea is based on determining a Connected Dominating Set (CDS) which has the minimum number of nodes necessary and sufficient to guarantee coverage of the area of interest (AI), when the WSN is modeled as a graph. The suggested protocol requires only a single phase to construct a CDS in a distributed manner, without using sensors' location information. Simulation results show that SPMI assures better coverage and connectivity of the AI by using fewer active nodes and by inducing very low message overhead and low energy consumption, compared with some existing protocols.

Abdelkader Khelil, Rachid Beghdad

Optimizing Deployment Cost in Camera-Based Wireless Sensor Networks

We discuss in this paper a deployment optimization problem in camera-based wireless sensor networks. In particular, we propose a mathematical model to solve the problem of minimizing the number of cameras required to cover a set of targets with a given level of quality. Since solving this kind of problem with exact methods is computationally expensive, we rather rely on an adapted version of Binary Particle Swarm Optimization (BPSO). Our preliminary results are encouraging, since we obtain near-optimal solutions in few iterations of the algorithm. We also discuss the relevance of hybrid metaheuristics and parallel algorithms in this context.

Mehdi Rouan Serik, Mejdi Kaddour

A Version of LEACH Adapted to the Lognormal Shadowing Model

Most protocols designed for wireless sensor networks (WSNs) have been developed for an ideal environment represented by the unit disk graph (UDG) model, in which data is considered successfully received if the communicating nodes are within transmission range of each other. However, these protocols do not take into account the fluctuations of the radio signal that happen in realistic environments. This paper aims to adapt the LEACH protocol to realistic environments, since LEACH is considered the best cluster-based routing protocol in terms of energy consumption for WSNs. We have carried out an evaluation of LEACH based on two models: the lognormal shadowing (LNS) model, in which the probability of error-free reception is calculated according to the Euclidean distance separating the communicating nodes, and a probabilistic model, in which the probability of reception is generated randomly. In both models, if the probability of successful reception is lower than a predefined threshold, multi-hop communication is used for forwarding data from cluster heads (CHs) towards the base station, instead of the direct communication of the original version of LEACH. The main aims of this contribution are to minimize energy consumption and to guarantee reliable data delivery to the base station. The simulation results show that our proposed algorithm outperforms the original LEACH for both models in terms of energy consumption and ratio of successfully received packets.
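Under the log-normal shadowing model, the probability of error-free reception at a given distance follows from the Gaussian distribution of the received power in dB; a minimal sketch with illustrative path-loss parameters (not taken from the paper) is shown below.

```python
import math

def reception_probability(d, d0=1.0, pl_d0=55.0, n=3.0, pt=0.0,
                          sigma=4.0, threshold=-85.0):
    """Probability of reception at distance d (meters) under log-normal
    shadowing:

        Pr(d) = Pt - PL(d0) - 10 n log10(d / d0) + X_sigma,
        X_sigma ~ N(0, sigma^2)   (all powers in dB).

    The link is usable when Pr(d) exceeds the receiver threshold; all
    parameter values here are illustrative.
    """
    mean_pr = pt - pl_d0 - 10 * n * math.log10(d / d0)
    # P(Pr > threshold) for a Gaussian = Q((threshold - mean) / sigma).
    return 0.5 * math.erfc((threshold - mean_pr) / (sigma * math.sqrt(2)))

for d in (5, 20, 50):
    print(d, round(reception_probability(d), 3))
```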

Chifaa Tabet Hellel, Mohamed Lehsaini, Hervé Guyennet

Security and Network Technologies: Energy and Synchronisation

Frontmatter

High Velocity Aware Clocks Synchronization Approach in Vehicular Ad Hoc Networks

Clock synchronization plays an important role in organizing communications between applications in Vehicular Ad hoc NETworks (VANETs), which have a strong need for coordination. Having a global time reference, or knowing the value of the physical clock (with an acceptable approximation) of the cooperating processes involved in the provision of a service by distributed applications, is of fundamental importance in decentralized systems, particularly in VANETs. The intrinsic and constraining features of VANETs, especially the high mobility of vehicles, make clock synchronization mechanisms more complex and require a concise and specific treatment. The aim of the work reported in this paper is to propose a new clock synchronization protocol for VANETs that is sufficiently robust, has good precision, and copes with the main constraint, namely high node mobility. Our proposed protocol, named Time Table Diffusion (TTD), was simulated using a combination of two simulators, VanetMobiSim and NS2, to evaluate its performance in terms of convergence time and number of messages generated. The obtained results were conclusive.

Khedidja Medani, Makhlouf Aliouat, Zibouda Aliouat

An Energy-Efficient Fault-Tolerant Scheduling Algorithm Based on Variable Data Fragmentation

In this article, we propose an approach to build fault-tolerant distributed real-time embedded systems. From a given system description and a given fault hypothesis, we automatically generate a fault-tolerant distributed schedule that achieves low energy consumption and high reliability. Our scheduling algorithm is dedicated to multi-bus heterogeneous architectures with multiple processors linked by several shared buses. It is based on active redundancy to mask a fixed number L of processor failures supported in the system, and on passive redundancy based on variable data fragmentation to tolerate N bus failures. In order to maximize the system's reliability, the replicas of each operation are scheduled on different reliable processors, and the size of each fragmented datum depends on the GSFR and the bus failure rates. Finally, we show with an example that our approach can maximize reliability and reduce energy consumption when using active redundancy.

Chafik Arar, Mohamed Salah Khireddine, Abdelouahab Belazoui, Randa Megulati

Genetic Centralized Dynamic Clustering in Wireless Sensor Networks

In order to overcome the energy loss caused by communications in wireless sensor networks (WSNs), the use of clustering has proven to be effective. In this paper, we propose a dynamic centralized genetic algorithm (GA)-based clustering approach that optimizes the clustering configuration (cluster heads and cluster members) to limit node energy consumption. The obtained simulation results show that the proposed technique outperforms the LEACH clustering algorithm.

Mekkaoui Kheireddine, Rahmoun Abdellatif, Gianluigi Ferrari

Security and Network Technologies: Potpourri

Frontmatter

Region-Edge Cooperation for Image Segmentation Using Game Theory

Image segmentation is a central problem in image analysis. It consists of extracting objects from an image and separating the background from the regions of interest. In the literature, there are mainly two dual approaches, namely region-based segmentation and edge-based segmentation. In this article, we propose to take advantage of game theory in image segmentation by fusion of results. The presented game is cooperative, in that both players, represented by the two segmentation modules (region-based and edge-based), try as a coalition to enhance the value of a common characteristic function. This is a variant of the parallel decision-making procedure based on game theory proposed by Chakraborty and Duncan [1]. The pixels involved are those generated by the fusion-based cooperation between the edge detector (active contour) and the region detector (region growing), posing a decision-making problem: adding a pixel to, or removing it from, the region of interest depends strongly on the value of the characteristic function. Then, to study the effectiveness and noise robustness of our approach, we generalized our experiments by applying the technique to a variety of images of different types, taken mainly from two well-known test databases.

Omar Boudraa, Karima Benatchba

Improved Parameters Updating Algorithm for the Detection of Moving Objects

The presence of dynamic scenes is a challenging problem in video surveillance tasks. The Mixture of Gaussians (MOG) is the most appropriate method to model dynamic backgrounds; however, local variations and instantaneous variations in brightness decrease its performance. We present in this paper a novel and efficient method that significantly reduces MOG drawbacks through an improved parameter updating algorithm. Starting from a normalization step, we divide each extracted frame into several blocks. Then, we apply an improved updating algorithm to each block to control local variations. When significant environment changes are detected in one or more blocks, the MOG parameters assigned to these blocks are updated while the parameters of the rest remain the same. Experimental results demonstrate that the proposed approach is effective and efficient compared with state-of-the-art background subtraction methods.

Brahim Farou, Hamid Seridi, Herman Akdag

Towards Real-Time Co-authoring of Linked-Data on the Web

Real-time co-authoring of Linked Data (LD) on the Web is becoming a challenging problem in the Semantic Web area. LD consists of RDF (Resource Description Framework) graphs. We propose to apply state-of-the-art collaborative editing techniques to manage shared RDF graphs and to control concurrent modifications. In this paper, we present two concurrency control techniques. The first one is based on a client-server architecture. The second one is more flexible, as it enables collaborative co-authoring to be deployed in mobile and P2P architectures and supports dynamic groups where users can leave and join at any time.

Moulay Driss Mechaoui, Nadir Guetmi, Abdessamad Imine

Software Engineering: Modeling and Meta Modeling

Frontmatter

A High Level Net for Modeling and Analysis of Reconfigurable Discrete Event Control Systems

This paper deals with the automatic reconfiguration of discrete event control systems. We propose to enrich the formalism of recursive Petri nets with the concept of feature, from which runtime reconfigurations are facilitated. This new formalism is applied in the context of an automated production system. Furthermore, the enhanced recursive Petri net is translated into rewriting logic, and by using the Maude LTL model-checker one can verify several behavioural properties related to reconfiguration.

Ahmed Kheldoun, Kamel Barkaoui, JiaFeng Zhang, Malika Ioualalen

Hybrid Approach for Metamodel and Model Co-evolution

Evolution is an inevitable aspect which affects metamodels. When metamodels evolve, model conformity may be broken. Model co-evolution is critical in model-driven engineering to automatically adapt models to the newer versions of their metamodels. In this paper we discuss what can be done to transfer models between versions of a metamodel. For this purpose we introduce a hybrid approach for model and metamodel co-evolution, which first uses matching between the two metamodel versions to discover changes, and then applies evolution operators to migrate models. In this proposal, migration of models is done automatically, except for non-resolvable changes, where assistance is proposed to users in order to co-evolve their models and regain conformity.

Fouzia Anguel, Abdelkrim Amirat, Nora Bounour

Extracting and Modeling Design Defects Using Gradual Rules and UML Profile

There is no general consensus on how to decide whether a particular design violates a model quality. In fact, the literature describes some defects only textually, and detecting these design defects is usually a difficult problem: deciding which objects suffer from a defect depends heavily on the interpretation of each analyst. Experts often need to minimize design defects in software systems to improve design quality. In this paper we propose a design defect detection approach based on object-oriented metrics. Using gradual rules, we generate detection rules for each design defect at the model level, aiming to extract, for each design defect, the correlated co-variations of object-oriented metrics. The defects are then modeled in a standard way, using the proposed UML profile for design defect modeling. We experiment our approach on 16 design defects using 32 object-oriented metrics.

Mohamed Maddeh, Sarra Ayouni

An Approach to Integrating Aspects in Agile Development

Separation of concerns is an important principle that helps to improve reusability and simplify evolution. Crosscutting concerns like security, and many others, often exist before implementation, in both the analysis and design phases; it is therefore worthwhile to develop aspect-oriented software development approaches that properly handle these concerns and ensure their separation.

Moreover, agile methods attempt to reduce risk and maximize productivity by carrying out software development in short iterations while limiting the importance of secondary or temporary artifacts; however, these approaches have problems dealing with the crosscutting nature of some stakeholders' requirements. The work presented in this paper aims at enriching agile development with aspect-oriented approaches. By taking into account the crosscutting nature of some stakeholders' requirements, the combination of the two approaches improves software changeability during the repeated agile iterations.

Tadjer Houda, Meslati Djamel

Software Engineering: Checking and Verification

Frontmatter

On the Optimum Checkpointing Interval Selection for Variable Size Checkpoint Dumps

Checkpointing is a technique often employed to grant fault tolerance to applications executing in failure-prone environments. It consists of regularly saving the application's state in separate, failure-independent storage, such that if the application fails, it can be resumed without necessarily being restarted. In this context, setting the checkpointing frequency is an important topic, which we address in this paper. We particularly consider hybrid fault tolerance and variable-size checkpoint dumps. We then evaluate our solution, compare it with state-of-the-art models, and show that it brings better results.
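For orientation, the classical fixed-cost baseline that interval-selection models refine is Young's first-order approximation; the paper's variable-size, hybrid fault-tolerance model goes beyond this, so the sketch below is only the standard starting point.

```python
import math

def optimal_checkpoint_interval(checkpoint_cost, mtbf):
    """Young's classical first-order approximation of the optimum
    checkpointing interval: T_opt = sqrt(2 * C * MTBF), with C the time
    to save one checkpoint and MTBF the mean time between failures."""
    return math.sqrt(2 * checkpoint_cost * mtbf)

# E.g. 30 s dumps on a machine failing every 24 h on average:
print(optimal_checkpoint_interval(30, 24 * 3600) / 60, "minutes")
```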

Samy Sadi, Belabbas Yagoubi

Monitoring Checklist for Ceph Object Storage Infrastructure

Object storage clouds are widely used to store unstructured data like photos, emails, videos, etc. generated by the use of digital technologies. The number of object storage services has increased rapidly over the years, and so has the complexity of the infrastructure behind them. Effective and efficient monitoring is constantly needed to properly operate and manage this complex object storage infrastructure. Ceph is an open-source cloud storage platform that provides object storage as a service. Several works have discussed ways to collect monitoring data; however, there is little mention of what needs to be monitored. In this paper, we provide an infrastructure monitoring list for the Ceph object storage cloud. We analyze the Ceph storage infrastructure and its processes to identify the proposed lists. The infrastructure monitoring list allows selecting requirements, in contrast to specifying fresh requirements, for monitoring. It helps the developer during requirements elicitation of the monitoring functionality when developing a new tool or updating an existing one. The checklist is also useful during the monitoring activity, for selecting the parameters that need to be monitored by the system administrator.

Pragya Jain, Anita Goel, S. C. Gupta

Towards a Formalization of Real-Time Patterns-Based Designs

Informal descriptions (UML and text) of design patterns are adopted to facilitate their understanding by software developers. However, these descriptions lead to ambiguities, mainly when we consider real-time design patterns that deal with critical problems encountered in the design of real-time systems. Hence, there is a need for a formal specification of DPs and RTDPs to ensure their successful application. In this paper, we propose an approach to formalizing system designs based on real-time patterns (RTDPs). The processes of instantiation and composition of design patterns permit us to generate design models (structural and dynamic) of complex systems. The resulting designs are represented in the UML-MARTE profile to express temporal properties and constraints, and the algebraic specifications (in the Maude language) become more natural and more efficient.

Kamel Boukhelfa, Faiza Belala

Backmatter
