Skip to main content

2013 | Buch

Modeling Approaches and Algorithms for Advanced Computer Applications

insite
SUCHEN

Über dieses Buch

"During the last decades Computational Intelligence has emerged and showed its contributions in various broad research communities (computer science, engineering, finance, economic, decision making, etc.). This was done by proposing approaches and algorithms based either on turnkey techniques belonging to the large panoply of solutions offered by computational intelligence such as data mining, genetic algorithms, bio-inspired methods, Bayesian networks, machine learning, fuzzy logic, artificial neural networks, etc. or inspired by computational intelligence techniques to develop new ad-hoc algorithms for the problem under consideration.

This volume is a comprehensive collection of extended contributions from the 4th International Conference on Computer Science and Its Applications (CIIA’2013) organized into four main tracks: Track 1: Computational Intelligence, Track 2: Security & Network Technologies, Track 3: Information Technology and Track 4: Computer Systems and Applications. This book presents recent advances in the use and exploitation of computational intelligence in several real world hard problems covering these tracks such as image processing, Arab text processing, sensor and mobile networks, physical design of advanced databases, model matching, etc. that require advanced approaches and algorithms borrowed from computational intelligence for solving them.

Inhaltsverzeichnis

Frontmatter
New Challenges for Future Avionic Architectures

Electronic sets operated on aircraft are usually summarized as “avionic architectures” (for “aviation electronic architecture”). Since the 70s, avionic architectures, composed of digital processing modules and communication buses, are supporting more and more avionic applications such as flight control, flight management, etc. Hence, avionic architectures have become a central component of an aircraft. They have to ensure a large variety of important requirements: safety, robustness to equipment failures, determinism, and real-time. In response to these requirements, aircraft manufacturers have proposed several solutions. This article has a twin objectives: firstly to survey the state of the art of existing avionic architectures, including the IMA (for Integrated Modular Avionic) architecture of the most recent aircraft; and secondly to discuss two challenges for the next generation of avionic architectures: reconfiguration capabilities, and integrating COTS processing equipment such as multi-core processors. We believe that these two challenges will be central to the next generation of IMA architectures (called IMA-2G for IMA-2d generation).

Frédéric Boniol
Checking System Substitutability: An Application to Interactive Systems

The capability to substitute a given system by another one is a property useful for dealing with adaptation, maintenance, interoperability, reliability, etc. This talk proposes a formally based approach for checking the substitutability of a system by another one. It exploits the weak bi-simulation relationship.

In this talk a system is seen as a state-transition system. Two systems are observed to check if one may be substituted by the other preserving their behaviour. The weak bi-simulation relationship is revisited to handle systems that have different sets of labels by defining a relation on labels. A transformation of the systems to be compared is defined according to the relation defined on labels. Classical weak bi-simulation is then used to model check the substitutability property.

The approach is illustrated on the case of plastic interactive systems. We show how an interactive system supporting a set of interactive tasks can be replaced by another interactive system that performs the same tasks with different interaction devices. Relations on labels are borrowed from an ontology of interaction and of interaction devices. A case study will be used along the talk to illustrate how the proposed approach practically works.

Yamine Ait Ameur, Abdelkrim Chebieb
The Role of Software Tracing in Software Maintenance

Our society depends greatly on software systems for many critical activities including finance, health, education, telecommunications, aerospace, and more. Maintaining these systems to keep up with new user needs and ever changing technologies is an important task, but also a challenging and costly one. Research shows that software maintenance activities can take up to 80% of the time and effort spent during the lifecycle of software. The constant challenge for software maintainers is to understand what a system does before making any changes to it. In an ideal situation, this understanding should come from system documentation, but, for a variety of reasons, maintaining sufficiently good documentation has been found to be impractical in many organizations.

In this talk, I will discuss techniques to aid software engineers in understanding software systems. I will focus on the techniques that permit the understanding of system behaviour through tracing. Tracing encompasses three main steps: run the system, observe what it does, and make sense of how and why it does it in a certain way. The major challenge is that typical traces can be overwhelmingly large, often millions of lines long. I will present a set of techniques that we have developed to simplify the analysis of large traces. I will report on lessons learned from industrial projects in which my research lab has played a leading role. I will conclude my talk with an outlook on future challenges and research directions.

Abdelwahab Hamou-Lhadj
Information Retrieval and Social Media

The social Web (Web 2.0) changed the way people communicate, now a large number of online tools and platforms, such as participative encyclopedias (e.g., wikipedia.org), social bookmarking platforms (e.g., connotea.org from the Nature Publishing Group), public debate platforms (e.g., agoravox.fr), photo sharing platforms (e.g., flickr.com), micro blogging platforms (e.g., blogger.com, twitter.com), allow people to interact and to share contents. These tools provide to users the ability to express their opinions, to share content (photos, blog posts, videos, bookmarks, etc.); to connect with other users, either directly or via common interests often reflected by shared content; to add free-text tags or keywords to content; users comment on content items. All these user-generated contents need not only to be indexed and searched in effective and scalable ways, but they also provide a huge number of meaningful data, metadata that can be used as clues of evidences in a number of tasks related particularly to information retrieval. Indeed, these user-generated contents have several interesting properties, such as diversity, coverage and popularity that can be used as wisdom of crowds in search process. This talk will provide an overview of this research field. We particularly describe some properties and specificities of these data, some tasks that handle these data, we especially focus on two tasks namely searching in social media (ranking models for social IR, (micro)blog search, forum search, real time social search) and exploiting social data to improve a search.

Mohand Boughanem
Machine Learning Tool for Automatic ASA Detection

The application of machine learning tools has shown its advantages in medical aided decision. This paper presents the implementation of three supervised learning algorithms: the C4.5 decision tree classifier, the Support Vector Machines (SVM) and the Multilayer Perceptron MLP’s in MATLAB environment, on the preoperative assessment database. The classification models were trained using a new database collected from 898 patients, each of whom being represented by 17 features and included in one among 4 classes. The patients in this database were selected from different private clinics and hospitals of western Algeria.In this paper, the proposed system is devoted to the automatic detection of some typical features corresponding to the American Society of Anesthesiologists sores (ASA scores). These characteristics are widely used by all Doctors Specialized in Anesthesia (DSA’s) in pre-anesthesia examinations. Moreover, the robustness of our system was evaluated using a 10-fold cross-validation method and the results of the three proposed classifiers were compared.

Mohammed El Amine Lazouni, Mostafa El Habib Daho, Nesma Settouti, Mohammed Amine Chikh, Saïd Mahmoudi
Stator Faults Detection and Diagnosis in Reactor Coolant Pump Using Kohonen Self-organizing Map

Nuclear power industries have increasing interest in using fault detection and diagnosis (FDD) methods to improve availability, reliability, and safety of nuclear power plants (NPP). In this paper, a procedure for stator fault detection and severity evaluation on reactor coolant pump (RCP) driven by induction motor is presented. Fault detection system is performed using unsupervised artificial neural networks: the so-called Self-Organizing Maps (SOM). Induction motor stator currents are measured, recorded, and used for feature extraction using Park transform, Zero crossing times signal, and the envelope, then statistical features are calculated from each signal which serves for feeding the neural network, in order to perform the fault diagnosis. This network is trained and validated on experimental data gathered from a three-phase squirrelcage induction motor. It is demonstrated that the strategy is able to correctly identify the stator fault and safe cases. The system is also able to estimate the extent of the stator faults.

Smail Haroun, Amirouche Nait Seghir, Said Touati
A New Approach for the Extraction of Moving Objects

We propose in this paper a background subtraction system for image sequences extracted from fixed camera using Gaussian Mixture Models (GMM) and the analysis of color histograms. This system can achieve best accuracy than a simple GMM while maintaining the same computational resources. Images extracted from the video will first be divided into several areas of equal size where the behavior of each area is monitored by the analysis of color histograms. For each new frame the color histograms of the zones will be calculated and parts reported to have significant variation in histogram will be updated at the background model. Test carried out show that this approach present best results than a simple GMM. This improvement is important for processing in real time environment.

Farou Brahim, Seridi Hamid, Akdag Herman
Seeking for High Level Lexical Association in Texts

Searching information in a huge amount of data can be a difficult task. To support this task several strategies are used. Classification of data and labeling are two of these strategies. Used separately each of these strategies have certain limitations. Algorithms used to support the process of automated classification influence the result. In addition, many noisy classes can be generated. On the other hand, labeling of document can help recall but it can be time consuming to find metadata. This paper presents a method that exploits the notion of association rules and maximal association rules, in order to assist textual data processing, these two strategies are combined.

Ismaïl Biskri, Louis Rompré, Christophe Jouis, Abdelghani Achouri, Steve Descoteaux, Boucif Amar Bensaber
Statistical and Constraint Programming Approaches for Parameter Elicitation in Lexicographic Ordering

In this paper, we propose statistical and constraint programming approaches in order to tackle the parameter elicitation problem for the lexicographic ordering (LO) method. Like all multicriteria optimization methods, the LO method have a parameter that should be fixed carefully, either to determine the optimal solution (best tradeoff), or to rank the set of feasible solutions (alternatives). Unfortunately, the criteria usually conflict with each other, and thus, it is unlikely to find a convenient parameter for which the obtained solution will perform best for all criteria. This is why elicitation methods have been populated in order to assist the Decision Maker (DM) in the hard task of fixing the parameters. Our proposed approaches require some prior knowledge that the DM can give straightforwardly. These informations are used in order to get automatically the appropriate parameters. We also present a relevant numerical experimentations, showing the effectiveness of our approaches in solving the elicitation problem.

Noureddine Aribi, Yahia Lebbah
Some Global Measures for Shape Retrieval

In this paper, we propose an efficient shape retrieval method. The idea is very simplistic, it is based on two global measures, the ellipse fitting and the minimum area rectangle. In this approach we don’t need any information about the shape structure or its boundary form, as in most shape matching methods, we have only to compute the relativity between the surface of the shape and both of the minimum area rectangle encompassing it and its ellipse fitting. The proposed method is invariant to similarity transformations (translation, isotropic scaling and rotation). In addition, the matching gives satisfying results with minimal cost. The retrieval performance is illustrated using the MPEG-7 shape database.

Saliha Bouagar, Slimane Larabi
Clustering with Probabilistic Topic Models on Arabic Texts

Recently, probabilistic topic models such as LDA (Latent Dirichlet Allocation) have been widely used for applications in many text mining tasks such as retrieval, summarization, and clustering on different languages. In this paper we present a first comparative study between LDA and K-means, two well-known methods respectively in topics identification and clustering applied on Arabic texts. Our aim is to compare the influence of morpho-syntactic characteristics of Arabic language on performance of first method compared to the second one. In order to study different aspects of those methods the study is conducted on benchmark document collection in which the quality of clustering was measured by the use of two well-known evaluation measure, F-measure and Entropy. The results consistently show that LDA perform best results more than K-means in most cases.

Abdessalem Kelaiaia, Hayet Farida Merouani
Generating GCIs Axioms from Objects Descriptions in $\mathcal{EL}$ -Description Logics

Description Logic are well appropriate for knowledge representation. In such a case, intensional knowledge of a given domain is represented in the form of a terminology (TBox) which declares general properties of concepts relevant to the domain. The terminological axioms which are used to describe the objects of the considered domain are usually manually entered. Such an operation being tiresome, Formal Concept Analysis (FCA) has been already used for the automatic learning of terminological axioms from object descriptions (i.e. from concept instances). However, in all existing approaches, induced terminological axioms are exclusively restricted to the conjunctive form, that is, the existential constructor (∃

r.C

) is not allowed. In this paper, we propose a more general approach that allows to learn existentially quantified general concept inclusion (GCIs) axioms from object descriptions given as assertions in the

$\mathcal{EL}$

language.

Zina Ait-Yakoub, Yassine Djouadi
Automatic Phonetization of Arabic Text

We present in this paper a system for automatic phonetization of text dedicated to the standard Arabic language. The general methodologies, as well as technical details are given. The phonetic transcription constitutes a fundamental step for the development of any text to speech system. The system developed here consists of two stages. In the first stage are processed the exceptions based on an Arabic language exception database. The second step consists in processing the remaining text content by using phonetic transcription rules we have elaborated. Validation is performed through phonetic transcription of a well established corpus of standard Arabic text.

Fayçal Imedjdouben, Amrane Houacine
A Novel Region Growing Segmentation Algorithm for Mass Extraction in Mammograms

This article presents an automatic mass extraction approach by application of a novel region growing algorithm. The region-growing process is guided by regional features analysis consequently; the result will be a robust algorithm able of respecting various image characteristics. The evaluation of the proposed approach was carried out on all MiniMIAS database mammograms containing circumscribed lesions. All masses from various characters of background tissues are well detected.

Ahlem Melouah
Coverage Enhancement in Wireless Video-Based Sensor Networks with Rotating Capabilities

Video monitoring for infrastructure surveillance and control is a special case of wireless sensor network applications in which large amounts of data are sensed and processed in real-time, and then communicated over a wireless network called a Wireless Video Sensor Network (WVSN). Effectively managing of such as networks is a major challenge. Considering that, a critical issue is the quality of the deployment from the sensing coverage viewpoint. Area coverage is still an essential issue in a WVSN. In this paper, we study the area coverage problem in WVSN with rotatable sensors, which can switch to the best direction to get high coverage. After the video sensors are randomly deployed, each sensor calculates its next new direction to switch in order to obtain a better coverage than previous one. In order to extend the network lifetime, our approach is to define a subset of the deployed nodes called a cover set to be active while the other nodes can sleep. Our proposed scheme is implemented and tested in OMNeT++ and shows significant enhancement in terms of average percentage of coverage.

Nawel Bendimerad, Bouabdellah Kechar
Improving Wireless Sensor Networks Robustness through Multi-level Fault Tolerant Routing Protocol

Wireless Sensor Networks (WSN) are now commonly admitted as a promising networking paradigm bringing huge benefits in industrial and socio-economical domains. However, although the applications dedicated to WSNs are in constant increase and research activities are intensely carried on, WSNs remain hindered by some weaknesses and lacks preventing them to reach their full maturity. These lacks may be seen even in basic WSN services like data routing in which energy optimization, data security, fault tolerant nodes, etc, have to be improved. So, we are dealing with the fault tolerance issue of routing protocols in order to dependably ensure data forwarding from nodes to sink. This is indispensable for a WSN to successfully accomplish its mission.

In this paper, we propose an energy economical and Fault Tolerant Multilevel Routing Protocol, borrowing idea from the well known TEEN and LEACH protocols. Simulation results via NS2 simulator showed convincing and interesting protocol performances.

Zibouda Aliouat, Makhlouf Aliouat
A New Approach for QCL-Based Alert Correlation Process

Intrusion Detection Systems (IDS) are very important tools for network monitoring. However, they often produce a large quantity of alerts. The security operator who analyses IDS alerts is quickly overwhelmed. Alert correlation is a process applied to the IDS alerts in order to reduce their number. In this paper, we propose a new approach for logical based alert correlation which integrates the security operator’s knowledge and preferences in order to present to him only the most suitable alerts. The representation and the reasoning on these knowledge and preferences are done using a new logic called Instantiated First Order Qualitative Choice Logic (IFO-QCL). Our modeling shows an alert as an interpretation which allows us to have an efficient algorithm that performs the correlation process in a polynomial time. Experimental results are achieved on data collected from a real system monitoring. The result is a set of stratified alerts satisfying the operators criteria.

Lydia Bouzar-Benlabiod, Salem Benferhat, Thouraya Bouabana-Tebibel
A Pragmatic and Scalable Solution for Free Riding Problem in Peer to Peer Networks

Peer-to-Peer (P2P) systems are increasingly popular as an alternative to the traditional client-server model for many applications, particularly file sharing. Free-riding is one of the most serious problems encountered in Peer-to-peer (P2P) systems like BitTorrent.

Free riding is a serious problem in P2P networks for both Internet service providers (ISPs) and users. To users, the response time of query increases significantly because a few altruistic peers have to handle too many requests. In P2P file sharing systems, free-riders who use others’ resources without sharing their own cause system-wide performance degradation. Therefore, the rational P2P application developers should develop some anti-free riding measures to deal with it.

In this paper, we propose a pragmatic and scalable peer to peer scheme based on a utility function that uses two parameters: collaboration of peers and detection of tentative of free riders. Performance evaluation of the proposed solution shows that results are globally satisfactory.

Mourad Amad, Djamil Aïssani, Ahmed Meddahi, Abdelmalek Boudries
Cooperative Strategy to Secure Mobile P2P Network

Mobile peer-to-peer networking (MP2P) is a relatively new paradigm compared to other wireless networks. In the last years, it has gained popularity because of its practice in applications such as file sharing over Internet in a decentralized manner. Security of mobile P2P networks represents an open research topic and a main challenge regarding to their vulnerability and convenience to different security attacks, such as black hole, Sybil...etc. In this paper, we analyze the black hole attack in mobile wireless P2P networks using AODV as routing protocol. In a black hole attack, a malicious node assumes the identity of a legitimate node, by creating forged answers with a higher sequence number, and thus forces the victim node to choose it as relay. We propose a solution based on a modification of the well-known AODV routing protocol and taking into account the behavior of each node participating in the network. Performances of our proposal are evaluated by simulation.

Houda Hafi, Azeddine Bilami
An Efficient Palmprint Identification System Using Multispectral and Hyperspectral Imaging

Ensuring the security of individuals is becoming an increasingly important problem in a variety of applications. Biometrics technology that relies on the physical and/or behavior human characteristics is capable of providing the necessary security over the standard forms of identification. Palmprint recognition is a relatively new one. Almost all the current palmprint-recognition systems are mainly based on image captured under visible light. However, multispectral and hyperspectral imaging have been recently used to improve the performance of palmprint identification. In this paper, the MultiSpectral Palmprint (

MSP

) and HyperSpectral Palmprint (

HSP

) are integrated in order to construct an efficient multimodal biometric system. The observation vector is based on Principal Components Analysis (

PCA

). Subsequently, HiddenMarkov Model (

HMM

) is used for modeling this vector. The proposed scheme is tested and evaluated using 350 users. Our experimental results show the effectiveness and reliability of the proposed system, which brings high identification accuracy rate.

Abdallah Meraoumia, Salim Chitroub, Ahmed Bouridane
RDAP: Requested Data Accessibility Protocol for Vehicular Sensor Networks

Vehicular Sensor Networks (VSNs) are an emerging paradigm in vehicular networks. This new technology uses different kind of sensing devices available in vehicles, to gather information in order to provide safer, efficient and comfort for roads users. One of the VSNs challenges is how to deal with dynamic data collection. To achieve this, an efficient collaboration between sensors and vehicles is required. This paper proposes a new multi-hop data collection and dissemination scheme based on data replication on VSNs in an urban scenario. The aim of our proposal scheme is to achieve a high accessibility to a requested data while maintaining a low level of channel utilization. The simulation results show that this protocol can achieve significant performance benefits.

Mansour Louiza, Moussaoui Samira
Improvement of LEACH for Fault-Tolerance in Sensor Networks

In wireless sensor networks, failures occur due to energy depletion, environmental hazards, hardware failure, communication link errors, etc. These failures could prevent them to accomplish their tasks. Moreover, most routing protocols are designed for ideal environment such as LEACH. Hence, if nodes fail the performance of these protocols degrade. In this context, we propose two improved versions of LEACH so that it becomes a fault-tolerant protocol. In the first version, we propose a clustered architecture for LEACH in which there are two cluster-heads in each cluster: one is primary (CHp) and the other is secondary (CHs). In the second version, we propose to use the checkpoint technique. Finally, we conducted several simulations to illustrate the performance of our contribution and compared obtained results to LEACH protocol in a realistic environment.

Mohamed Lehsaini, Herve Guyennet
Locally Distributed Handover Decision Making for Seamless Connectivity in Multihomed Moving Networks

Providing seamless connectivity emerges nowadays as a key topic in internet mobility management. NEMO BS Protocol was recently proposed by IETF for managing mobility of moving networks. The protocol is based on hard handover (Brake-Before-Make). The results of NEMO handover analyzes in literature prove that, the support of mobility such as it is defined, does not make it possible to ensure seamless connectivity. To overcome limitations of NEMO BS protocol, several optimizations were proposed. The solutions suggested are based on the optimization of each component of the handover using cross-layer design, network assistance and multihoming. Unfortunately, these optimizations remain insufficient to meet the needs for the applications to critical performance such as handover delay and packet loss. This paper proposes an intelligent support to manage mobility in a locally distributed manner for a multihomed moving network. The task is distributed between mobile routers who cooperatively perform the decision making and execution of the handover based on information gathered by MIH services. NS2 Simulations experiments were investigated to validate the proposed model. The results show that the solution provides excellent performance meeting the needs of time-sensitive applications.

Zohra Slimane, Abdelhafid Abdelmalek, Mohammed Feham
A New Hybrid Authentication Protocol to Secure Data Communications in Mobile Networks

The growing area of lightweight devices, such as mobile cell phones, PDA conduct to the rapid growth of mobile networks, they are playing important role in everyone’s day. Mobile Networks offer unrestricted mobility and tender important services like M-business, M-Learning, where, such services need to keep security of data as a top concern. The root cause behind the eavesdroppers in these networks is the un-authentication. Designing authentication protocol for mobile networks is a challenging task, because, mobile device’s memory, processing power, bandwidths are limited and constrained. Cryptography is the important technique to identify the authenticity in mobile networks. The authentication schemes for this networks use symmetric or asymmetric mechanisms. In this paper, we propose a hybrid authentication protocol that is based on Elliptic Curve Cryptography which is, actually, the suitable technique for mobile devices because of its small key size and high security.

Mouchira Bensari, Azeddine Bilami
Mapping System for Merging Ontologies

In this paper we present a new mapping system for merging OWL ontologies. This work is situated in the general context of stored information heterogeneity in a decisional system such as data, metadata and knowledge, for cohabitation and reconciliation of these information by mediation. Our Mapping approach focuses on computing semantic similarity between concepts of ontologies to merge, it is based on a weighted combination of computing similarity methods, we use syntactic, lexical, structural, and semantic technics. The proposed mapping process makes use of several types of information in a manner that increases the mapping accuracy.

Messaouda Fareh, Omar Boussaid, Rachid Chalal
80 Gb/s WDM Communication System Based on Spectral Slicing of Continuum Generating by Chirped Pulse Propagation in Law Normal Dispersion Photonic Crystal Fiber

In this paper, the combination of the initial positive chirp parameter of picosecond pulse, and the high non-linearity and low normal dispersion of photonic crystal fiber is studied to optimize the spectral flatness of continuum source for WDM system application. This continuum source is defined on the C-band of optical communication window and is capable of providing all the necessary channels after spectral slicing by an optical demultiplexer. If the initial source is delivered at repetition rate of 10 GHz, the obtained continuum allows to generate more than 32 channels spaced 100 GHz all centered at 1550 nm, where each channel is a pulses train has the same repetition rate than the initial source and suitable to modulate with modulation rate of 10 GHz to achieve data transfer rate of 10 Gb/s, the type of modulation format RZ-OOK was used for channels coding. In this context, the transmission chain of WDM communication system at 8.10 Gb/s is demonstrated.

Leila Graini, Kaddour Saouchi
Fuzzy Logic Based Utility Function for Context-Aware Adaptation Planning

Context-aware applications require an adaptation phase to adapt to the user context. Utility functions or rules are most often used to make the adaptation planning or decision. In context-aware service based applications, context and Quality of Service (QoS) parameters should be compared to make adaptation decision. This comparison makes it difficult to create an analytical utility function. In this paper, we propose a fuzzy rules based utility function for adaptation planning. The large number of QoS and context parameters causes rule explosion problem. To reduce the number of rules and the processing time, a rules-utility function can be defined by a hierarchical fuzzy system. The proposed approach is validated by augmenting the MUSIC middleware with a fuzzy rules based utility function. Simulation results show the effectiveness of the proposed approach.

Mounir Beggas, Lionel Médini, Frederique Laforest, Mohamed Tayeb Laskri
Social Validation of Learning Objects in Online Communities of Practice Using Semantic and Machine Learning Techniques

The present paper introduces an original approach for the validation of learning objects (LOs) within an online Community of Practice (CoP). A social validation has been proposed based on two features: (1) the members’ assessments, which we have formalized semantically, and (2) an expertise-based learning approach, applying a machine learning technique. As a first step, we have chosen Neural Networks because of their efficiency in complex problem solving. An experimental study of the developed prototype has been conducted and preliminary tests and experimentations show that the results are significant.

Lamia Berkani, Lydia Nahla Driff, Ahmed Guessoum
Modelling Mobile Object Activities Based on Trajectory Ontology Rules Considering Spatial Relationship Rules

Several applications use devices and capture systems to record the trajectories of mobile objects. To exploit these raw trajectories, we need to enhance them with semantic information. Temporal, spatial and domain-related information are fundamental sources used to upgrade trajectories. The objective of semantic trajectories is to help users validate and acquire more knowledge about mobile objects. In particular, temporal and spatial analysis of semantic trajectories is very important for understanding mobile object behaviour. This article proposes an ontology-based modelling approach for semantic trajectories. This approach considers different and independent sources of knowledge represented by domain and spatial ontologies. The domain ontology represents mobile object activities as a set of rules. The spatial ontology represents spatial relationships as a set of rules. To realize this approach, we need to integrate the trajectory and spatial ontologies.

Rouaa Wannous, Jamal Malki, Alain Bouju, Cécile Vincent
A Model-Based on Role for Software Product-Line Evolving Variability

Modeling evolving variability has always been a challenge for software product line developers. Indeed, the most recent approaches address the problem from the architectural perspective, through languages or models. Despite the contributions of these approaches, they have not discussed the possibility of representing evolving product line variability with the current UML role concept, given that the latter was designed for a single software system. In this paper, we focus on the use of the concept of an evolving role, resulting from the adaptation of the UML role, to represent evolving variability in software product lines.

Yacine Djebar, Nouredine Guersi, Mohamed Tahar Kimour
An Approach for the Reuse of Learning Annotations Based on Ontology Techniques

This article presents an approach for the reuse of learning annotations based on ontology alignment. We couple some Knowledge Management concepts (such as knowledge and collective memories) with the specific area of e-learning. Our approach to learning annotation reuse relies essentially on a similarity between the two ontologies of context and annotation, corresponding respectively to the context and annotation subsystems of our ontology-based approach for reusing and learning through a context-aware annotations memory.

Nadia Aloui, Faïez Gargouri
Very Large Workloads Based Approach to Efficiently Partition Data Warehouses

Horizontal Partitioning (HP) is an optimization technique widely used to improve the physical design of data warehouses. However, the selection of a partitioning schema is an NP-complete problem, and many approaches have been proposed to solve it. Nonetheless, the overwhelming majority of these works do not take into account the size of the workload, which can be very large. A huge workload increases the running time of HP selection algorithms and may deteriorate the quality of the final solution. We propose, in this paper, a new approach based on classification and election to select an HP schema in the case of large-sized workloads. We conducted an experimental study on the APB-1 benchmark to test the effectiveness and scalability of our approach.

Gacem Amina, Kamel Boukhalfa
Evaluation of the Influence of Two-Level Clustering with BUB-Trees Indexing on the Optimization of Range Queries

A BUB-tree is an indexing structure based on B-trees and on a Z-order space filling curve, which transforms multidimensional data into a unique key, enabling the use of a mono-attribute index. We propose a two-level indexing structure relying on a partition of the data space into disjoint clusters. At the first level the clusters are indexed by a BUB-tree and at the second level the data of each cluster is itself indexed. Indexing the clusters provides an efficient query optimization because data filtering is performed at the cluster level, which reduces the data transferred from disk to memory. We compare the performance of our approach with single-level BUB-tree indexing on two types of queries: exact match queries and range queries, which play an important role in multidimensional databases, such as Data Warehouses or Geographic Information Systems. Our approach applies to any system supporting a partition of the attribute domains.
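To make the key construction concrete, here is a minimal sketch (our own illustration, not the authors' implementation) of how a Z-order (Morton) key interleaves the bits of the coordinates, turning a multidimensional point into the single key that a mono-attribute index such as a BUB-tree can store:

```python
def z_order_key(coords, bits=16):
    """Interleave the bits of each coordinate into one Z-order (Morton) key.

    coords: tuple of non-negative ints, each < 2**bits.
    Points that are close in space tend to receive close keys, which is
    what lets a one-dimensional index cluster multidimensional data.
    """
    key = 0
    dims = len(coords)
    for b in range(bits):                     # least-significant bit first
        for d, c in enumerate(coords):
            key |= ((c >> b) & 1) << (b * dims + d)
    return key

# The four cells of a 2x2 grid are visited in the Z pattern 0, 1, 2, 3:
corners = [z_order_key(p) for p in [(0, 0), (1, 0), (0, 1), (1, 1)]]
```

A range query on the original space then maps to a set of key intervals on the curve, which is where cluster-level filtering can prune whole subtrees before any data is read from disk.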

Samer Housseno, Ana Simonet, Michel Simonet
Identification of Terrestrial Vegetation by MSG-SEVIRI Radiometer and Follow-Up of Its Temporal Evolution

The main focus of this work is to propose a method to derive a vegetation index from cloud-free MSG2-SEVIRI (METEOSAT Second Generation 2 - Spinning Enhanced Visible and Infrared Imager) data. The proposed method uses the high-frequency multispectral satellite data (one MSG2-SEVIRI image acquisition every 15 minutes) for rapid identification of the Normalized Difference Vegetation Index (NDVI) and follow-up of its temporal evolution, in order to produce a validated monthly vegetation chart for the half terrestrial disk scanned by the SEVIRI radiometer. To validate our method, we have compared each obtained result with those of the SPOT Vegetation sensor embedded on the polar-orbiting SPOT satellite.
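For reference, the NDVI is a simple band ratio, NDVI = (NIR − RED)/(NIR + RED). The sketch below is our own hypothetical illustration, not the paper's processing chain: the mapping to SEVIRI channels and the maximum-value compositing step for the monthly chart are both assumptions on our part (maximum-value compositing is a common way to suppress residual cloud contamination).

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1].

    nir, red: reflectances, e.g. from SEVIRI's near-infrared and red
    visible channels (channel mapping assumed, not taken from the paper).
    """
    total = nir + red
    return (nir - red) / total if total else 0.0

def monthly_composite(acquisitions):
    """Maximum-value composite: per pixel, keep the largest NDVI of the month.

    acquisitions: list of time slots, each a list of per-pixel (nir, red) pairs.
    Clouds depress NDVI, so taking the per-pixel maximum over many slots
    tends to retain the cloud-free observations.
    """
    n_pixels = len(acquisitions[0])
    return [max(ndvi(*slot[i]) for slot in acquisitions) for i in range(n_pixels)]
```

With one image every 15 minutes, a month offers thousands of candidate observations per pixel, which is what makes a composite of this kind robust.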

Naima Benkahla, Abdelatif Hassini
Automatic Detection of Contours of Circular Geologic Structures on Active Remote Sensing Images Using the Gradient Vector Flow Active Contour

In this work we try to solve the problem of automatic detection of the contours of circular geologic structures of the Adrar Tikertine (Tinfelki sheet) on radar remote sensing images. The utility of these structures is irrefutable, particularly in mineral prospecting and geological cartography. To reach this goal, we use an active contour model called Gradient Vector Flow (GVF). In contrast to traditional approaches, the GVF concept performs two operations simultaneously: detection and linking of contour points. The latter has always been considered a very complicated task in traditional approaches, where it must be done separately from the detection of contour points. In fact, the strong point of GVF active contours is the definition of a new external force able to attract the deformable contour into concave regions, generally not reached by traditional active contours called “snakes”.

Djelloul Mokadem, Abdelmalek Amine
NALD: Nucleic Acids and Ligands Database

The nucleic acids and ligands database (NALD) is concerned with the identification of ligands (drugs) that bind nucleic acids (NA) and provides users with sets of specific information on the binding existing between both molecules. NALD thus annotates nucleic acids in complexes with ligands in terms of detailed binding interactions, binding motifs where binding occurs, binding properties, binding modes and classes, and links to diseases that may be associated with the ligands. These were calculated from entries of NA/ligand complexes from the Protein Data Bank (PDB) and also extracted by both automatic and manual means from scientific literature sources such as the PubMed web site (PMID) and publications (in hardcopy form). NALD provides online access to these types of information, focusing on ligands that bind nucleic acids with implications for diseases of high prevalence in Africa, and in particular in Algeria and the Southern African Development Community (SADC) region, such as HIV/AIDS, cancer, hepatitis, malaria and tuberculosis. The database offers data integration in the form of links to the PDB, DrugBank, and other resources such as the UniProt and PubMed databases. In addition, for those ligands classified as known drugs, drug information is extracted from DrugBank. NALD can be accessed freely from

http://www.bioinformaticstools.org/nald.

Abdelkrim Rachedi, Khuphukile Madida
Maximality-Based Labeled Transition Systems Normal Form

This paper proposes an algorithm (functional method) for reducing Maximality-based Labeled Transition Systems (MLTS) modulo a maximality bisimulation relation. For this purpose, we define a partial order relation on MLTS states according to a given maximality bisimulation relation. We prove that the reduced MLTS is unique; in other words, it provides a normal form.

Adel Benamira, Djamel-Eddine Saïdouni
Automatic Generation of SPL Structurally Valid Products Using Graph Transformations Approach

A Software Product Line is a set of software products that share a number of core properties but also differ in others. Differences and commonalities between products are typically described in terms of features. A software product line is usually modeled with a feature diagram, describing the set of features and specifying the constraints and relationships between them. Each product is defined as a set of features. In this area of research, a key challenge is to ensure the correctness and safety of these products. There is an increasing need for automatic tools that can support feature diagram analysis, particularly given the large number of features that modern software systems may have. In this paper, we propose an automatic framework, based on model transformations, to generate all valid products according to the feature diagram. We first introduce the basic idea behind our approach. Then, we present the graph grammar used to perform this task automatically with the AToM3 tool. Finally, we show the feasibility of our proposal by means of a running example.

Khaled Khalfaoui, Allaoua Chaoui, Cherif Foudil, Elhillali Kerkouche
Modeling On-the-Spot Learning: Storage, Landmarks Weighting Heuristic and Annotation Algorithm

Huge amounts of information are intrinsically associated with certain places on the globe, such as historical, geographical, cultural and architectural specialties. Next-generation systems require access to site-specific information about the place where the user is roaming at the moment. On-the-spot learning (OTSL) is a system that allows users to learn about the location, landmarks and regions they are walking through. In this paper, we propose an OTSL model that includes storage, retrieval and a landmark weighting heuristic. Apart from learning about individual landmarks, we propose two ways of storing the spatial learning objects. First, use the administrative hierarchy of the region to fetch the information; this can easily be done by a reverse-geocoding operation without actually storing the physical hierarchy. Second, spatial chunking creates regions based on groups of landmarks in order to define a learning region. A hybrid solution has also been considered to combine the advantages of both region-based methods. We use a weighting model to select the correct landmarks in the basic model, and we extend the core model to include other factors such as speed, direction, and side of the road. A prototype has been implemented to show the feasibility of the proposed model.

Shivendra Tiwari, Saroj Kaushik
Towards an Integrated Specification and Analysis of Functional and Temporal Properties:
Part I: Functional Aspect Verification

Maximality-based Labeled Stochastic Transition Systems (MLSTS) were presented [6, 11] as a new semantic model for characterizing the functional and performance properties of concurrent systems, under the assumption of arbitrarily distributed (i.e. non-Markovian) durations of actions. MLSTS models can be automatically generated from S-LOTOS specifications according to the (true concurrency) maximality semantics [6]. The main advantage is the pruning of the state graph without loss of information w.r.t. ST-semantic models [11]. As a first work on MLSTS, we focus in this paper on the verification of functional properties of systems, using a variant of the model-checking technique.

Mokdad Arous, Djamel-Eddine Saïdouni
Overhead Control in DP-Fair Work Conserving Real-Time Multiprocessor Scheduling

In real-time multiprocessor scheduling, optimal global scheduling algorithms are criticized for excessive overhead due to frequent scheduling points, migrations and preemptions. The DP-Fair model is an optimal scheduling approach which combines the notion of fluid scheduling (ideal fairness) with deadline partitioning. It has a lower number of scheduling points compared to PFair, the first optimal scheduling algorithm proposed for real-time multiprocessor systems. The DP-Fair model exists for both non-work-conserving and work-conserving cases. In [14,15], we used some simple heuristics which lower the overhead by reducing the number of migrations and preemptions in the non-work-conserving context. In this article, we show that the very same heuristics can be envisaged in the case of work-conserving scheduling, and we evaluate their efficiency in lowering the overhead.

Muhamad Naeem Shehzad, Anne-Marie Déplanche, Yvon Trinquet, Richard Urunuela
Initializing the Tutor Model Using K-Means Algorithm

This paper proposes an approach for the initialization and construction of the tutor model in e-learning systems. This actor has several roles and different tasks from one system to another. His main purpose is tracking and guiding students throughout their learning process. At their first interaction, the system has rather little information about its new tutors. The proposed approach provides information for each specific tutor based on the models of other, similar tutors. The problem of initializing the tutor model can be solved by assigning the tutor to a certain group of tutors. Thus, a data mining algorithm, namely k-means, is responsible for creating clusters based on the pre-entered information on tutors. Then, each new tutor is assigned to his closest cluster center. This model facilitates the assignment of tutors to learners in order to adapt the monitoring process.
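The clustering-then-assignment step described above can be sketched as follows. This is a hypothetical illustration in Python, not the paper's implementation: the tutor feature vectors, the choice of k, and the naive "first k points" initialization are all our own assumptions.

```python
import math

def nearest_center(vec, centers):
    """Index of the closest cluster center (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(centers)), key=lambda i: dist(vec, centers[i]))

def kmeans(points, k, iters=20):
    """Plain k-means with a naive 'first k points' initialization."""
    centers = list(points[:k])
    for _ in range(iters):
        # Assignment step: group each point under its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[nearest_center(p, centers)].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers

# Existing tutors, described by (hypothetical) numeric feature vectors:
tutors = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centers = kmeans(tutors, k=2)

# A new tutor is then initialized from the profile of its closest cluster:
new_tutor = (9.0, 9.0)
cluster_id = nearest_center(new_tutor, centers)
```

Once `cluster_id` is known, the new tutor's model can be seeded with the aggregated attributes of that cluster, which is the initialization idea the abstract describes.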

Safia Bendjebar, Yacine Lafifi
Towards a Generic Reconfigurable Framework for Self-adaptation of Distributed Component-Based Application

Software is moving towards evolutionary architectures that are able to easily accommodate changes and integrate new functionality. This is important in a wide range of applications, from plugin-based end user applications to critical applications with high availability requirements. This work presents a component-based framework that allows introducing adaptability into distributed component-based applications. The framework itself is reconfigurable and is based on the classical autonomic control loop MAPE-K (Monitoring, Analysis, Planning, and Execution over a Knowledge base). The paper introduces a prototype framework implementation and its empirical evaluation, which shows encouraging results.

Ouanes Aissaoui, Fadila Atil, Abdelkrim Amirat
Dynamic Bayesian Networks in Dynamic Reliability and Proposition of a Generic Method for Dynamic Reliability Estimation

In this paper, we briefly review the different works published in the field of Dynamic Bayesian Network (DBN) reliability analysis and estimation, and we propose to use DBNs as a knowledge-extraction tool for constructing models of system reliability. This is done by exploiting data (from tests or experience feedback) taken from the systems' history. The built model is used for estimating system reliability via the inference mechanism of DBNs. The proposed approach has been validated using known system examples taken from the literature.

Fatma Zohra Zahra, Saliha Khouas-Oukid, Yasmina Assoul-Semmar
Semantic Annotations and Context Reasoning to Enhance Knowledge Reuse in e-Learning

We address in this paper the need to improve knowledge reusability within online Communities of Practice of E-learning (CoPEs). Our approach is based on contextual semantic annotations. An ontology-based contextual semantic annotation model is presented. The model serves as the basis for implementing a context-aware annotation system called “CoPEAnnot”. Ontological and rule-based context reasoning contribute to improving knowledge reuse by adapting CoPEAnnot’s search results, navigation and recommendation. The proposal has been experimented within a community of learners.

Souâad Boudebza, Lamia Berkani, Faiçal Azouaou, Omar Nouali
Change Impact Study by Bayesian Networks

The study of change impact is a fundamental activity in software engineering because it can be used to plan changes, implement them, and predict or detect their effects on the system so as to reduce them. Various methods have been presented in the literature for this area of maintenance. The objective of this project is to improve the maintenance of Object-Oriented (OO) systems and to intervene more specifically in the task of analyzing and predicting change impact. Among several representation models, Bayesian Networks (BNs) constitute a particular quantitative approach that can integrate uncertainty in reasoning and offer explanations close to reality. Furthermore, with BNs it is also possible to use expert judgments to anticipate the predictions, in our case about change impact. In this paper, we propose a probabilistic approach to determine change impact in OO systems. This prediction is given in the form of a probability.

Chahira Cherif, Mustapha Kamel Abdi
Backmatter
Metadata
Title
Modeling Approaches and Algorithms for Advanced Computer Applications
edited by
Abdelmalek Amine
Ait Mohamed Otmane
Ladjel Bellatreche
Copyright Year
2013
Electronic ISBN
978-3-319-00560-7
Print ISBN
978-3-319-00559-1
DOI
https://doi.org/10.1007/978-3-319-00560-7
