
2011 | Book

Digital Information and Communication Technology and Its Applications

International Conference, DICTAP 2011, Dijon, France, June 21-23, 2011, Proceedings, Part II

Edited by: Hocine Cherifi, Jasni Mohamad Zain, Eyas El-Qawasmeh

Publisher: Springer Berlin Heidelberg

Book series: Communications in Computer and Information Science


About this Book

This two-volume set CCIS 166 and 167 constitutes the refereed proceedings of the International Conference on Digital Information and Communication Technology and its Applications, DICTAP 2011, held in Dijon, France, in June 2011. The 128 revised full papers presented in both volumes were carefully reviewed and selected from 330 submissions. The papers are organized in topical sections on Web applications; image processing; visual interfaces and user experience; network security; ad hoc networks; cloud computing; data compression; software engineering; networking and mobiles; distributed and parallel processing; social networks; ontology; algorithms; multimedia; e-learning; interactive environments and emergent technologies for e-learning; signal processing; and information and data management.

Table of Contents

Frontmatter

Software Engineering

Ontology Development as a Software Engineering Procedure

The article is based on the research project "Network Enabled Capability Knowledge Management of the Army of the Czech Republic", which deals with knowledge management in the Czech military. The theoretical basis of the project is Topic Maps. The key issue for the project is designing and creating a suitable ontology. The paper describes the software engineering procedure from the selection of an Upper Ontology, through the Core Ontology design, to the processing of the Domain Ontology. Ontology definitions are stated and their meaning is explained, and the ontology design is evaluated.

Ladislav Burita
An IPv6 Test Bed for Mobility Support in Next Generation Wireless Networks

This paper extends our work in developing network intelligence via agents for improved mobile Quality of Service in next generation wireless networks. A test bed architecture of our protocol, called AMP, has been built to facilitate and expedite location and handover management over IPv6 networks. AMP comprises a collaborative multi-agent system residing in the mobile node and access networks. The core IP network has remained untouched to simplify design and operations. AMP’s performance was evaluated against the IETF’s standard Mobile IPv6 protocol in support of roaming mobile nodes. Results from analyses indicate that AMP outperformed Mobile IP with lower signaling cost, latency and packet loss. Our work shows that with AMP, an improved IP-based mobility support may be achieved through added intelligence without increased complexity in the core network. Furthermore, results suggest that AMP may be more suited for micro-mobility and may serve as a viable and promising alternative to Mobile IPv6 in facilitating Internet-based host mobility in next generation wireless networks.

Wan H. Hassan, Ahmed M. Al-Imam
Test Management Traceability Model to Support Software Testing Documentation

Software documentation is one of the key quality factors in software development. However, many developers still put little effort into documentation and give it low priority; to them, writing documentation during project development is tedious and time consuming. As a result, documentation tends to be significantly out of date, of poor quality and difficult to access, which leads to poor software maintenance. Current studies have shown that the key to this problem is software traceability: the ability to trace all related components within a software system, including requirements, test cases, test results and other artefacts. This research reveals some issues with current software traceability and suggests a new software traceability model that focuses on software test documentation for test management. This effort leads to a new software test documentation generation process model based on software engineering standards.

Azri Azmi, Suhaimi Ibrahim
Software Maintenance Testing Approaches to Support Test Case Changes – A Review

Software maintenance testing is essential during the software testing phase. All defects found during testing must undergo a re-test process in order to eliminate the flaws, so test cases necessarily evolve and change accordingly. In this paper, several maintenance testing approaches, namely the regression test suite, heuristic based, keyword based, GUI based and model based approaches, are evaluated against a software evolution taxonomy framework. Some of the discussed approaches support changes of test cases. From the review, several results are postulated and highlighted, including the limitations of the existing approaches.

Othman Mohd Yusop, Suhaimi Ibrahim
Measuring Understandability of Aspect-Oriented Code

Software maintainability is one of the important factors that developers should be concerned with, because two-thirds of a software system's lifetime cost involves maintenance. Understandability is one of the sub-characteristics that describe software maintainability. Aspect-oriented programming (AOP) is an alternative software development paradigm that aims to increase understandability, adaptability, and reusability. It focuses on crosscutting concerns by introducing a modular unit called an "aspect". Based on the definition of understandability as "the related attributes of software components that users have to put their effort on to recognizing the logical concept of the components", this paper proposes seven metrics for evaluating the understandability of aspect-oriented code using different levels of dependence graphs. The metrics are applied to two versions of aspect-oriented programs as an illustration.

Mathupayas Thongmak, Pornsiri Muenchaisri
Aspect Oriented and Component Based Model Driven Architecture

This paper presents a methodology for the efficient development of software for various platforms using the concepts of aspects, components and model driven architecture. In the proposed methodology, the whole software system is analyzed so as to separate aspects and identify independent components. In the design phase, the modules identified in the previous phase are modeled in UML without any dependence on the platform on which they are to be built. In the third phase, platform-specific models and code artifacts are generated from these independent models. This separates all concerns from the business logic and provides a proper understanding of the different modules to be developed well before coding starts, thereby relieving developers of the burden of artifact design.

Rachit Mohan Garg, Deepak Dahiya, Ankit Tyagi, Pranav Hundoo, Raghvi Behl
Engineering the Development Process for User Interfaces: Toward Improving Usability of Mobile Applications

Mobile applications have proliferated greatly in recent years, with usage ranging from personal applications to enterprise systems. Despite this proliferation, the development of mobile applications confronts some limitations and faces particular challenges, and usability is one of the main concerns in such systems. To improve this quality attribute, we suggest incorporating into the software development process best practices from other disciplines, such as usability engineering and human-computer interaction. It is also important to incorporate studies that help identify the requirements of each mobile device capability in order to offer services in a usable manner. In this paper we present a proposal to apply user-oriented analysis and design, emphasizing specific practices such as user and task analysis. We also present a case study consisting of a mobile application for the iPhone, which allows us to validate our proposal.

Reyes Juárez-Ramírez, Guillermo Licea, Itzel Barriba, Víctor Izquierdo, Alfonso Ángeles
Lifelong Automated Testing: A Collaborative Framework for Checking Industrial Products Along Their Lifecycle

With more and more intelligence and cutting-edge applications embedded, the accurate validation of industrial products from the early stages of conception to commercial release has become crucial. We present a new approach for automating the validation of critical and complex systems, in which test sequences are first derived both from a model of the system under test and from the properties to check, and are then easily updated according to evolutions of the product or the usage requirements. The approach is being proven by validating high-class sensor prototypes for the automotive industry, which in particular illustrates how operational issues related to the different levels of abstraction between textual specifications and effective test routines can be solved.

Thomas Tamisier, Fernand Feltz
A Comparative Analysis and Simulation of Semicompetitive Neural Networks in Detecting the Functional Gastrointestinal Disorder

The stomach has a complex physiology in which physical, biological and psychological parameters all play a part, so it is difficult to understand its behavior and function both in normal conditions and in functional gastrointestinal disorders (FGD). In the area of competitive learning, a large number of models exist that have similar goals but differ considerably in the way they work. A common goal of these algorithms is to distribute a specified number of vectors in a high-dimensional space. In this paper, several methods related to competitive learning are examined by describing and simulating different data distributions. A qualitative comparison of these methods is performed on the processing of the gastric electrical activity (GEA) signal and on classifying GEA into two types, normal and FGD. The GEA signals are first decomposed into components in different sub-bands using the discrete wavelet transformation.
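The sub-band decomposition step mentioned in the abstract can be illustrated with a single level of a discrete wavelet transform. The sketch below uses the Haar wavelet for simplicity (the abstract does not say which wavelet family the authors used) and assumes an even-length signal:

```python
def haar_step(signal):
    """One level of a (normalized-by-2) Haar wavelet transform: split a
    signal into a low-frequency approximation band and a high-frequency
    detail band, halving its length. Assumes len(signal) is even."""
    approx = [(a + b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / 2 for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

# toy signal standing in for a GEA recording
sig = [4, 6, 10, 12, 8, 6, 5, 5]
a, d = haar_step(sig)
print(a)  # [5.0, 11.0, 7.0, 5.0]  -- smoothed sub-band
print(d)  # [-1.0, -1.0, 1.0, 0.0] -- detail sub-band
```

Applying `haar_step` again to the approximation band yields the next coarser sub-band, which is how multi-level decompositions are built.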

Yasaman Zandi Mehran, Mona Nafari, Nazanin Zandi Mehran, Alireza Nafari

Networking and Mobiles

Individuals Identification Using Tooth Structure

The use of automated biometrics-based personal identification systems is now widespread. Many technologies are no longer secure and have certain limitations, such as in cases when bodies are decomposed or burned. Dental enamel is one of the most mineralized tissues of an organism and is resistant to post-mortem degradation. In this article we describe dental biometrics, which utilizes dental radiographs for human identification. Dental radiographs provide information about teeth, including tooth contours, the relative positions of neighboring teeth, and the shapes of dental work (crowns, fillings, and bridges). We then propose a new system for dental biometry that consists of three main stages: segmentation, feature extraction and matching. The feature extraction stage uses a grayscale transformation to enhance the image contrast and a mixture of morphological operations to segment the dental work. The matching stage consists of edge comparison and dental work comparison.

Charbel Fares, Mireille Feghali, Emilie Mouchantaf
Modeling Interpolated Distance Error for Clutter Based Location Estimation of Wireless Nodes

This research work focuses on location estimation using interpolated distance calculations and their error for wireless nodes in different clutters/terrains. It builds on our previous findings, in which we divided clutter/terrain into thirteen different groups. As radio waves behave differently in different terrains, we recorded data points in all of them. A C# program was used with a WiFi (IEEE 802.11) prototype to record the behavioral changes in radio signals caused by atmospheric attenuation. We took readings in all clutters at a regular interval of 10 meters (from 10 m to 150 m). In the current research, we use linear interpolation to calculate distances between the regular intervals with a minimal error rate. First, we calculate the actual distance based on received radio signals at random, then compare it with our interpolated distance and calculate the error. We take a sample size of four in each terrain and divide the interpolated data into six zones for accuracy. The resulting comparison table helps to estimate the location of a wireless node in different terrains/clutters with an error as small as 0.5 meters.
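The interpolation step described above can be sketched as linear interpolation between calibration readings taken at the regular 10 m intervals. The calibration values below are illustrative placeholders, not the paper's measurements:

```python
def interpolate_distance(calibration, signal):
    """Linearly interpolate a distance for a signal reading that falls
    between two calibration measurements. `calibration` maps received
    signal strength (dBm) to distance (m)."""
    points = sorted(calibration.items())  # ascending signal strength
    for (s0, d0), (s1, d1) in zip(points, points[1:]):
        if s0 <= signal <= s1:
            t = (signal - s0) / (s1 - s0)   # position between the two readings
            return d0 + t * (d1 - d0)
    raise ValueError("signal outside calibrated range")

# signal strength decreases with distance (illustrative values)
calib = {-80: 150, -70: 100, -60: 50, -50: 10}
print(interpolate_distance(calib, -65))  # midway between 100 m and 50 m -> 75.0
```

Comparing such interpolated distances against distances computed from received signals at random points yields the per-terrain error the abstract reports.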

Muhammad Alam, Mazliham Mohd Su’ud, Patrice Boursier, Shahrulniza Musa
Mobile and Wireless Access in Video Surveillance System

Wireless communication and mobile technologies are already well established in modern surveillance systems. Mobile client applications are commonly used to provide basic access to camera video streams and other system resources. Camera site devices may connect to the system core over wireless links to overcome environmental constraints. Finally, surveillance systems themselves can be installed in portable environments such as buses or trains, which require wireless access for infrastructure and Internet services. We observe the growing popularity of mobile and wireless access solutions: the technology is evolving rapidly, providing efficient transmission technology and feature-rich, powerful mobile and wireless devices. Users expect seamless access and tools whose functionality does not depend on the access technology or access device; the same functionality is demanded from a local client application and a remote mobile browser. The aim of this paper is to explore access scenarios where mobile and wireless access methods are used to provide enhanced client functionality. We analyze these scenarios and discuss the performance factors.

Aleksandra Karimaa
A Novel Event-Driven QoS-Aware Connection Setup Management Scheme for Optical Networks

This paper proposes a QoS-Aware Optical Connection Setup Management scheme that uses the Earliest Deadline First (EDF) queueing discipline to schedule the setup of optical connections. The benefits of this EDF-based scheme are twofold: (a) it reduces the blocking probability, since connection requests blocked due to resource unavailability are queued for possible future setup opportunities; and (b) it realizes QoS differentiation by ranking the blocked requests in the EDF queue according to their connection setup time requirements, which are viewed as deadlines during connection provisioning. As such, pending less delay-tolerant requests are guaranteed to experience better QoS than those with longer setup time requirements. Extensive simulations are performed to gauge the merits of the proposed EDF-based scheme and to study its performance in the context of two network scenarios, namely the National Science Foundation Network (NSFNET) and the European Optical Network (EON).
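The EDF queueing discipline named in the abstract can be sketched with a priority queue keyed by setup deadline. This is a minimal illustration of the idea, not the authors' simulator; request names and deadlines are hypothetical:

```python
import heapq

class EDFQueue:
    """Queue blocked connection requests and always release the one
    whose setup deadline is earliest (Earliest Deadline First)."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker for equal deadlines

    def block(self, request_id, setup_deadline):
        # A blocked request is queued instead of being rejected outright.
        heapq.heappush(self._heap, (setup_deadline, self._counter, request_id))
        self._counter += 1

    def retry_next(self):
        # When resources free up, retry the most urgent pending request.
        if not self._heap:
            return None
        _, _, request_id = heapq.heappop(self._heap)
        return request_id

q = EDFQueue()
q.block("req-A", setup_deadline=50)  # delay-tolerant
q.block("req-B", setup_deadline=10)  # urgent
q.block("req-C", setup_deadline=30)
print(q.retry_next())  # -> req-B (earliest deadline served first)
```

Serving the heap in deadline order is exactly what gives less delay-tolerant requests better QoS than requests with longer setup time requirements.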

Wissam Fawaz, Abdelghani Sinno, Raghed Bardan, Maurice Khabbaz
Semantic Data Caching Strategies for Location Dependent Data in Mobile Environments

A new model for semantic caching of location dependent data in mobile environments is proposed. In the proposed model, semantic descriptions are designed to dynamically answer nearest neighbor queries (NNQ) and range queries (RQ) from the client cache. In order to answer user queries accurately, the concept of partial objects is introduced. Both NNQ and RQ results are stored as semantic regions in the cache. We apply a cache replacement policy, RAAR (Re-entry probability, Area of valid scope, Age, Rate of Access), which takes into account spatial and temporal parameters. The replacement policy originally proposed for NNQ is also applied to evict cached semantic regions. Experimental evaluations using synthetic datasets show that RAAR is effective in improving system performance when compared to the Least Recently Used (LRU) and Farthest Away Replacement (FAR) policies.
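One of the baselines the abstract compares against, LRU, can be sketched in a few lines; on overflow it evicts the cached region accessed longest ago (RAAR itself additionally weighs re-entry probability, valid-scope area, age and access rate, which this sketch does not implement):

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: on overflow, evict the entry
    (here, a cached semantic region) accessed longest ago."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._regions = OrderedDict()

    def get(self, key):
        if key not in self._regions:
            return None
        self._regions.move_to_end(key)  # mark as most recently used
        return self._regions[key]

    def put(self, key, region):
        if key in self._regions:
            self._regions.move_to_end(key)
        self._regions[key] = region
        if len(self._regions) > self.capacity:
            self._regions.popitem(last=False)  # evict least recently used
```

For location dependent data, pure recency ignores where the client is heading, which is precisely the weakness spatial-aware policies such as FAR and RAAR address.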

N. Ilayaraja, F. Mary Magdalene Jane, I Thomson, C. Vikram Narayan, R. Nadarajan, Maytham Safar

Distributed and Parallel Processing

PDMRTS: Multiprocessor Real-Time Scheduling Considering Process Distribution in Data Stream Management System

In Data Stream Management Systems (DSMSs), queries execute continuously on data as it streams into the system. Given the high volume of input data, the processing capacity offered by multiple processors is essential. Moreover, many DSMS applications, such as traffic control systems and health monitoring, have a real-time nature. To support these features, this paper aims at developing an efficient multiprocessor real-time DSMS. To achieve efficiency, a multiprocessor real-time scheduling algorithm is proposed based on the partitioning approach. In this algorithm, each received query is offered to the processors using first-fit assignment. If a query does not fit anywhere because of its utilization, it is broken into queries with smaller processing requirements based on the utilization of the processors. We conduct performance studies with real workloads, and the experimental results show that the proposed algorithm outperforms the simple partitioning algorithm.
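The first-fit-with-splitting idea described above can be sketched as a bin-packing step over query utilizations. This is an illustrative reading of the abstract, not the paper's algorithm; the splitting rule (fill remaining room processor by processor) is an assumption:

```python
def first_fit_partition(query_utils, num_procs, capacity=1.0):
    """Assign query utilizations to processors first-fit; a query that
    fits on no single processor is split into shares sized to the
    processors' remaining capacity."""
    load = [0.0] * num_procs
    placement = []  # (query_index, processor, utilization share)
    for i, u in enumerate(query_utils):
        for p in range(num_procs):
            if load[p] + u <= capacity:        # first fit
                load[p] += u
                placement.append((i, p, u))
                break
        else:                                   # fits nowhere: split it
            remaining = u
            for p in range(num_procs):
                room = capacity - load[p]
                if room <= 0 or remaining <= 0:
                    continue
                share = min(room, remaining)
                load[p] += share
                placement.append((i, p, share))
                remaining -= share
    return placement, load

# two processors; the third query (0.7) fits on neither and is split
placement, load = first_fit_partition([0.6, 0.6, 0.7], num_procs=2)
print(placement)
```

Splitting oversized queries is what lets the partitioned scheduler keep accepting work that plain first-fit partitioning would reject.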

Mehdi Alemi, Ali A. Safaei, Mostafa S. Hagjhoo, Fatemeh Abdi
PFGN: A Hybrid Multiprocessor Real-Time Scheduling Algorithm for Data Stream Management Systems

In many recent applications, data arrive as infinite, continuous, rapid and time-varying data streams, and real-time processing of queries over such streams is essential. Single-processor systems cannot provide the speed required to be real-time; parallelism over multiprocessors can be used to address this deficit. Such a system requires a multiprocessor real-time scheduling algorithm. Generally, multiprocessor real-time scheduling algorithms follow one of two approaches, partitioning or global. The partitioning approach has acceptable overhead but cannot be optimal; the global approach can be optimal, but it has considerable overheads.

In this paper, a multiprocessor real-time scheduling algorithm for a DSMS is proposed that employs a hybrid approach. It is shown to be optimal while having minimal overheads. Simulation results also illustrate that the proposed hybrid algorithm outperforms algorithms that use either the partitioning approach or the global approach.

Ali A. Safaei, Mostafa S. Haghjoo, Fatemeh Abdi
Detecting Cycles in Graphs Using Parallel Capabilities of GPU

We present an approximation algorithm for detecting the number of cycles in an undirected graph, using CUDA (Compute Unified Device Architecture) technology from NVIDIA and utilizing the massively parallel multi-threaded GPU (Graphics Processing Unit). Although cycle detection is an NP-complete problem, this work reduces the execution time and the consumption of hardware resources using only a commodity GPU, so that the algorithm makes a substantial difference compared to its serial counterpart. The idea is to convert cycle detection from an adjacency matrix/list view of the graph with DFS (Depth First Search) into a mathematical model, so that each GPU thread executes simple computation procedures and a finite number of loops in polynomial time. The algorithm is composed of two phases: the first creates a unique number of combinations of the cycle length using combinatorial mathematics; the second approximates the number of swaps (permutations) for each thread to check for a possible cycle. An experiment was conducted to compare the results of our algorithm with those of another algorithm based on Donald Johnson's backtracking algorithm.
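The serial DFS view that the abstract says the GPU formulation replaces can be sketched as a simple cycle check for an undirected graph: a back edge to an already visited vertex other than the DFS parent signals a cycle. This is only the baseline idea, not the paper's combinatorial GPU algorithm:

```python
def has_cycle(adj):
    """DFS cycle check for an undirected graph given as an adjacency
    dict {vertex: [neighbors]}. Returns True iff a cycle exists."""
    visited = set()

    def dfs(v, parent):
        visited.add(v)
        for w in adj[v]:
            if w not in visited:
                if dfs(w, v):
                    return True
            elif w != parent:   # back edge to a non-parent: cycle found
                return True
        return False

    # run from every component
    return any(dfs(v, None) for v in adj if v not in visited)

triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
print(has_cycle(triangle), has_cycle(path))  # True False
```

Detecting one cycle this way is cheap; it is *counting* all cycles that is hard, which is why the paper resorts to combinatorial approximation across GPU threads.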

Fahad Mahdi, Maytham Safar, Khaled Mahdi
Survey of MPI Implementations

High Performance Computing (HPC) provides the support to run advanced application programs efficiently. The Message Passing Interface (MPI) is a de-facto standard for providing an HPC environment in clusters connected over fast interconnects and gigabit LAN. The MPI standard itself is architecture-neutral and programming-language independent. C++ is a widely accepted choice for implementing MPI specifications, as in MPICH and LAM/MPI; other efforts implement the MPI specification in languages such as Java, Python and C#. Moreover, MPI implementations exist for different network layouts such as Grid and peer-to-peer. With so many implementations providing a wide range of functionality, programmers and users find it difficult to choose the best option for a specific problem. This paper provides an in-depth survey of available MPI implementations in different languages and for a variety of network layouts. Several assessment parameters are identified to analyze the MPI implementations along with their strengths and weaknesses.

Mehnaz Hafeez, Sajjad Asghar, Usman Ahmad Malik, Adeel ur Rehman, Naveed Riaz
Mathematical Model for Distributed Heap Memory Load Balancing

In this paper, we introduce a new method to achieve a fair distribution of data among distributed memories in a distributed system. Some processes consume a larger amount of heap space [1] in their local memory than others, so those processes may use virtual memory and may cause thrashing [2]. At the same time, the rest of the heap memories in the distributed system remain almost empty, without good utilization of the heap memory of the whole system. Therefore, a UDP-based inter-process communication system is defined [3] to make use of remote heap memories, by allocating dynamic data spaces for other processes on other machines. Indeed, the increasing use of high-bandwidth, low-latency networks makes it possible to use distributed memory as an alternative to disk.
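The load-balancing decision described above can be sketched as a simple placement policy: allocate locally while local heap usage stays below a threshold (to avoid virtual-memory thrashing), otherwise send the allocation to the remote node with the most free heap. The threshold, node names and the policy itself are illustrative assumptions, not the paper's method:

```python
def place_allocation(local_used, local_capacity, remote_free, threshold=0.9):
    """Decide where a new heap block goes.

    local_used / local_capacity: current local heap usage (same units).
    remote_free: dict mapping remote node name -> free heap space.
    Returns "local" or the name of the chosen remote node."""
    if local_used / local_capacity < threshold:
        return "local"
    # local heap nearly full: pick the remote machine with most free heap
    return max(remote_free, key=remote_free.get)

print(place_allocation(50, 100, {"nodeB": 80, "nodeC": 120}))  # local
print(place_allocation(95, 100, {"nodeB": 80, "nodeC": 120}))  # nodeC
```

In the paper's setting, the chosen remote node would then be contacted over the UDP-based communication system to reserve the space.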

Sami Serhan, Imad Salah, Heba Saadeh, Hamed Abdel-Haq

Social Networks

Effectiveness of Using Integrated Algorithm in Preserving Privacy of Social Network Sites Users

Social Network Sites (SNSs) are one of the topics that draw the most attention from researchers nowadays. Recently, many people of different backgrounds, ages and genders have joined SNSs and share a great deal of information about various things. This spreading of information can be harmful to users' privacy. In this paper, we first give some definitions of social networks and their privacy threats. Second, after reviewing related work on privacy, we explain how the information disclosed to adversaries can be minimized by using an integrated algorithm. Finally, we show the effectiveness of the proposed algorithm in protecting users' privacy: by anonymizing and diversifying disclosed information, it enhances security and achieves the paper's aim.

Sanaz Kavianpour, Zuraini Ismail, Amirhossein Mohtasebi
A Web Data Exchange System for Cooperative Work in CAE

The popular Social Network Service (SNS) induces an open system for collaboration. A data exchange system is opened, without requiring any explicit grant, as a computer-supported cooperative work tool for Computer-Aided Engineering via a Web interface; the SNS plays the security role instead. The SNS is advantageous in recruiting members suited to the cooperative work, and this open system could be a good workplace under the paradigm of convergence.

Min-hwan Ok, Hyun-seung Jung
Online Social Media in a Disaster Event: Network and Public Participation

In August 2009, Typhoon Morakot struck southern Taiwan and caused the most devastating disaster of the decade. During this disaster, netizens used Internet tools such as blogs, Twitter, and Plurk to transmit a great amount of timely information, including emergencies, rescue actions and donations. By reviewing electronic documents and interviewing eight major micro-bloggers active during that period, two major functions of social media are identified: information dissemination and resource mobilization. In sum, three major findings are drawn in this study: (1) micro-blogging applications show potential for public participation and engagement in crisis events; (2) the end users of blogs, Twitter, and Plurk successfully employed collective networking power and played a vital collaborative role in this disaster event; (3) the use of social media as a more efficient disaster backchannel communication mechanism demonstrates the possibility of collaboration between government and public participants in times of disaster.

Shu-Fen Tseng, Wei-Chu Chen, Chien-Liang Chi
Qualitative Comparison of Community Detection Algorithms

Community detection is a very active field in complex network analysis, consisting in identifying groups of nodes that are more densely interconnected with one another than with the rest of the network. Existing algorithms are usually tested and compared on real-world and artificial networks, their performance being assessed through some partition similarity measure. However, the realism of artificial networks can be questioned, and the appropriateness of those measures is not obvious. In this study, we take advantage of recent advances in the characterization of community structures to tackle these questions. We first generate networks with the most realistic model available to date; their analysis reveals that they display only some of the properties observed in real-world community structures. We then apply five community detection algorithms to these networks and find that performance assessed quantitatively does not necessarily agree with a qualitative analysis of the identified communities. It therefore seems that both approaches should be applied to perform a relevant comparison of the algorithms.
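One widely used partition similarity measure of the kind the abstract refers to is normalized mutual information (NMI), which scores how well a detected partition matches a reference one regardless of how communities are labeled. A minimal sketch (assuming both partitions are given as per-node label lists):

```python
from collections import Counter
from math import log

def nmi(part_a, part_b):
    """Normalized mutual information between two partitions of the
    same node set, each given as a list of community labels per node.
    Returns 1.0 for identical partitions (up to relabeling), ~0 for
    independent ones."""
    n = len(part_a)
    ca, cb = Counter(part_a), Counter(part_b)
    joint = Counter(zip(part_a, part_b))
    # mutual information between the two label assignments
    mi = sum(nij / n * log(n * nij / (ca[i] * cb[j]))
             for (i, j), nij in joint.items())
    # entropies of each partition
    ha = -sum(c / n * log(c / n) for c in ca.values())
    hb = -sum(c / n * log(c / n) for c in cb.values())
    if ha == 0 and hb == 0:
        return 1.0  # both partitions trivial and identical
    return 2 * mi / (ha + hb)

ground_truth = [0, 0, 0, 1, 1, 1]
detected     = [1, 1, 1, 0, 0, 0]  # same communities, relabeled
print(nmi(ground_truth, detected))  # close to 1.0
```

The study's point is that a high score on such a measure does not guarantee the detected communities look qualitatively right, which is why both evaluations are advocated.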

Günce Keziban Orman, Vincent Labatut, Hocine Cherifi

Ontology

Proof-of-Concept Design of an Ontology-Based Computing Curricula Management System

The management of curricula development activities is time-consuming and labor-intensive, and the accelerated pace of technological advances in Computing exacerbates the complexity of such activities. A Computing Curricula Management System (CCMS) is proposed as a Proof-of-Concept (POC) design that utilizes an ontology as a knowledge source interacting with a Curriculum Wiki facility through ontological agents. The POC design exploits agent-interaction models that were analyzed through the Conceptualization and Analysis phases of the MASCommonKADS agent-oriented methodology; the POC design of the CCMS is then developed in the Design phase. The paper concludes with a discussion of the resulting contribution, limitations and future work.

Adelina Tang, Amanullah Abdur Rahman
Automatic Approaches to Ontology Engineering

The paper presents a general overview of ontology engineering, as represented by the processes of ontology development. This area deals with activities in the "management" of ontologies, such as ontology matching, ontology alignment, ontology integration and ontology merging. The overview is based on the two main books on the subject and attempts to cover the topic through conference contributions and articles available on the Internet. The paper does not provide guidance on how to build ontologies, but rather gives the reader a broad overview of the area without the need to study additional resources in depth.

Petr Do
The Heritage Trust Model

Trust management systems have been proposed to address security in open and distributed environments (the Web, the Semantic Web, peer-to-peer networks, etc.).

A user usually spends a lot of time building her reputation and creating a network of trusted/distrusted users. Without the possibility of seamless transfer from one trust management system to another, a user is forced to build a reputation and a network of trusted users anew. This problem will become even more severe as many current systems that use trust as a key factor in the ability to communicate within a group of users become outdated, and some of them even shut down.

The paper presents a specification of the seamless transfer problem and introduces a solution: the Heritage Trust Model, based on dynamic graphs and ontologies.

Roman Špánek, Pavel Tyl
Towards Semantically Filtering Web Services Repository

One of the major challenges of the Service Oriented Computing (SoC) paradigm is Web service discovery. The mechanisms used to match Web services can still be improved, and considering the semantics of Web service descriptions is a must for automating the discovery process. We have proposed a semantic-based Web service discovery algorithm that consists of two stages: a registry filtering stage and a description matching stage. The aim of the first stage is to pick the advertisements that are relevant to the client request and to discard, at an early stage and before checking the details of the descriptions, the advertisements that cannot satisfy the request. The aim of the second stage is to check whether the relevant advertisements resulting from the registry filtering stage are actually able to satisfy the client query. This paper concerns the first stage. More precisely, we propose a semantic-based Web service registry filtering mechanism that narrows the number of Web service descriptions to be checked in detail down to only the advertisements relevant to the client request. We also propose an extension to the Web Ontology Language for Services (OWL-S); with this extension, filtering the Web service registry based on the semantics of the available descriptions becomes possible.
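The registry filtering stage described above can be sketched as a coarse concept-level pre-filter: keep only the advertisements whose advertised concepts can cover every concept in the client query, deferring detailed description matching. The toy ontology, service names and the `subsumes` check are illustrative assumptions, not the paper's mechanism:

```python
def filter_registry(registry, query_concepts, subsumes):
    """First-stage filtering: keep only advertisements whose concepts
    can satisfy every concept in the client query.

    registry: dict mapping service name -> set of advertised concepts.
    subsumes(a, b): assumed ontology check that concept a equals or
    subsumes concept b."""
    relevant = []
    for service, advertised in registry.items():
        if all(any(subsumes(a, q) for a in advertised) for q in query_concepts):
            relevant.append(service)
    return relevant

# toy ontology: "Vehicle" subsumes "Car"
hierarchy = {"Car": {"Car", "Vehicle"}}  # concept -> its superconcepts (incl. itself)
subsumes = lambda a, b: a == b or a in hierarchy.get(b, set())

registry = {"svcA": {"Vehicle", "Price"}, "svcB": {"Hotel"}}
print(filter_registry(registry, {"Car", "Price"}, subsumes))  # ['svcA']
```

Only the survivors of this cheap pass would then be handed to the expensive description matching stage.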

Thair Khdour
Ontology for Home Energy Management Domain

This paper focuses on an approach to building an ontology for the home energy management domain that is compatible with the Suggested Upper Merged Ontology (SUMO). Our starting point was to study the general classifications of home electrical appliances provided by various vendors and manufacturers. Vendors and manufacturers use their own arbitrary classifications instead of a single standard classification system for home appliances, and there is no uniformity of appliance specifications among them. Although appliance vendors provide energy efficiency ratings for home appliances, they do not provide a detailed specification of the attributes that contribute to overall energy consumption. In the absence of these attributes and of a standard ontology, it is difficult for reasoning tools to provide a comprehensive comparison of home appliances based on their energy consumption performance, or a comparative analysis of the energy consumption of these appliances.

Nazaraf Shah, Kuo-Ming Chao, Tomasz Zlamaniec, Adriana Matei
Adding Semantic Extension to Wikis for Enhancing Cultural Heritage Applications

Wikis are appropriate systems for community-authored content, and in the past few years they have shown themselves to be particularly suitable for collaborative work in cultural heritage. In this paper, we highlight how wikis can be relevant solutions for building cooperative applications in domains characterized by a rapid evolution of knowledge. We point out the capabilities of a semantic extension to provide better quality of content, to improve searching, to support complex queries and to serve different types of users. We describe the CARE project and explain its conceptual modeling approach. We detail the architecture of WikiBridge, a semantic wiki that allows simple, n-ary and recursive annotations as well as consistency checking. A specific section is dedicated to the ontology design, which is the compulsory foundational knowledge for the application.

Éric Leclercq, Marinette Savonnet
K-CRIO: An Ontology for Organizations Involved in Product Design

The aim of this paper is to build an ontology of organizations. This ontology will be used to analyze, reason about and understand organizations. The targeted organizations are those composed of individuals involved in the design of a product who, to that end, follow a design process. The ontology will be used to support knowledge management within the described organizations. More specifically, it will provide means for reasoning, annotating resources, monitoring design processes, enabling searches and proactively proposing tips and relevant content. The presented approach is based upon the use of an existing organizational metamodel, namely CRIO, already used for the description of Multi-Agent System (MAS) organizations. In our case, the concepts of this metamodel are used to model human activities.

Yishuai Lin, Vincent Hilaire, Nicolas Gaud, Abderrafiaa Koukam
Improving Web Query Processing with Integrating Intelligent Algorithm and XML for Heterogeneous Database Access

A good methodology is needed to improve web query processing in web-based applications, and new and robust methods must be identified. This research examines the issue of web query processing for heterogeneous database access and proposes a method to overcome it. The issue is the time performance of the search and retrieval process in heterogeneous database access. To address this problem, an intelligent algorithm was designed and integrated with XML, a powerful tool for data exchange. Experiments were conducted comparing XML alone with the integration of XML and the proposed intelligent algorithm. The results indicate that the integration of XML with the proposed intelligent algorithm outperforms XML alone in terms of time performance.

Mohd Kamir Yusof, Ahmad Faisal Amri Abidin, Sufian Mat Deris, Surayati Usop
Semantics and Knowledge Management to Improve Construction of Customer Railway Systems

Customer railway system construction during Tendering is challenged by numerous factors: the heterogeneity of customer backgrounds (diversity of culture, context, preferences, etc.), the complexity of the system to construct, and the coordination of several teams with expertise in diverse areas of system development, which need to share a common understanding and a coherent view of the system to deliver. In this paper we show how semantics can contribute to knowledge management and thus to the improvement of the Tendering process. A first study carried out in the company led us to the conclusion that ontologies are a suitable means to overcome the shortcomings that appear during this process, and we explain through a model and a first implementation how this can be achieved.

Diana Penciuc, Marie-Hélène Abel, Didier Van Den Abeele, Adeline Leblanc

Algorithms

Blending Planes and Canal Surfaces Using Dupin Cyclides

We develop two new algorithms for G1-blending between planes and canal surfaces using Dupin cyclides. This generalizes existing algorithms that blend surfaces of revolution and planes using a so-called construction plane, an approach that required spatial constraints. Our work consists in building three spheres to determine the Dupin cyclide of the blending. The first algorithm is based on one of the definitions of Dupin cyclides, taking into account three spheres of the same family enveloping the cyclide. The second one uses only geometric properties of Dupin cyclides. The blending is fixed by a circle of curvature on the canal surface, from which we can determine a cyclide symmetry plane used for the blend. Each algorithm uses one of the two symmetry planes of a cyclide, constructed either with the centres of the three spheres or orthogonal to the first plane and to the plane containing the circle of curvature of the canal surface.

Lucie Druoton, Lionel Garnier, Remi Langevin, Herve Marcellier, Remy Besnard

Multimedia

Avoiding Zigzag Quality Switching in Real Content Adaptive Video Streaming

A large number of videos, encoded in several bitrates, are nowadays available on the Internet. A high bitrate needs a high and stable bandwidth, so a lower bitrate encoding is usually chosen and transferred, which leads to lower quality too. A solution is to adapt the current bitrate dynamically so that it always matches the network bandwidth, as in a classical congestion control context. When the bitrate is at the upper limit of the bandwidth, the adaptation switches constantly between a lower and a higher bitrate, causing an unpleasant zigzag in quality on the user's machine. This paper presents a solution to avoid the zigzag. It uses an EWMA (Exponentially Weighted Moving Average) value for each bitrate, which reflects its history. The evaluation of the algorithm shows that the loss rate is much smaller and the bitrate more stable, so the received video quality is better.

Wassim Ramadan, Eugen Dedu, Julien Bourgeois
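The per-bitrate EWMA idea described in the abstract above can be sketched in a few lines. The following Python sketch is our own illustration, not the authors' algorithm: the class name, the smoothing factor and the 0.5 acceptance threshold are assumptions.

```python
# Hedged sketch: per-bitrate EWMA scoring to damp quality switching.
# The 0.3 smoothing factor and 0.5 threshold are illustrative assumptions.
class BitrateSelector:
    def __init__(self, bitrates, alpha=0.3):
        self.bitrates = sorted(bitrates)          # available encodings (kbit/s)
        self.alpha = alpha                        # EWMA smoothing factor
        self.score = {b: 0.0 for b in bitrates}   # per-bitrate success history

    def update(self, bitrate, fitted):
        # fitted = 1.0 when the measured bandwidth sustained this bitrate
        s = self.score[bitrate]
        self.score[bitrate] = self.alpha * fitted + (1 - self.alpha) * s

    def choose(self, bandwidth):
        # pick the highest bitrate that fits the bandwidth AND whose history
        # is good enough, so one lucky sample cannot trigger a switch
        candidates = [b for b in self.bitrates
                      if b <= bandwidth and self.score[b] >= 0.5]
        return max(candidates) if candidates else self.bitrates[0]
```

Because each bitrate must first accumulate a good EWMA history before being selected, a single favorable bandwidth sample no longer causes an immediate upward switch, which is what suppresses the zigzag.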
A Novel Attention-Based Keyframe Detection Method

This paper presents a novel key-frame detection method that combines visual saliency-based attention features with contextual game status information for sports videos. First, it describes the approach for extracting the object-oriented visual attention map and illustrates the algorithm for determining the contextual excitement curve. Second, it presents the fusion methodology for visual and contextual attention analysis based on the characteristics of human excitement. The number of key-frames is determined by applying the contextual attention score, while the key-frame selection depends on the biased combination of all the visual attention scores. Finally, experimental results on baseball game videos demonstrate the efficiency and robustness of our system.

Huang-Chia Shih

E-Learning

Evaluating the Effectiveness of Using the Internet for Knowledge Acquisition and Students’ Knowledge Retention

The Internet is the premier means of communication and information dissemination in the new millennium. It is important to point out that searching for information on the Internet does not necessarily mean that some kind of learning process is going on. While the Internet has dramatically changed the dissemination and sharing of information, there is no general consensus among researchers on the impact of the Internet on students' and learners' knowledge retention. Using the Internet as a source of information, a student's knowledge acquisition level and knowledge retention may be affected in both the short term and the long term. In light of that, this study was conducted to measure students' retained knowledge after acquiring information from the Internet, and to find out how much of that knowledge was really retained at a later time.

Zakaria Saleh, Alaa Abu Baker, Ahmad Mashhour
Computer-Based Assessment of Implicit Attitudes

Computers are now used extensively in psychological assessment. The most popular measure of implicit attitudes is the Implicit Association Test (IAT) [2]. According to Aberson and Beeney [3], although the IAT has been applied broadly, little is known about the psychometric properties of the measure. Four experiments were conducted with four different samples to investigate the temporal reliability of the IAT. Students' opinion (trust) about the validity and reliability of the test was also measured. The results showed that while there are numerous reports of moderate validity of the test, its reliability as measured in this study, particularly for first-time users, is relatively low. Familiarity with similar tests, however, improves its reliability.

Ali Reza Rezaei
First Electronic Examination for Mathematics and Sciences Held in Poland - Exercises and Evaluation System

The development of new information technology poses a question about the future shape of education. One may now ask: will traditional course books, the traditional school, the traditional teacher and the traditional examination process not change rapidly? This paper first describes an electronic mock exam in mathematics, and then an examination in the sciences carried out over the Internet in Poland. The aim of the paper is to present the typology of the tasks and the electronic evaluation system. The e-examination was conducted for the first time in Europe on such a large scale in real time (320 Polish schools from all over Poland took part, involving 3,000 students). The trial conducted is the first step towards changing external examinations in Poland.

Jacek Stańdo

Interactive Environments and Emergent Technologies for eLearning

How Can ICT Effectively Support Educational Processes? Mathematical Emergency E-Services - Case Study

Rapid development of Information and Communication Technologies poses many questions about the future of education. Experts have been analyzing the role of traditional course books, examinations, schools and teachers in educational processes, and are wondering about their roles on the threshold of these significant changes in education. Nowadays, the way young people share the knowledge they have been provided with has changed considerably. This has already been confirmed by the results and findings of the Mathematical Emergency E-Services project, which has been running for over a year at the Technical University of Lodz. The article presents, discusses and analyzes real-time e-learning consultations between mathematics teachers and students carried out within the project's activities.

Jacek Stańdo, Krzysztof Kisiel
A Feasibility Study of Learning Assessment Using Student’s Notes in An On-Line Learning Environment

The effectiveness and features of note-taking are examined in an on-line learning environment. The relationships between the assessment of the contents of participants' notes taken during class and the characteristics of the students were studied. Some factors concerning personality and learning experience are significant and positively affect the grades given to the notes. Features of the notes taken were extracted using a text analysis technique, and these features were compared with the grades given. Good note-takers consistently recorded the terms independently of the number of terms presented during the class. Conceptual mapping of the contents of the notes was conducted, suggesting that the deviation in the features of notes can be explained by the number of terms in a lesson.

Minoru Nakayama, Kouichi Mutsuura, Hiroh Yamamoto
IBS: Intrusion Block System a General Security Module for elearning Systems

The design and implementation of a security plug-in for Learning Management Systems is presented. The plug-in (called IBS) can help protect a Learning Management System from a varied selection of threats carried out by malicious users via the Internet. Nowadays it is quite likely that the installer and/or administrator of a system is an interested teacher rather than a skilled technician. This is not a problem from the point of view of user friendliness and ease of use of the system's functionalities; those are actually features that motivate the widespread adoption of both proprietary and open source web-based learning systems. Yet, as with any other web application, learning systems are subject to the continuous discovery and publication of security weaknesses buried in their code. Accordingly, such systems present their administrators with an apparent need for continuous system upgrades and patch installation, which may turn out to become quite a burden for teachers. The integration of IBS in a system eases the above-mentioned needs and can help teachers focus their work more on pedagogical issues than on technical ones. We report on the present integration of IBS in two well-established open source Learning Management Systems (Moodle and Docebo), allowing for a reasonably standing protection from the threats comprised in five well-known classes of “attacks”. Besides describing the plug-in's definition and functionalities, we focus in particular on the specification of a whole protocol devised to guide the adaptation and installation of IBS in any other PHP-based learning system, which makes the applicability of the plug-in sufficiently wide.

Alessio Conti, Andrea Sterbini, Marco Temperini
Interpretation of Questionnaire Survey Results in Comparison with Usage Analysis in E-Learning System for Healthcare

Organization of the distance form of study is not an easy task. It can be quite complicated, especially regarding communication, or rather the instantaneousness and promptness of giving feedback to the students. One useful method of increasing the effectiveness of the learning process and the quality of students' results is integrating on-line content with a learning management system. In this paper we describe the applicability of different types of resources and activity modules in e-learning courses and the worthiness of their usage. The presented ideas are supported by the outcomes of a questionnaire survey conducted within the e-learning study, as well as a usage analysis of the e-course “Role of a nurse in community care”, which was one of the outcomes of the international project E-learning in Community Care. We also compare the outcomes of the data analyses mentioned above and try to find the reasons for the differences between them.

Martin Cápay, Zoltán Balogh, Mária Boledovičová, Miroslava Mesárošová
Dynamic Calculation of Concept Difficulty Based on Choquet Fuzzy Integral and the Learner Model

Adaptation and personalization of information in e-learning systems play a significant role in supporting the learner during the learning process. Most personalized systems consider learner preferences, interests and browsing patterns for providing adaptive presentation and adaptive navigation support. However, these systems usually neglect the dependence between learning concept difficulty and the learner model. Generally, a learning concept has varied difficulty for learners with different levels of knowledge. Hence, to provide a more personalized and efficient learning path whose concept difficulties closely match the learner's knowledge, this paper presents a novel method for dynamically calculating the difficulty level of the concepts of a given knowledge domain, based on the Choquet fuzzy integral and the learner's knowledge and behavioral model. Finally, a numerical analysis is provided to illustrate the proposed method.

Ahmad Kardan, Roya Hosseini
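The discrete Choquet integral at the heart of the method above can be computed in a few lines. The sketch below is the generic textbook formulation, not the paper's implementation; the criterion names and measure values in the usage note are illustrative.

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of `values` (criterion -> score in [0, 1])
    with respect to fuzzy measure `mu` (frozenset of criteria -> weight)."""
    items = sorted(values.items(), key=lambda kv: kv[1])  # ascending scores
    total, prev = 0.0, 0.0
    for i, (name, score) in enumerate(items):
        # coalition of criteria whose score is at least the current one
        coalition = frozenset(n for n, _ in items[i:])
        total += (score - prev) * mu[coalition]
        prev = score
    return total
```

For an additive measure the Choquet integral reduces to a weighted mean: with scores {a: 0.2, b: 0.5} and weights 0.4 and 0.6, it yields 0.4·0.2 + 0.6·0.5 = 0.38; non-additive measures let interactions between criteria change the result.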

Signal Processing

Simple Block-Diagonalization Based Suboptimal Method for MIMO Multiuser Downlink Transmission

For MIMO (multiple-input multiple-output) multiuser downlink processing, the block diagonalization (BD) technique is widely used because of its simplicity and high efficiency in eliminating inter-user interference. In this study, we present a very simple weight design approach based on BD, in which the order of the design steps is swapped: receiver weights are calculated first, and then transmission weights are derived using a zero-forcing (ZF) procedure. Computer simulations show that the proposed approach achieves better performance than conventional BD under certain conditions. In addition, the condition on the degrees of freedom is relaxed, so the approach can be utilized for transmitters with a small number of antennas, to which BD cannot be applied.

Tetsuki Taniguchi, Yoshio Karasawa, Nobuo Nakajima
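As a toy illustration of the zero-forcing step mentioned above: for a 2×2 channel with single-antenna users (our simplifying assumption, not the paper's multi-antenna setting), the transmit weights invert the channel so that the effective channel becomes diagonal and inter-user interference is nulled.

```python
# Hedged sketch of zero-forcing precoding for a 2x2 real channel matrix.
def zf_precoder(h):
    """Return W = H^{-1} for a 2x2 channel H, so that H @ W is the
    identity and each user receives only its own stream."""
    (a, b), (c, d) = h
    det = a * d - b * c
    if abs(det) < 1e-12:
        raise ValueError("channel matrix is singular; ZF not applicable")
    return [[ d / det, -b / det],
            [-c / det,  a / det]]

def matmul2(x, y):
    """2x2 matrix product, used to verify the effective channel."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]
```

In the paper's approach, a step like this is applied after the receiver weights have been fixed, which is what relaxes the degrees-of-freedom condition relative to plain BD.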
Novel Cooperation Strategies for Free-Space Optical Communication Systems in the Absence and Presence of Feedback

In this paper, we investigate cooperative diversity as a fading mitigation technique for Free-Space Optical (FSO) communications with intensity modulation and direct detection (IM/DD). In particular, we propose two novel one-relay cooperation strategies. The first scheme does not require any feedback and is based on selective relaying, where the relay forwards only the symbols that it detected with a certain level of certainty, which we quantify in both the absence and the presence of background radiation. This technique results in additional performance enhancements and energy savings compared to existing FSO cooperative techniques. The second scheme can be applied in situations where a feedback link is available. This scheme, which requires only one bit of feedback, results in significant performance gains over the entire range of the received signal level.

Chadi Abou-Rjeily, Serj Haddad
Hybrid HMM/ANN System Using Fuzzy Clustering for Speech and Medical Pattern Recognition

The main goal of this paper is to compare the performance achievable by three different approaches, analyzing their application potential on real-world problems. We compare the performance obtained with (1) discrete Hidden Markov Models (HMM); (2) a hybrid HMM/MLP system using a Multi-Layer Perceptron (MLP) to estimate the HMM emission probabilities and the K-means algorithm for pattern clustering; and (3) a hybrid HMM/MLP system using the Fuzzy C-Means (FCM) algorithm for fuzzy pattern clustering. Experimental results on an Arabic speech vocabulary and biomedical signals show significant decreases in error rates for the fuzzy-clustering-based hybrid HMM/MLP system (using the FCM algorithm) in comparison to the baseline system.

Lilia Lazli, Abdennasser Chebira, Mohamed Tayeb Laskri, Kurosh Madani
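The Fuzzy C-Means clustering used in the third system above alternates between a membership update and a center update. The following 1-D, two-cluster sketch is a generic FCM illustration, not the paper's implementation; the initialization and parameter values are our own assumptions.

```python
def fuzzy_c_means(points, m=2.0, iters=50):
    """Toy 1-D Fuzzy C-Means with two clusters; m is the fuzzifier.
    Returns (centers, memberships); each membership row sums to 1."""
    centers = [min(points), max(points)]          # deterministic init
    u = [[0.0, 0.0] for _ in points]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(points):
            d = [abs(x - v) or 1e-12 for v in centers]
            for j in range(2):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2 / (m - 1))
                                    for k in range(2))
        # center update: mean of the points weighted by u_ij^m
        for j in range(2):
            w = [u[i][j] ** m for i in range(len(points))]
            centers[j] = sum(wi * x for wi, x in zip(w, points)) / sum(w)
    return centers, u
```

Unlike K-means, each pattern belongs to every cluster with a graded membership, which is what the hybrid HMM/MLP system exploits when estimating emission probabilities.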
Mobile-Embedded Smart Guide for the Blind

This paper presents a new device to help the blind find their way around obstacles. Some people use different aids for their visual impairment, such as glasses, Braille, seeing eye dogs, canes, and adaptive computer technology. Our device, called “Smart Guide” (SG), helps the blind find their way by using a mobile phone. SG integrates different technologies packed in a small compartment consisting of a Bluetooth antenna, a sensor, a central processing unit, a memory and speakers. The sensor sends a signal to the mobile application when it detects any solid object. The mobile then notifies the blind user with a warning alarm to change direction and get on the right way. The warning alarm works in one of three modes: voice alarm, beeping alert, or vibration. The goal of SG is to convert signals from sensed objects into audio output to help visually impaired people move without colliding with surrounding objects or people. SG is light to carry, easy to use, affordable and efficient. Interviews and surveys with doctors and a group of blind individuals at an organization for the disabled in Kuwait yielded very positive feedback on the success of the device. The device is patented under Kuwaiti law.

Danyia AbdulRasool, Susan Sabra

Information and Data Management

MMET: A Migration Metadata Extraction Tool for Long-Term Preservation Systems

Migration is the most frequently used preservation approach in long-term preservation systems. To design a migration plan, custodians need to know the technical infrastructure of the preservation system, the characteristics and provenance of the digital materials, the restrictions on preservation activities, and the policies on retention rules. However, current tools cannot provide all of this information; they can only output information about formats, and characteristics for a few given formats. Hence, in this paper, we design a migration metadata extraction tool. This tool uses the stored metadata to retrieve the above information for the custodians. The test results show that, due to limitations of the stored metadata, our solution still cannot obtain sufficient information. However, it outputs more migration metadata and has better performance than current tools.

Feng Luan, Mads Nygård
RDF2SPIN: Mapping Semantic Graphs to SPIN Model Checker

The most frequently used language to represent semantic graphs is RDF (the W3C standard for meta-modeling). The construction of semantic graphs is a source of numerous errors of interpretation, and the processing of large semantic graphs limits the use of semantics in current information systems. The work presented in this paper is part of new research at the border between two areas: the semantic web and model checking. To this end, we developed a tool, RDF2SPIN, which converts RDF graphs into the SPIN language. This conversion aims at checking semantic graphs with the model checker SPIN in order to verify the consistency of the data. To illustrate our proposal we used RDF graphs derived from IFC files, which represent digital 3D building models. Our final goal is to check the consistency of IFC files created through the cooperation of heterogeneous information sources.

Mahdi Gueffaz, Sylvain Rampacek, Christophe Nicolle
C-Lash: A Cache System for Optimizing NAND Flash Memory Performance and Lifetime

NAND flash memories are the most important storage media in mobile computing and tend to be less and less confined to this area. Nevertheless, the technology is not mature enough to allow widespread use, owing to the poor performance of write operations caused by its internal intricacies. A major constraint of the technology is the limited number of erase operations, which bounds its lifetime. To cope with this issue, state-of-the-art solutions try to level the wear of the memory to increase its lifetime. These policies, integrated into the Flash Translation Layer (FTL), contribute to decreasing write operation performance. In this paper, we propose to improve performance and reduce the number of erasures by absorbing them through a dual cache system which replaces the traditional FTL wear-leveling and garbage collection services. C-lash enhances state-of-the-art FTL performance by more than an order of magnitude for some real and synthetic workloads.

Jalil Boukhobza, Pierre Olivier
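The general idea of absorbing repeated writes in a cache before they reach flash can be illustrated with a toy write-back cache. This sketch is our own simplification for illustration only; it is not the paper's dual-cache C-lash structure, and all names are assumptions.

```python
from collections import OrderedDict

class WriteBackCache:
    """Toy write-back cache: repeated writes to a hot page are absorbed
    in RAM, so only evictions cause actual flash writes (and erasures)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()       # page id -> data, in LRU order
        self.flushes = 0                 # writes that actually hit flash

    def write(self, page, data):
        if page in self.pages:
            self.pages.move_to_end(page)     # absorbed: no flash write
        elif len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)   # evict LRU page to flash
            self.flushes += 1
        self.pages[page] = data
```

The fewer flushes reach the flash, the fewer block erasures are triggered, which is the lever both for performance and for lifetime.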
Resource Discovery for Supporting Ubiquitous Collaborative Work

The majority of the solutions proposed in the domain of service discovery protocols mainly focus on single-user applications. Consequently, these applications are unaware of third-party interventions, supposing that nobody interferes or observes. This paper describes a system for discovering sharable resources in ubiquitous collaborative environments. The proposed system is based on the publish/subscribe model, which makes it possible for collaborators to: 1) publish resources to share them with their colleagues, and 2) subscribe to get information about resources they are interested in. Dynamic information is gathered from different sources, such as users' applications, a resource locator and a human face recognizer, in order to find the best available resource for a specific request. Resource availability is determined according to several parameters: technical characteristics, roles, usage restrictions and dependencies on other resources in terms of ownership, presence, location and even availability.

Kimberly García, Sonia Mendoza, Dominique Decouchant, José Rodríguez, Alfredo Piero Mateos Papis
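The publish/subscribe model underlying the system above can be sketched with a minimal broker. Class and method names below are illustrative assumptions, not the paper's API.

```python
from collections import defaultdict

class ResourceBroker:
    """Minimal publish/subscribe broker: collaborators subscribe to a
    resource type and are notified whenever a matching resource appears."""
    def __init__(self):
        self.subscribers = defaultdict(list)   # resource type -> callbacks

    def subscribe(self, rtype, callback):
        self.subscribers[rtype].append(callback)

    def publish(self, rtype, resource):
        # notify every collaborator interested in this resource type
        for cb in self.subscribers[rtype]:
            cb(resource)
```

In the full system, the published events would carry the dynamic availability parameters (roles, location, ownership, and so on) rather than a plain string.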
Comparison between Data Mining Algorithms Implementation

Data Mining (DM) is the science of extracting useful information from huge amounts of data. Three data mining methods are tested: Naïve Bayes, Nearest Neighbor and Decision Tree. The implementation of the three algorithms showed that the Naïve Bayes algorithm is effective when the data attributes are categorical, and it can be used successfully in machine learning. Nearest Neighbor is most suitable when the data attributes are continuous or categorical. The last algorithm tested, the Decision Tree, is a simple predictive algorithm implemented using simple rule methods for data classification. Each of the three algorithms can be implemented successfully and efficiently after studying the nature of the database with respect to size, attributes, continuity and repetition. The success of a data mining implementation depends on the completeness of the database, represented by the data warehouse, which must be organized around the essential characteristics of a data warehouse.

Yas A. Alsultanny
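The observation above that Naïve Bayes suits categorical attributes can be made concrete with a minimal classifier. This is a generic categorical Naïve Bayes with Laplace smoothing, written as an illustration; the toy data and class name are our own, not from the paper.

```python
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal categorical Naive Bayes with add-one (Laplace) smoothing."""
    def fit(self, rows, labels):
        self.labels = Counter(labels)
        self.counts = defaultdict(Counter)   # (feature index, label) -> values
        for row, y in zip(rows, labels):
            for i, v in enumerate(row):
                self.counts[(i, y)][v] += 1
        return self

    def predict(self, row):
        best, best_p = None, -1.0
        total = sum(self.labels.values())
        for y, ny in self.labels.items():
            p = ny / total                   # prior P(y)
            for i, v in enumerate(row):      # smoothed likelihoods P(v | y)
                c = self.counts[(i, y)]
                p *= (c[v] + 1) / (ny + len(c) + 1)
            if p > best_p:
                best, best_p = y, p
        return best
```

Because every likelihood is just a frequency count over attribute values, the method needs no distance metric or numeric encoding, which is why it fits categorical data so naturally.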
Role of ICT in Reduction of Poverty in Developing Countries: Botswana as an Evidence in SADC Region

Information and communication technologies (ICTs) have penetrated even some of the poorest developing countries. Examples include the sudden increase in mobile phone use, the advent of the Internet with its globalised social networking sites, ICT services, and the saturation of computerised content. Scholars and observers worldwide have sought to debate the roles of ICTs. Previous research focused on how ICTs impact economic, social and cultural aspects of life. There is limited research on ICT services and content that focus on the poor, particularly those that encourage entrepreneurship as a means to achieve poverty reduction in developing countries. This paper uses secondary data and document analysis from Botswana, a member country of the Southern African Development Community (SADC), to find out how ICTs can be used for poverty reduction in developing countries.

Tiroyamodimo M. Mogotlhwane, Mohammad Talib, Malebogo Mokwena
Personnel Selection for Manned Diving Operations

The selection of personnel for diving operations is a challenging task requiring a detailed assessment of the candidates. In this study, we use computer-aided multi-criteria decision support tools to evaluate divers according to their work experience and physical fitness.

The importance weights of the sub-criteria are determined using the Analytic Hierarchy Process (AHP) based on expert opinions. Two rankings of six divers for seven job specializations are obtained from their scores for work experience and physical fitness. The divers' scores according to these two main criteria are then used in Data Envelopment Analysis (DEA) to reach an aggregate ranking of the divers.

This methodology enabled us to determine a ranking of the candidates for seven underwater project types, considering all factors in an objective and systematic way and reducing the conflicts and confusion that might result from subjective decisions.

Tamer Ozyigit, Salih Murat Egi, Salih Aydın, Nevzat Tunc
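AHP priority weights are commonly derived from a reciprocal pairwise comparison matrix; the row geometric-mean method is one standard approximation. The abstract does not say which variant the authors use, so the following is a generic sketch with illustrative numbers.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix using the row geometric-mean method."""
    n = len(pairwise)
    gm = [prod(row) ** (1.0 / n) for row in pairwise]  # geometric mean per row
    total = sum(gm)
    return [g / total for g in gm]                     # normalize to sum to 1
```

For example, if experts judge work experience three times as important as physical fitness, the matrix [[1, 3], [1/3, 1]] yields weights of 0.75 and 0.25.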
Digital Inclusion of the Elderly: An Ethnographic Pilot-Research in Romania

This study draws attention to the interaction between the elderly and new digital technologies. Based on ethnographic research at one of the first Biblionet Internet Centers opened in Romania through the Global Libraries Project, the research draws its originality from the emic perspective it has embarked on, offering valuable insights into elderly people's attitudes toward, and direct experiences with, computer and Internet use.

Corina Cimpoieru
CRAS: A Model for Information Representation in a Multidisciplinary Knowledge Sharing Environment for Its Reuse

The objective of this work is to propose a model for accessing information in information-bearing objects (documents) in a multidisciplinary and collaborative environment for its reuse. The proposed model is associated with information system development for product innovation. Forging this specification of information access is expected to provide a basis for a programmable methodology for accessing a wide range of information in a consistent manner. For example, it was conceived to provide a basis for addressing an information space with specific parameters, and to provide a programming methodology in terms of events, methods and properties for such a space. The conception was meant not only to create the associated programming methodology but also the storage of shared knowledge emanating from multidisciplinary collaborative environments. The initial concern was to provide a way to share expert information for transport product innovation in a region of France.

The work identified four parameters of information-bearing objects: Content, Reference, Annotation and Support. These parameters were used to propose a methodology of information representation associated with the electronic domain. The parameters are explained in detail with examples.

Yueh Hui Kao, Alain Lepage, Charles Robert
A Comparative Study on Different License Plate Recognition Algorithms

In recent decades, vehicle license plate recognition systems have become a central part of many traffic management and security systems, such as automatic speed control, tracking of stolen cars, automatic toll management, and access control to restricted areas. This paper discusses common techniques for license plate recognition and compares them based on performance and accuracy. This evaluation helps developers and end-users choose the most appropriate technique for their applications. The study shows that the dynamic programming algorithm is the fastest and the Gabor transform is the most accurate.

Hadi Sharifi, Asadollah Shahbahrami
A Tag-Like, Linked Navigation Approach for Retrieval and Discovery of Desktop Documents

Computer systems provide users with the ability to create, organize, store and access information. Most of this information is in the form of documents in files organized in the hierarchical folder structures provided by operating systems. Operating-system-provided access is mainly through structure-guided navigation and through keyword search. An investigation into the access and utilization of these documents revealed a need to reconsider these navigation methods. An improved method of access to these documents is proposed, based on the previously effective use of metadata in search, retrieval and annotation systems. The underlying organization is based on a model for navigation whereby documents are represented using index terms, and the associations between them are exposed to create a linked, similarity-based navigation structure. Evaluation of an interface instantiating this approach suggests it can reduce the user's cognitive load and enable efficient and effective retrieval, while also providing cues for the discovery and recognition of associations between documents.

Gontlafetse Mosweunyane, Leslie Carr, Nicholas Gibbins
Heuristic Approach to Solve Feature Selection Problem

Feature selection is one of the successful methods in classification problems. Feature selection algorithms try to classify an instance using a lower-dimensional representation, instead of the huge number of available features, with higher and acceptable accuracy. In fact, an instance may contain useless features which can result in misclassification. An appropriate feature selection method tries to increase the effect of significant features while ignoring the insignificant subset of features. In this work, feature selection is formulated as an optimization problem, and a novel feature selection procedure is proposed in order to achieve better classification results. Experiments on a standard benchmark demonstrate that applying harmony search in the context of feature selection is a feasible approach and improves the classification results.

Rana Forsati, Alireza Moayedikia, Bahareh Safarkhani
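Harmony search over binary feature masks can be sketched as follows. This is a generic binary harmony search, not the authors' procedure; the parameter values (harmony memory size, HMCR, PAR) are typical defaults, not the paper's.

```python
import random

def harmony_search_fs(n_features, fitness, hms=10, hmcr=0.9, par=0.3,
                      iters=200, seed=1):
    """Binary harmony search for feature selection (illustrative sketch).
    `fitness` maps a 0/1 mask over features to a score to maximize."""
    rng = random.Random(seed)
    memory = [[rng.randint(0, 1) for _ in range(n_features)]
              for _ in range(hms)]
    scores = [fitness(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_features):
            if rng.random() < hmcr:              # memory consideration
                bit = rng.choice(memory)[j]
                if rng.random() < par:           # pitch adjustment: flip bit
                    bit = 1 - bit
            else:                                # random consideration
                bit = rng.randint(0, 1)
            new.append(bit)
        s = fitness(new)
        worst = scores.index(min(scores))
        if s > scores[worst]:                    # replace worst harmony
            memory[worst], scores[worst] = new, s
    best = scores.index(max(scores))
    return memory[best], scores[best]
```

In a real setting, `fitness` would be the cross-validated accuracy of a classifier trained on the selected feature subset, possibly penalized by subset size.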
Context-Aware Systems: A Case Study

Context-aware systems are a promising approach to facilitating daily-life activities. Concerning communication services, business users may sometimes be so overloaded with work that they become temporarily unable to handle incoming communications. After surveying the challenges of building context-aware systems, we introduce HEP, a system that recommends communication services to the caller based on the callee's context. HEP's main context sources are the usage history of the different communication services and the users' calendars. It has been prototyped and tested at Orange Labs.

Bachir Chihani, Emmanuel Bertin, Fabrice Jeanne, Noel Crespi
Handling Ambiguous Entities in Human-Robot Instructions Using Linguistic and Context Knowledge

Behaviour-based control has been considered the best approach to controlling autonomous robots. Robots are expected to assist people in everyday living environments such as houses or offices. Such situations require natural interaction between people and robots. One way to facilitate this is to deploy a natural language interface (NLI) for human-robot interaction. The major obstacle in developing an NLI is that natural language is inherently ambiguous. The ambiguity problem occurs when a word in a human instruction has more than one meaning. To date, no existing NLI processor resolves this problem well. This paper presents a framework and an approach for resolving the problem, developed by applying fuzzy sets and possibility theory to linguistic and context knowledge.

Shaidah Jusoh, Hejab M. Alfawareh
Cognitive Design of User Interface: Incorporating Cognitive Styles into Format, Structure and Representation Dimensions

With the introduction of the online museum, the user interface has become a new medium through which museum collections can be exhibited and promoted more effectively. For promotion purposes, this paper stresses the establishment of a digital collection structure and the improvement of the interface design for users. It is critical to pay attention to cognitive design and to ensure that the interface is usable, in order to help users understand the displayed information. User interface design for online museums has commanded significant attention from designers and researchers, but the cognitive design perspective is still lacking. In an effort to formalize the design, this report extends initial work on a user interface design framework for understanding cognitively based user interface design. An individual-differences approach is adopted to explore possible user interface design elements. This study improves the framework through an empirical investigation that tests the hypothesized links between cognitive styles and user interface dimensions. The research method uses Field Dependent and Field Independent styles as the case study, together with a web-based survey of online museum visitors. The analysis suggests that cognitive styles do influence user interface dimensions. These design elements carry implications for user interface design in online environments, for cognitive design in user interface development, and for identifying cognitive user interface design for cultural websites that support users browsing online museum collections. This effort may contribute to increasing the usability of such websites.

Natrah Abdullah, Wan Adilah Wan Adnan, Nor Laila Md Noor
The Impact of an Online Environment on Reading Comprehension: A Case of Algerian EFL Students

In this study, the effects of an online reading-comprehension environment were investigated through a statistical analysis of a sub-sample of fifth-year students in the Computer Science Department of Batna University (Algeria). The students’ native language was Arabic, and they were learning English as a second foreign language. The two research questions of this study are: 1. Are there any differences in students’ reading between paper-and-pencil and web modes? 2. Are there any differences in students’ reading between individual and collaborative modes? The results show that working with our online learning environment significantly improved the students’ motivation and positively affected higher-level knowledge and skills.

Samir Zidat, Mahieddine Djoudi
Backmatter
Metadata
Title
Digital Information and Communication Technology and Its Applications
Edited by
Hocine Cherifi
Jasni Mohamad Zain
Eyas El-Qawasmeh
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-22027-2
Print ISBN
978-3-642-22026-5
DOI
https://doi.org/10.1007/978-3-642-22027-2
