2011 | Book

Digital Information Processing and Communications

International Conference, ICDIPC 2011, Ostrava, Czech Republic, July 7-9, 2011, Proceedings, Part I

Edited by: Vaclav Snasel, Jan Platos, Eyas El-Qawasmeh

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science

About this Book

This two-volume set (CCIS 188 and CCIS 189) constitutes the refereed proceedings of the International Conference on Digital Information Processing and Communications, ICDIPC 2011, held in Ostrava, Czech Republic, in July 2011. The 91 revised full papers of both volumes presented together with 4 invited talks were carefully reviewed and selected from 235 submissions. The papers are organized in topical sections on network security; Web applications; data mining; neural networks; distributed and parallel processing; biometrics technologies; e-learning; information ethics; image processing; information and data management; software engineering; data compression; networks; computer security; hardware and systems; multimedia; ad hoc networks; artificial intelligence; signal processing; cloud computing; forensics; security; software and systems; mobile networking; and some miscellaneous topics in digital information and communications.

Table of Contents

Frontmatter

Network Security

Mobile Services Providing Devices Rating Based on Priority Settings in Wireless Communication Systems

Industrial process control is becoming increasingly important as more and more systems are integrated with modern information-technology-based tools to improve process and control performance. The main problems arise during execution due to poor coordination of the initial safety conditions and the absence of a capability to rate service-providing devices, with further content error correction in real time. The proposed solution consists of integrating secure middleware-based software tools into the main service-providing systems, with devices rated according to priority settings.

Danielius Adomaitis, Violeta Bulbenkienė, Sergej Jakovlev, Gintaras Pridotkas, Arūnas Andziulis
Symmetric Cryptography Protocol for Signing and Authenticating Digital Documents

The fast growth of technology and the demands of contemporary globalization have meant that most organizations use electronic means for the communication and formalization of documents. One of the main problems faced by reviewers is recognizing counterfeit documents, and the electronic signature of documents is a preliminary solution to this problem. In our research we are interested in developing a web service that allows users to sign documents and verify their authenticity. To protect this web service against malicious activity, aspects of computer security such as steganography, cryptography and security protocols must be considered. In this article we introduce a security protocol using a symmetric cryptography scheme to sign and authenticate digital documents. We have formally verified this protocol and found that it provides the fourth level of authentication in Lowe's hierarchy. In addition, we address security aspects that must be taken into account to avoid attacks on these kinds of applications, and some implementations we are developing.
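
As a hedged aside, the snippet below sketches the general idea of symmetric-key document authentication that such a protocol builds on. It is a minimal illustration using Python's standard library, not the authors' protocol; key distribution and the Lowe-hierarchy guarantees are out of scope, and the key name is hypothetical.

```python
# Minimal sketch of symmetric-key document authentication (illustrative
# only; the paper's protocol and key exchange are not reproduced here).
import hashlib
import hmac

SHARED_KEY = b"pre-shared secret"  # hypothetical key known to both parties

def sign_document(document: bytes) -> str:
    """Return a MAC tag that authenticates the document."""
    return hmac.new(SHARED_KEY, document, hashlib.sha256).hexdigest()

def verify_document(document: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_document(document), tag)

tag = sign_document(b"contract v1")
assert verify_document(b"contract v1", tag)
assert not verify_document(b"contract v2", tag)  # any change invalidates it
```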

Juan C. Lopez Pimentel, Raúl Monroy, Víctor Fernando Ramos Fon Bon
Cooperative Black-Hole Attack: A Threat to the Future of Wireless Mesh Networking

Wireless mesh networking has emerged as the next-generation wireless communication technology, providing a good solution for cheap wireless networking. The network layer in a WMN is responsible for routing and packet delivery, and intruders take a deep interest in sabotaging this layer. The black-hole attack is a type of denial-of-service (DoS) attack that can disrupt the services of this layer. When the malicious nodes collude, the attack is called a cooperative black-hole attack, and its result is far more severe. The aim of this paper is to shed light on the drawbacks of the security mechanisms used to mitigate this attack and to propose an improved security scheme.

Shree Om, Mohammad Talib

Web Applications

Towards a Generic Cloud-Based Modeling Environment

This paper presents a concept for a flexible diagramming framework for building engineering and educational applications. The framework was designed to serve as a platform for online services and collaborative environments where users typically work on remotely stored, shared data through a browser-based user interface. The paper summarizes the common requirements for such services, overviews related approaches and gives insights into some design challenges through the analysis of use-cases. The design problem is examined from a user-centered view: the key motivation of our research is to find innovative, possibly device-independent solutions that enable seamless user experiences. Finally, a generic framework based on an HTML-JavaScript library is proposed, which could be employed to implement a wide range of software solutions from e-learning to cloud-based modeling environments.

Laszlo Juracz, Larry Howard
Tool for Sun Power Calculation for Solar Systems

This paper deals with map visualization and spatial calculations for solar and photovoltaic systems. In particular, it discusses solar radiation and the proper orientation and slope of photovoltaic panels in relation to the type and inclination of the landscape. The first part of the work focuses on the EU's interactive maps (the Photovoltaic Geographical Information System) and describes the calculations and the input data used for developing the application: calculated solar radiation intensity together with SRTM DEM and ASTER GDEM data. The second part is devoted to the development of custom applications, which use the open-source control from the GMaps.NET project, Google Maps and Silverlight Bing Maps. In conclusion, the results of our solutions are compared with the EU model.

Michal Paluch, Radoslav Fasuga, Martin Nemec
FWebQ – A Framework for Websites Quality Evaluation

Website quality is strategically important for organizations and for the satisfaction of their clients. In this paper we propose a high-level structure for the global quality evaluation of a website. This structure is based on three main dimensions (contents, services, technical) together with characteristics, sub-characteristics and attributes, which will substantiate the development of broad website quality evaluation, comparison and improvement methodologies for particular sectors of activity.

Álvaro Rocha
A JRuby Infrastructure for Converged Web and SIP Applications

In this paper we present a Ruby infrastructure that can be used for rapid development of Web applications with SIP signaling capabilities. We construct this infrastructure by combining the Java based Cipango SIP/HTTP Servlet Application Server with the Ruby on Rails Web development framework. We provide detailed explanations of the steps required to build this infrastructure and produce a SIP registrar example application with a simple Web interface. The described infrastructure allows Ruby applications to utilize the entire functionality provided by the SIP Servlet API and can be used as a good starting point for the development of Ruby-based domain specific languages for the SIP protocol. We also compare the proposed infrastructure with the existing Ruby frameworks for SIP application development.

Edin Pjanić, Amer Hasanović

Data Mining

XML Documents – Publishing from Relational Databases

The XML language is becoming a standard for data exchange between various Web applications. However, despite the wide adoption of XML, most of these applications store their information in relational databases. This fact is unlikely to change, considering the many advantages of relational database systems, such as reliability, scalability, performance and mature tools. Thus, while the XML language is still under development, the necessity of a mechanism to publish XML documents from relational databases is obvious.

Mihai Stancu
Simple Rules for Syllabification of Arabic Texts

The Arabic language is the sixth most used language in the world today, and it is one of the official languages of the United Nations. Moreover, the Arabic alphabet is the second most widely used alphabet around the world. Therefore, the computer processing of the Arabic language and alphabet is an increasingly important task. In the past, several books on the analysis of the Arabic language were published, but language analysis is only one step in language processing. Several approaches have been developed in the field of text compression. The first and most intuitive is character-based compression, which is suitable for small files. Another approach, called word-based compression, is very suitable for very long files. The third approach, called syllable-based compression, uses the syllable as the basic element. Algorithms for the syllabification of English, German and other European languages are well known, but syllabification algorithms for Arabic and their usage in text compression have not been deeply investigated. This paper describes a new and very simple algorithm for the syllabification of Arabic and its usage in text compression.
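
For illustration only, a toy syllabifier in the spirit of such simple rules might greedily split transliterated words into CV/CVC syllables, the canonical short Arabic syllable shapes. The transliteration, vowel set and regex below are our assumptions, not the paper's algorithm, and word-initial vowels or long vowels are not handled.

```python
# Toy CV/CVC syllabifier over a Latin transliteration (hypothetical rules;
# the paper's algorithm operates on Arabic text and is not reproduced).
import re

# one onset consonant + one short vowel + an optional coda consonant that
# is not itself the onset of the next syllable
SYLLABLE = re.compile(r"[^aiu][aiu](?:[^aiu](?![aiu]))?")

def syllabify(word: str) -> list[str]:
    return SYLLABLE.findall(word)

print(syllabify("kataba"))    # ['ka', 'ta', 'ba']
print(syllabify("maktabun"))  # ['mak', 'ta', 'bun']
```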

Hussein Soori, Jan Platos, Vaclav Snasel, Hussam Abdulla
Automatic Classification of Sleep/Wake Stages Using Two-Step System

This paper presents the application of an automatic classification system to 53 animal polysomnographic recordings. A two-step automatic system is used to score the recordings into three traditional stages: wake, NREM sleep and REM sleep. In the first step of the analysis, the monitored signals are analyzed using an artifact identification strategy and artifact-free signals are selected. Then, 30-second epochs are classified by artificial neural networks according to relevant features extracted from the available signals. The overall classification accuracy reached by the presented classification system exceeded 95% on the 53 polysomnographic recordings analyzed.
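
A minimal sketch of the second step's feature extraction is given below, assuming EEG band powers over 30-second epochs as the "relevant features"; the paper's exact feature set, sampling rate and network topology are not specified here, so all parameters are illustrative.

```python
# Per-epoch band-power features (assumed features; sampling rate and bands
# are hypothetical stand-ins for the paper's unspecified choices).
import numpy as np
from scipy.signal import welch

FS = 250             # assumed sampling rate in Hz
EPOCH = 30 * FS      # 30-second epochs, as in the paper
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12), "beta": (12, 30)}

def epoch_features(eeg: np.ndarray) -> list[np.ndarray]:
    """One feature vector (power per band) for every 30-second epoch."""
    feats = []
    for start in range(0, len(eeg) - EPOCH + 1, EPOCH):
        f, pxx = welch(eeg[start:start + EPOCH], fs=FS, nperseg=4 * FS)
        feats.append(np.array([pxx[(f >= lo) & (f < hi)].sum()
                               for lo, hi in BANDS.values()]))
    return feats  # these vectors would feed the neural-network classifier
```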

Lukáš Zoubek, Florian Chapotot
T-SPPA: Trended Statistical PreProcessing Algorithm

Traditional machine learning systems learn from non-relational data, but in fact most real-world data is relational. Normally the learning task is done using a single flat file, which prevents the discovery of effective relations among records. Inductive logic programming and statistical relational learning partially solve this problem. In this work, we resort to another method to overcome this problem and propose T-SPPA: Trended Statistical PreProcessing Algorithm, a preprocessing method that translates related records into a single record before learning. Using different kinds of data, we compare the results of learning with the transformed data against results produced when learning from the original data, to demonstrate the efficacy of our method.

Tiago Silva, Inês Dutra
Traffic Profiling in Mobile Networks Using Machine Learning Techniques

This paper tackles the problem of identifying characteristic usage profiles in traffic related to packet services (PS) in mobile access networks. We demonstrate how this can be done through clustering of vast amounts of network monitoring data, and show how the discovered clusters can be used to mathematically model the PS traffic. We also demonstrate the accuracy of the models obtained using this methodology. This problem is important for the accurate dimensioning of mobile access network infrastructure.

Henryk Maciejewski, Mateusz Sztukowski, Bartlomiej Chowanski
A Deterministic Approach to Association Rule Mining without Attribute Discretization

In association rule mining, when attributes have numerical values, the usual method employed in deterministic approaches is to discretize them by defining proper intervals. But the type and parameters of the discretization can notably affect the quality of the generated rules. This work presents a method based on a deterministic exploration of the interval search space that uses no prior discretization but instead generates intervals dynamically. The algorithm also employs auxiliary data structures and certain optimizations to reduce the search and improve the quality of the extracted rules. Experiments have been performed comparing it with the well-known deterministic Apriori algorithm. The algorithm has also been used to extract association rules from a dataset with information about Sub-Saharan African countries, obtaining a variety of good-quality rules.
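
As a hedged illustration, evaluating one candidate numeric-interval rule reduces to computing its support and confidence, as in the sketch below. The attribute names and data are hypothetical, and the deterministic search over the interval space with its optimizations is not reproduced.

```python
# Quality of the rule: attr in [lo, hi] => target == target_value
# (illustrative only; the paper's interval-generation strategy is omitted).
def rule_quality(rows, attr, lo, hi, target, target_value):
    antecedent = [r for r in rows if lo <= r[attr] <= hi]
    both = [r for r in antecedent if r[target] == target_value]
    support = len(both) / len(rows)
    confidence = len(both) / len(antecedent) if antecedent else 0.0
    return support, confidence

rows = [{"gdp": 1.2, "dev": "low"}, {"gdp": 3.4, "dev": "mid"},
        {"gdp": 0.8, "dev": "low"}, {"gdp": 5.1, "dev": "high"}]
print(rule_quality(rows, "gdp", 0.0, 2.0, "dev", "low"))  # (0.5, 1.0)
```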

Juan L. Domínguez-Olmedo, Jacinto Mata, Victoria Pachón, Manuel J. Maña
Visualization and Simulation in Data Processing with the Support of Business Intelligence

The aim of this paper is to suggest improvement opportunities for existing solutions in Business Intelligence (BI) product architecture, drawing useful inspiration from database and operating systems. BI architecture is very broad, and the recommended merger leads to simplification based on the integration of system sources and the available methods for data integration. BI products offer procedures for storing data for future processing. The benefit is a relatively large variety of analyses, although questions remain concerning their utility and the complexity of their implementation. To be of practical value, they must accommodate the dynamic and diverse needs of businesses and individuals. The presented analysis of selected products is based on Petri nets with simulation support, followed by analysis using an incidence matrix and reachable markings. This analysis helps in the search for new perspectives and innovations.

Milena Janakova

Neural Networks

Using Two-Step Modified Parallel SOM for Song Year Prediction

This paper uses a simple modification of the classic Kohonen self-organizing map (SOM) that allows parallel processing of input data vectors, or partitioning of the problem when there is insufficient memory for all vectors from the training set, for computation of the SOM by CUDA on the YearPredictionMSD data set. The algorithm, presented in a previous paper, pre-selects potential centroids of data clusters and uses them as weight vectors in the final SOM network. The suitability of this algorithm has already been demonstrated on images as well as on two well-known datasets of hand-written digits.

Petr Gajdoš, Pavel Moravec
Modal Analysis – Measurements versus FEM and Artificial Neural Networks Simulation

The article deals with the experimental modal analysis of glass laminate plates of different shapes, and the results are compared with those obtained by artificial neural network (ANN) and finite element method (FEM) simulation. We have investigated the dependence of the generated mode frequency on the thickness as well as the shape (rounding) of the glass laminate samples. The agreement between the experimental and simulated results is very good.

David Seidl, Pavol Koštial, Zora Jančíková, Ivan Ružiak, Soňa Rusnáková, Martina Farkašová
View from Inside One Neuron

Neural networks started to be developed around the idea of creating models of real neural systems, driven by the need to understand how biological systems work. New areas of neural computing are trying to take at least one step beyond what digital computing means; the key point is based on learning rather than programming. We designed a mathematical model for information processing in which the neuron is viewed from inside as a feedback system that controls the information flow. A process of learning at the "molecular" level (internal neural learning), in the form of a computational algorithm inspired by real brain functioning, is introduced.

Luciana Morogan

Distributed and Parallel Processing

A Fault Tolerant Scheduling Algorithm for DAG Applications in Cluster Environments

Fault tolerance is an essential requirement in systems running applications that must continue executing even when some system components fail. In this paper, a fault tolerant task scheduling algorithm is proposed for mapping task graphs to heterogeneous processing nodes in cluster computing systems. The starting point of the algorithm is a DAG representing an application with information about the tasks: the execution time of the tasks on the target system processors, the communication times between tasks with data dependencies, and the number of processor failures (ε) that the scheduling algorithm should tolerate. The algorithm is based on the active replication scheme, and it schedules ε+1 replicas of each task to achieve the required fault tolerance. Simulation results show the efficiency of the proposed algorithm in spite of its lower complexity.
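
A minimal sketch of the active-replication idea follows: every task is placed on ε+1 distinct processors, so any ε failures leave at least one live replica. Greedy earliest-finish placement is our assumption, and precedence constraints and communication delays are deliberately omitted, so this is not the paper's heuristic.

```python
# eps+1 active replicas per task on distinct processors (sketch; greedy
# placement assumed, dependencies and communication times omitted).
def replicate_schedule(tasks, exec_time, processors, eps):
    """tasks: ids in topological order; exec_time[t][p]: runtime of t on p."""
    ready = {p: 0.0 for p in processors}   # next free instant per processor
    placement = {}
    for t in tasks:
        # pick the eps+1 processors that would finish task t earliest
        best = sorted(processors,
                      key=lambda p: ready[p] + exec_time[t][p])[:eps + 1]
        placement[t] = best
        for p in best:
            ready[p] += exec_time[t][p]
    return placement

# two processors, eps = 1: every task is duplicated on both
print(replicate_schedule([0, 1],
                         {0: {"p0": 3, "p1": 4}, 1: {"p0": 2, "p1": 2}},
                         ["p0", "p1"], eps=1))
```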

Nabil Tabbaa, Reza Entezari-Maleki, Ali Movaghar
A Bee Colony Task Scheduling Algorithm in Computational Grids

The efficient scheduling of independent and sequential tasks on distributed and heterogeneous computing resources within grid computing environments is an NP-complete problem. Therefore, using heuristic approaches to solve the scheduling problem is a very common and accepted method in these environments. In this paper, a new task scheduling algorithm based on the bee colony optimization approach is proposed. The algorithm uses artificial bees to appropriately schedule the submitted tasks to the grid resources. Applying the proposed algorithm to grid computing environments reduces the maximum delay and finish times of the tasks, and the total makespan of the environment is minimized. The proposed algorithm not only minimizes the makespan of the environment, but also satisfies the deadline and priority requirements of the tasks. Simulation results obtained from applying the algorithm to different grid environments show the superiority of the algorithm over other similar scheduling algorithms.
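
As a hedged illustration of the objective such a search optimizes, the sketch below (ours, not the authors') computes the makespan of one task-to-resource assignment; a bee colony algorithm would generate and refine such assignments, and the paper additionally constrains them with deadlines and priorities.

```python
# Makespan of one candidate assignment: the fitness a bee-colony search
# would minimize (illustrative; the paper's bee operators are not shown).
def makespan(assignment, runtime):
    """assignment[i]: resource of task i; runtime[i][r]: time of i on r."""
    load = {}
    for task, res in enumerate(assignment):
        load[res] = load.get(res, 0.0) + runtime[task][res]
    return max(load.values())

# three tasks on two resources: resource 0 carries tasks 0 and 2
print(makespan([0, 1, 0], [[4, 9], [5, 3], [2, 7]]))  # -> 6
```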

Zohreh Mousavinasab, Reza Entezari-Maleki, Ali Movaghar
Interaction Distribution Network

Content Distribution Networks (CDNs) have been effective in accelerating access to and the growth of web content such as web pages and streaming audio and video. However, the relatively static and bulky data that CDNs are designed to serve makes them unsuitable for latency-sensitive interactive streams such as network games or real-time conferencing. In this position paper, we describe the concepts for an Interaction Distribution Network (IDN), which supports small, interactive data streams in a scalable manner.

An IDN shares certain concepts with a CDN in that end-user clients connect to the closest serving proxies for accessing and updating interaction data packets. However, the key difference lies in the bi-directional nature of the interaction streams, and in that the data streams may belong to a certain "interaction group," within which some additional processing on the data is possible. An IDN may support existing instant messenger (IM) services and Massively Multiplayer Online Games (MMOGs), while enabling new usage scenarios. We discuss the key challenges, potential solutions, and implications of IDNs in this paper.

Shun-Yun Hu
Bi-relational P/T Petri Nets and the Modeling of Multithreading Object-Oriented Programming Systems

Bi-relational P/T Petri nets are a newly introduced class of Petri nets, whose definitions and implementation are the main topics of this paper. They feature certain new and original concepts compared with conventional P/T Petri nets, and they can be successfully used in the design, modeling and verification of multithreading object-oriented programming systems executing in parallel or distributed environments. This paper briefly presents the basic characteristics of bi-relational P/T Petri nets, including the newly introduced static and dynamic net pages, dynamic page instances with their possible dynamic creation and destruction, the functionality of multiarcs, and the mechanism of transition execution when modeling object-oriented programming systems. The concept of subpages of net pages and its application to modeling declared static and non-static variables and methods, including class inheritance and polymorphism, is also of interest. The basic principles of bi-relational P/T Petri nets could then be further applied when defining bi-relational object Petri nets.

Ivo Martiník
Developing Parallel Applications Using Kaira

We are developing a tool named Kaira, intended for the modelling, simulation and generation of parallel applications. Modelling is based on a variant of Coloured Petri Nets, which provide the theoretical background and whose syntax and semantics we use. Moreover, our tool can automatically generate standalone parallel applications from the model. In this paper we present how to develop parallel applications in Kaira. As an example we use a two-dimensional heat-flow problem solved by the Jacobi finite difference method. We present different aspects of, and different approaches to, modelling this problem in Kaira at different levels of abstraction.

Stanislav Böhm, Marek Běhálek, Ondřej Garncarz
Context-Aware Task Scheduling for Resource Constrained Mobile Devices

Nowadays, mobile devices are very popular and accessible, so users prefer to substitute mobile devices for stationary computers to run their applications. On the other hand, mobile devices are always resource-poor in contrast with stationary computers: portability and limitations on weight and size restrict mobile devices' processor speed, memory size and battery lifetime. One of the most common solutions in pervasive computing environments to the challenges of computing on resource-constrained mobile devices is cyber foraging, wherein nearby and more powerful stationary computers called surrogates are exploited to run the whole or parts of applications. However, cyber foraging is not beneficial in all circumstances, so there should be a solver unit to choose the best location, either the mobile device or a surrogate, to run a task. In this paper, we propose a mechanism to select the best method between local execution on the mobile device and remote execution on nearby surrogates by calculating the execution cost according to context metrics such as mobile device, surrogate, network, and application specifications. Experimental results show the superiority of our proposed mechanism, with respect to latency, over both local execution of the application on the mobile device and blind task offloading.

Somayeh Kafaie, Omid Kashefi, Mohsen Sharifi

Biometrics Technologies

An Application for Singular Point Location in Fingerprint Classification

A singular point (SP) is one of the local fingerprint features, used as a landmark due to its scale and rotation immutability. SP characteristics have been widely used as a feature vector in many fingerprint classification approaches. This paper introduces a new application of singular point location in fingerprint classification, considering it as a reference point for the partitioning process in the proposed pattern-based classification algorithm. The key idea of the proposed method is to divide the fingerprint into small sub-images using the SP location, and then to create distinguishing patterns for each class using a frequency-domain representation of each sub-image. The performance evaluation of the SP detection and the proposed algorithm on different database subsets focused on both processing time and classification accuracy as the key issues of any classification approach. The experimental work shows the advantage of using the singular point location with the proposed classification algorithm: the achieved classification accuracy over FVC2002 database subsets is up to 91.4%, with reasonable processing time and robustness to scale, shift, and rotation.
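
The partitioning step can be pictured as cutting a fixed grid of blocks around the detected SP, as in the sketch below; the grid and block sizes are our assumptions, border handling is omitted, and the subsequent frequency-domain pattern building is not reproduced.

```python
# Grid of sub-images centred on the singular point (sizes are assumed;
# images where the SP lies near the border would need padding).
import numpy as np

def subimages_around(img: np.ndarray, sp_row: int, sp_col: int,
                     block: int = 32, grid: int = 3) -> list[np.ndarray]:
    """Return grid x grid blocks of size block x block around the SP."""
    half = (grid * block) // 2
    top, left = sp_row - half, sp_col - half
    return [img[top + i * block: top + (i + 1) * block,
                left + j * block: left + (j + 1) * block]
            for i in range(grid) for j in range(grid)]

parts = subimages_around(np.zeros((300, 300)), 150, 150)
print(len(parts), parts[0].shape)  # 9 (32, 32)
```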

Ali Ismail Awad, Kensuke Baba
A Classification-Based Graduates Employability Model for Tracer Study by MOHE

This study constructs a Graduates Employability Model using a data mining approach, specifically the classification task. To achieve this, we use data sourced from the Tracer Study, a web-based survey system run by the Ministry of Higher Education, Malaysia (MOHE) since 2009. The classification experiment is performed using various Bayes algorithms to determine whether a graduate has been employed, remains unemployed or is in an undetermined situation. The performance of the Bayes algorithms is also compared against a number of tree-based algorithms. In conjunction with the tree-based algorithms, Information Gain is used to rank the attributes, and the results showed that the top three attributes with a direct impact on employability are job sector, job status and reason for not working. Results also showed that J48, a variant of the decision-tree algorithm, performed with the highest accuracy, 92.3%, compared to the average of 90.8% for the Bayes algorithms. This leads to the conclusion that a tree-based classifier is more suitable for the tracer data due to the information-gain strategy.

Myzatul Akmam Sapaat, Aida Mustapha, Johanna Ahmad, Khadijah Chamili, Rahamirzam Muhamad
Improving Network Performance by Enabling Explicit Congestion Notification (ECN) in SCTP Control Chunks

The need for a reliable transmission protocol that can cover the Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) has prompted the Internet Engineering Task Force (IETF) to define a new protocol called the Stream Control Transmission Protocol (SCTP). This paper proposes adding the Explicit Congestion Notification (ECN) mechanism to SCTP chunks (the INIT chunk and the INIT-ACK chunk) to reduce the delay of transferring important data during congestion, as compared with the TCP and UDP protocols. The paper also discusses the details of adding ECN, and the reason for choosing Random Early Detection (RED). Through experimental analysis, we compare SCTP with ECN enabled in the INIT-ACK chunk to SCTP without it, and demonstrate the impact of ECN on SCTP delay time.
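
For reference, RED's two core computations, the averaged queue length and the linear marking probability, are sketched below; these are the standard RED formulas the proposal builds on, with illustrative parameter values, while the SCTP chunk extensions themselves are protocol changes and are not shown.

```python
# Standard RED computations (parameter values are illustrative defaults,
# not taken from the paper's experiments).
def red_update(avg: float, queue_len: int, w: float = 0.002) -> float:
    """Exponentially weighted moving average of the instantaneous queue."""
    return (1 - w) * avg + w * queue_len

def mark_probability(avg: float, min_th: float = 5, max_th: float = 15,
                     max_p: float = 0.1) -> float:
    """ECN-mark probability: 0 below min_th, linear up to max_p, 1 above."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)

print(mark_probability(red_update(9.0, queue_len=40)))  # small but nonzero
```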

Hatim Mohamad Tahir, Abas Md Said, Mohammed J. M. Elhalabi, Nurnasran Puteh, Azliza Othman, Nurzaid Muhd Zain, Zulkhairi Md Dahalin, Muhammad Hafiz Ismail, Khuzairi Mohd Zaini, Mohd Zabidin Hussin

E-Learning

Obtaining Knowledge in Cases with Incomplete Information

Providing students with timely feedback on their level of concept understanding, learning and skill mastery is time consuming. One way to resolve this issue is to involve computer-based assessment tools. When optional tests are considered, educators have to address the problem of incomplete information.

The research described in this paper aims at facilitating the process of drawing conclusions while working with incomplete information. The approach is based on formal concept analysis and six-valued logic. The Armstrong axioms are further applied to obtain knowledge in cases with incomplete information.

Sylvia Encheva
Complete Graphs and Orderings

The ranking of help functions with respect to their usefulness is the main focus of this work. Here a help function is regarded as useful to a student if the student has succeeded in solving a problem after using it. Methods from the theory of partial orderings are then applied to facilitate an automated process of suggesting individualised advice on how to proceed in order to solve a particular problem. The decision-making process is based on the common assumption that, given a choice between two alternatives, a person will choose one. The partial orderings thus obtained all turned out to be linear orders, since each pair of alternatives is compared.

Sylvia Encheva
Assumptions for Business and IT Alignment Achievement

A key factor for a successful organization in the current dynamic environment is effective information technology (IT) supporting business strategies and processes. Aligning IT with business depends on many factors, including the level of communication and cooperation between the top management of organizations and the top management of IT departments. This paper builds on the rules published in [5], which create prerequisites for the achievement of business and IT alignment. The paper discusses the outcomes of a survey of six hundred organizations operating on the Czech market. The survey aimed to determine whether an organization meets the basic assumptions for being potentially mature in business/IT alignment, in order to select such organizations for future research into this matter. Characteristics of such organizations were deduced from the survey data. The survey results and analyses provide the base and direction for further business/IT alignment maturity research.

Renáta Kunstová
Measuring Students’ Satisfaction in Blended Learning at the Arab Open University – Kuwait

Blended learning is widely used in higher education institutions. The global prominence of this emerging trend is the result of the technological revolution that offered new possibilities of interactivity. The integration of e-learning as a complement to, rather than a substitute for, traditional learning created the hybrid approach to learning called blended learning. This paper attempts to measure the satisfaction students receive from their studies at the Arab Open University, Kuwait Branch, within a blended system. Student satisfaction is reliant on factors such as the academic support provided by the tutors, teaching materials, teaching pedagogy, the range of academic subjects taught, the IT infrastructure, the curriculum, the e-library and the different assessments provided by the institution. The aim of this paper is three-fold: first, to measure student satisfaction by developing an appropriate questionnaire that covers most sources of satisfaction and dissatisfaction; second, to identify the constructs that are critical to a satisfying blended learning experience; and finally, to provide feedback to the AOU officials on the effect of environmental factors, to be used as a guide for further extension of regional branches in other Arabic countries. To achieve these goals, a questionnaire containing fourteen items was administered. Students responded from different programs and different courses (n = 165). Data analysis showed that the AOU Kuwait Branch enjoys a high rate of student satisfaction.

Hassan Sharafuddin, Chekra Allani
Educational Technology Eroding National Borders

Businesses are continuously forced to reeducate and train their new workforce before they can be productive; existing employees also need to improve their skills or retrain for promotion and growth. To cater to this breed of students, the "schooling business" is faced with an inescapable demand to design new programs and courses. The growth in educational technology is also influencing academic institutions to redefine their endeavors in terms of producing learning while providing instruction. With the growing demand for higher education, such endeavors are crossing the physical boundaries of many countries. We are using online and hybrid learning models to attract traditional, distance and international students. A major component of this model is the web-based course management system that provides an environment for teaching, learning and interaction 24/7. The model optimally combines interactive classroom, web-based lectures and traditional instruction, and it has successfully been used to offer courses at SUNY-Fredonia with students registering from around the globe. This model is presented, and the opportunities and challenges of web technologies in education are discussed.

Khalid J. Siddiqui, Gurmukh Singh, Turhan Tunali
Multimedia Distance E-Learning System for Higher Education Students

The recent advances in information technologies, together with the increasing interest of large numbers of individuals in developing countries in obtaining higher education, have led to the evolution of online distance education. Distance e-learning can offer education to students who are busy or unable to attend face-to-face classroom lectures. This study focuses specifically on how distance learning will impact the learning of students taking an online software engineering course that includes a lab. It examines the ways in which the teacher and the students will perceive the online course, so that it is an effective and positive educational experience. The system effectively integrates the multimedia contents of the study materials with a virtual lab and with simulation of computer-generated diagrams and models, which enhances the productivity of the educational process.

Nedhal A. Al Saiyd, Intisar A. Al Sayed
Analysis of Learning Styles for Adaptive E-Learning

In adaptive e-learning we try to make learning more efficient by adapting the process of learning to students' individual needs. To make this adaptation possible, we need to know key student characteristics: motivation, group learning preferences, sensory type and various learning styles. One of the easiest ways to measure these characteristics is to use questionnaires. A new questionnaire was created because no existing questionnaire measured all these characteristics at once. This questionnaire was filled in by 500 students from different fields of study. The results were analyzed using clustering, decision trees and principal component analysis, and several interesting dependencies between students' properties were discovered.

Ondřej Takács, Jana Šarmanová, Kateřina Kostolányová
A Cloud Service on Distributed Multiple Servers for Cooperative Learning and Emergency Communication

A distributed multiple-server system with Web/DB-based services is designed and implemented, which can play an important role not only in providing an environment for cooperative learning but also in supporting a function for emergency communication. In many instances, such an environment or function used to be designed as a so-called dedicated system serving only a single purpose. In other words, these different functions frequently seem to be mutually exclusive, so they would be realized independently with completely different methodologies. In our case, however, the two different specifications have been accomplished by one and the same system. The system employs multiple servers located in a distributed campus network environment, each with multi-core processors. With CPUs virtualized through server virtualization, some programs are executed in parallel on the virtual servers, so the system can efficiently perform several functions. Based on our related work, two major applications are realized as cloud services on the system: it provides a cooperative learning environment as an educational tool as well as Web-based surveillance functions with emergency contact.

Yoshio Moritoh, Yoshiro Imai, Hiroshi Inomo, Wataru Shiraki

Information Ethics

Analysis of Composite Indices Used in Information Society Research

The problem of measurement is the Achilles heel of information society research. Despite this, or possibly for this reason, one can find many different quantitative studies of the issue, and studies using composite indices for such examination gain broader media popularity. The paper presents the results of an analysis of the most popular composite indices used in information society research. Discussed are the basic methodological problems as well as the strengths and weaknesses of using such tools.

Michał Goliński

Image Processing

A Survey on Image Steganography Algorithms and Evaluation

The technique of hiding confidential information within any media is known as steganography. Since both steganography and cryptography are used to protect confidential information, the two terms are sometimes used interchangeably. This is wrong, because the appearance of the output from the two processes is totally different: the output of a steganographic operation is not apparently visible, while for cryptography the output is scrambled in a way that easily draws attention. The detection of the presence of steganography is referred to as steganalysis. In this article we attempt to elucidate the main image steganography approaches, then evaluate the techniques, present the results in a table and discuss them.
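
As a concrete example of the simplest family of techniques such surveys cover, least-significant-bit (LSB) embedding replaces the lowest bit of each cover pixel with a message bit. The sketch below is illustrative and not tied to any specific algorithm evaluated in the article.

```python
# Classic LSB embedding and extraction on a flat list of 8-bit pixels
# (illustrative; capacity and detectability trade-offs are not addressed).
def embed_lsb(pixels: list[int], bits: list[int]) -> list[int]:
    """Replace the least significant bit of each pixel with a message bit."""
    out = pixels[:]
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels: list[int], n: int) -> list[int]:
    """Read the first n hidden bits back out of the stego pixels."""
    return [p & 1 for p in pixels[:n]]

stego = embed_lsb([200, 13, 78, 91], [1, 0, 1, 1])
assert extract_lsb(stego, 4) == [1, 0, 1, 1]   # pixels change by at most 1
```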

Arash Habibi Lashkari, Azizah Abdul Manaf, Maslin Masrom, Salwani Mohd Daud
Real-Time Monitoring Environment to Detect Damaged Buildings in Case of Earthquakes

The detection of damaged buildings after an earthquake is a vital process for rapid response to casualties caused by falling debris, before their health deteriorates. This paper presents a system for the real-time detection of damaged buildings in the case of an earthquake, using video cameras to supply information to national disaster management centers. The system consists of video cameras which monitor buildings continuously, an algorithm to rapidly detect damaged buildings from video images, and a database to store data concerning both sound and damaged buildings. The detection method used in the system is applicable to local areas only (several small buildings or one big building), but it may provide accurate information about the buildings immediately after a disaster.

Yasar Guneri Sahin, Ahmet Oktay Kabar, Bengi Saglam, Erkam Tek
Face Recognition Algorithm Using Two Dimensional Principal Component Analysis Based on Discrete Wavelet Transform

The goal of this paper is to improve the face recognition rate by applying different levels of the discrete wavelet transform (DWT) to reduce a high-dimensional image to a low-dimensional one. Two-dimensional principal component analysis (2DPCA) is utilized to find the face recognition accuracy rate on the ORL image database. This database contains images of 40 persons (10 different images each) in grayscale with a resolution of 92x112 pixels. An evaluation between 2DPCA and multilevel DWT/2DPCA has been carried out, assessed according to recognition accuracy, recognition rate, dimensionality reduction, computational complexity and multi-resolution data approximation. The results show that the recognition rate across all trials was higher using 2-level DWT/2DPCA than 2DPCA, with a time rate of 4.28 sec. The experiments also indicate that the recognition time improved from 692.91 sec to 1.69 sec, with recognition accuracy rising from 90% to 92.5%.
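
A rough sketch of the pipeline follows, using pywt and numpy as stand-ins: the 2-D DWT keeps only the low-frequency approximation, and 2DPCA then projects each reduced image onto the top eigenvectors of the image covariance matrix. The wavelet choice, level count and projection size are assumptions, not the paper's settings.

```python
# 2-level DWT dimensionality reduction followed by 2DPCA (sketch; the
# "haar" wavelet and k are assumed, and the classifier step is omitted).
import numpy as np
import pywt

def reduce_image(img: np.ndarray, levels: int = 2) -> np.ndarray:
    """Keep the low-frequency approximation after `levels` of 2-D DWT."""
    for _ in range(levels):
        img, _ = pywt.dwt2(img, "haar")  # discard detail coefficients
    return img

def twodpca_axes(images: list[np.ndarray], k: int) -> np.ndarray:
    """2DPCA: top-k eigenvectors of G = mean of (A - mean)^T (A - mean)."""
    mean = np.mean(images, axis=0)
    G = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    _, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    return vecs[:, -k:]                  # columns = top-k projection axes

# per-image features: reduce_image(A) @ W, an n x k matrix instead of n x m
```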

Venus AlEnzi, Mohanad Alfiras, Falah Alsaqre
Construction and Interpretation of Data Structures for Rotating Images of Coins

This article addresses the processing of multimedia data, proposing a rapid and computationally efficient method applicable in the Internet environment with minimal hardware and software requirements. The article maps the current state of recognition of multimedia data independent of rotation and deformation, and proposes its own solution for the pre-processing, storing and indexing of images in the form of vectors. Various methods of searching in image data are discussed. For the purpose of detecting commemorative coins, an approach based on edge detection and subsequent indexing was chosen. Directions for further development of the whole project are outlined, with a particular focus on effective pre-processing and indexing of large collections of data.

Radoslav Fasuga, Petr Kašpar, Martin Šurkovský, Martin Němec
Digitalization of Pixel Shift in 3CCD Camera-Monitored Images

This paper describes the phenomenon of "pixel shift", which sometimes occurs in images monitored by 3CCD cameras and others. If there is such a pixel shift in an image, the user suffers from a lack of precision when the image is to be measured or utilized for image recognition and other image understanding. In the conventional approach, a specialist has determined whether pixel shift occurs in the image and has reported a performance evaluation of the camera which monitors it. This paper proposes a numerical method to detect the occurrence of pixel shift in an image with a Lagrange interpolation function, calculates the level of pixel shift quantitatively, and then compares the calculated results with the specialist's determination for 20 sample image files. The comparison results have good scores and support the viability of the proposed approach to detecting pixel shift without help from a specialist. This paper calls the following sequence of procedures "digitalization of pixel shift": 1) reading the values of the R, G, and B signals from the target image, 2) selecting some sample values, 3) interpolating a suitable approximation function through the sample values, and 4) numerically calculating the phase difference of the R, G, B signals as the level of pixel shift.
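
Steps 2-4 might be realized roughly as below, with scipy's Lagrange interpolation as a stand-in; the sample positions and the shift-estimation rule (offset between channel extrema) are our assumptions, not the paper's exact computation.

```python
# Lagrange-interpolated channel curves and their phase offset (sketch;
# few sample points are used because Lagrange fits become unstable).
import numpy as np
from scipy.interpolate import lagrange

def channel_poly(signal: np.ndarray, xs: np.ndarray):
    """Steps 2-3: fit a Lagrange polynomial through sampled channel values."""
    return lagrange(xs, signal[xs])

def phase_shift(r: np.ndarray, g: np.ndarray, xs: np.ndarray) -> float:
    """Step 4: offset between the channels' extrema as a shift estimate."""
    grid = np.linspace(xs[0], xs[-1], 1000)
    return abs(grid[np.argmax(channel_poly(r, xs)(grid))]
               - grid[np.argmax(channel_poly(g, xs)(grid))])

xs = np.array([0, 4, 8, 12, 16])         # assumed sample positions
r = np.exp(-0.5 * (np.arange(20) - 8.0) ** 2)   # toy R channel profile
g = np.exp(-0.5 * (np.arange(20) - 9.0) ** 2)   # G shifted by ~1 pixel
print(phase_shift(r, g, xs))
```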

Yoshiro Imai, Hirokazu Kawasaki, Yukio Hori, Shin’ichi Masuda
Combination of Wavelet Transform and Morphological Filtering for Enhancement of Magnetic Resonance Images

A brain tumor is an abnormal mass of tissue with uncoordinated growth inside the skull which may invade and damage nerves and other healthy tissues. Limitations posed by image acquisition systems lead to inaccurate analysis of magnetic resonance images (MRI), even by skilled neurologists. This paper presents an improved framework for the enhancement of cerebral MRI features, incorporating enhancement approaches from both the frequency and spatial domains. The proposed method first applies de-noising and enhancement using a non-linear enhancement function in the wavelet domain, and then an iterative enhancement algorithm using a morphological filter to further enhance the edges. A good enhancement of the region of interest (ROI) is obtained with the proposed method, as portrayed by estimates of three quality metrics: the contrast improvement index (CII), the peak signal-to-noise ratio (PSNR) and the average signal-to-noise ratio (ASNR).

Akansha Srivastava, Alankrita, Abhishek Raj, Vikrant Bhateja
Real-Time Feedback and 3D Visualization of Unmanned Aerial Vehicle for Low-Cost Development

The motivation for the emergence of Unmanned Aerial Vehicles (UAVs) is to avoid risking human casualties and to lower costs relative to full-scale aerial vehicles. Visual aid is one of the important elements of a UAV that allows pilots to control it from the ground. Yet some experts are still using 2D graphics to visualize UAV movements, which limits the pilot to a single mode of expression, and much research on UAVs is ongoing regarding better visualization techniques, autonomous systems, GPS synchronization, telemetry precision and image processing. In this paper, we present an alternative methodology for the development of real-time feedback visualization from a UAV with 3D Graphical User Interface (GUI) control. It transmits feedback on UAV movements through a surveillance camera, by installing a tilt sensor on the UAV model, and receives the real-time First Person View (FPV) environment of the aircraft's pilot. It is aimed at low-cost development and comes with a 3D graphic environment; the interface uses the DirectX SDK as its graphics engine. With the 3D view, the pilot can more easily obtain feedback from the UAV, visualized as a 3D mesh of the UAV model.

Mohammad Sibghotulloh Ikhmatiar, Rathiah Hashim, Prima Adhi Yudistira, Tutut Herawan
Symmetry Based 2D Singular Value Decomposition for Face Recognition

Two-dimensional singular value decomposition (2DSVD) is a recently presented attempt to preserve the local nature of 2D face images and, at the same time, alleviate the computational complexity of standard singular value decomposition (1DSVD). Human face symmetry is also a profitable natural property of face images, allowing better feature extraction capability in face recognition. This paper introduces a new method for face recognition, coined symmetry-based two-dimensional singular value decomposition (S2DSVD), which relies on the strengths of both 2DSVD and human face symmetry. The proposed method offers two significant advantages over 2DSVD: it improves the stability of feature extraction and increases the valuable discriminative information, hence raising recognition accuracy. S2DSVD is compared to both 1DSVD and 2DSVD on two well-known databases. Experimental results show an improvement in recognition accuracy over 2DSVD and superiority to 1DSVD.
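
The symmetry ingredient can be pictured as decomposing each face into mirror-symmetric (even) and antisymmetric (odd) parts, as sketched below; how the paper actually combines these components with 2DSVD is not reproduced here.

```python
# Even/odd decomposition of a face about its vertical axis (our
# illustration of exploiting face symmetry, not the authors' pipeline).
import numpy as np

def symmetric_parts(face: np.ndarray):
    """Split a face into its mirror-symmetric and antisymmetric halves."""
    mirrored = face[:, ::-1]                    # flip left-right
    return (face + mirrored) / 2, (face - mirrored) / 2

face = np.arange(12.0).reshape(3, 4)
even, odd = symmetric_parts(face)
assert np.allclose(even + odd, face)            # lossless decomposition
```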

Falah E. Alsaqre, Saja Al-Rawi
The Role of the Feature Extraction Methods in Improving the Textural Model of the Hepatocellular Carcinoma, Based on Ultrasound Images

The non-invasive diagnosis of malignant tumors is a very important issue in current research. Our purpose is to elaborate computerized, texture-based methods for the automatic recognition of hepatocellular carcinoma (HCC), the most frequent malignant liver tumor, using only information from ultrasound images. We previously defined the textural model of HCC, consisting of the exhaustive set of textural features relevant for HCC characterization and their specific values for the HCC class. In this work, we improve the textural model and the classification process through dimensionality reduction techniques. Of the feature extraction methods, we implemented the most representative ones: Principal Component Analysis (PCA), Kernel PCA, Linear Discriminant Analysis (LDA), Generalized Discriminant Analysis (GDA), and combinations of these methods. We also assessed the combination of feature extraction techniques with feature selection techniques. All these methods were evaluated for distinguishing HCC from the cirrhotic liver parenchyma on which it evolves.

Delia Mitrea, Sergiu Nedevschi, Mihai Socaciu, Radu Badea

Information and Data Management

A New Approach Based on NμSMV Model to Query Semantic Graph

The language most frequently used to represent semantic graphs is RDF (the W3C standard for meta-modeling). The construction of semantic graphs is a source of numerous errors of interpretation, and the processing of large semantic graphs can be a limit to the use of semantics in modern information systems. The work presented in this paper is part of new research at the border between two areas: the semantic web and model checking. To this end, we developed a tool, RDF2NμSMV, which converts RDF graphs into the NμSMV language. This conversion aims at checking semantic graphs with the NμSMV model checker in order to verify the consistency of the data.

The data integration and sharing activities carried out in the framework of the Semantic Web lead to large knowledge databases that must be queried, analyzed, and exploited efficiently. Many representation languages of the Semantic Web, starting with RDF, are based on directed, labeled graphs, which can also be manipulated using graph algorithms and tools coming from other domains. In this paper, we propose an analysis approach for RDF graphs that reuses the verification technology developed for concurrent systems. To this purpose, we define a translation from the SPARQL query language into temporal logic queries, a general-purpose graph manipulation language implemented in the ScaleSem verification toolbox. This translation makes it possible to extend the expressive power of SPARQL naturally by adding temporal logic formulas characterizing sequences, trees, or general sub-graphs of the RDF graph. Our approach exhibits performance comparable to that of dedicated SPARQL query evaluation engines, as illustrated by experiments on large RDF graphs.

Mahdi Gueffaz, Sylvain Rampacek, Christophe Nicolle
A Public Health Information System Design

The integration of modern information technologies into health care has resulted in the development of public health care information systems for the purpose of aiding the medical community by providing for less error-prone diagnosis and treatment of diseases. The architecture presented in this paper is an evidence-based health care information infrastructure which collects considerable amounts of reliable, evidence-based data over time from various sources, and which classifies and interprets the data and makes it readily available and accessible to the health care provider before patient consultations. Among many other goals, the proposed system's key objective and contribution is gathering Patients' Ancillary Data (PAD) and incorporating this information into the diagnosis and treatment workflow. The data from medical tests is enriched and complemented by patient ancillary data such as hereditary, residential, travel, custom, meteorological, biographical and demographical data. Automatic provisioning of PAD, another goal of the proposal, helps to diminish problems and misdiagnosis caused by language barriers and disorders that frequently prevent the acquisition of patient data. This attribute of the system assists physicians in shortening the time needed for diagnosis and consultations, thereby dramatically improving the quality and quantity of physical examinations.

Yasar Guneri Sahin, Ufuk Celikkan
Comparison between Neural Networks against Decision Tree in Improving Prediction Accuracy for Diabetes Mellitus

This study compares the prediction accuracy of the multilayer perceptron neural network against tree-based algorithms, in particular the ID3 and J48 algorithms, on the Pima Indian diabetes mellitus data set. The classification experiment is performed using algorithms in WEKA to determine the class, diabetes or non-diabetes, for a data set of 768 patients. Results showed that a pruned J48 tree performed with higher accuracy, 89.3%, compared to 81.9% for the multilayer perceptron. On further removal of the "number of times pregnant" attribute, the prediction accuracy of the pruned J48 tree improved to 89.7%.
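
A comparable experiment can be sketched with scikit-learn stand-ins: J48 is WEKA's C4.5 implementation, approximated here by DecisionTreeClassifier. The OpenML dataset name and the hyperparameters are assumptions, and the resulting accuracies will not match the paper's WEKA numbers.

```python
# Tree vs. multilayer perceptron on the Pima diabetes data (sketch with
# scikit-learn approximations of the paper's WEKA setup).
from sklearn.datasets import fetch_openml
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# assumed OpenML name for the Pima Indian diabetes data set
X, y = fetch_openml("diabetes", version=1, return_X_y=True, as_frame=False)

for name, clf in [("pruned tree", DecisionTreeClassifier(max_depth=5)),
                  ("mlp", MLPClassifier(max_iter=500))]:
    acc = cross_val_score(clf, X, y, cv=10).mean()  # 10-fold CV accuracy
    print(f"{name}: {acc:.3f}")
```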

Aliza Ahmad, Aida Mustapha, Eliza Dianna Zahadi, Norhayati Masah, Nur Yasmin Yahaya
Design and Development of Online Instructional Consultation for Higher Education in Malaysia

Nowadays, the use of communication technology makes virtual discussion between students and lecturers easier and more effective. But some aspects need attention, such as documentation, record logs and the ability to review the records of these virtual interactions, as most current virtual communication software focuses on the communication itself and less on what happens before and after it. This paper focuses on the design and development of an Online Instructional Consultation (OICON) system for facilitating student-lecturer consultation in the higher-education mentor-mentee system in Malaysia. The researchers try to address typical consultation limitations, such as distance constraints, time constraints and the lack of effective management of consultation records, using OICON. The general structure and modules of the OICON system, its multimedia communication components and its communication server are illustrated, and the potential benefits of the OICON system are presented.

Ang Ling Weay, Abd Hadi Bin Abdul Razak
On the Multi-Agent Modelling of Complex Knowledge Society for Business and Management System Using Distributed Agencies

This work is motivated by the need for a model that addresses the study of the knowledge society for business and management in situations where conventional analysis is insufficient to describe the intricacies of realistic social phenomena and social actors. We use a distributed-agency methodology that draws on all available computational techniques and interdisciplinary theories. We use data mining and a neuro-fuzzy system as an approach to discover and assign rules to agents that represent real-world companies. The case study is based on several companies in the region of Baja California, México, and on the policies they implement to achieve greater competitiveness based on the development of knowledge.

Eduardo Ahumada-Tello, Manuel Castañón-Puga, Juan–Ramón Castro, Eugenio D. Suarez, Bogart–Yail Márquez, Carelia Gaxiola–Pacheco, Dora–Luz Flores
Backmatter
Metadata
Title
Digital Information Processing and Communications
Edited by
Vaclav Snasel
Jan Platos
Eyas El-Qawasmeh
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-22389-1
Print ISBN
978-3-642-22388-4
DOI
https://doi.org/10.1007/978-3-642-22389-1