
About this Book

Innovations in Computing Sciences and Software Engineering includes a set of rigorously reviewed world-class manuscripts addressing and detailing state-of-the-art research projects in the areas of Computer Science, Software Engineering, Computer Engineering, and Systems Engineering and Sciences.

Topics Covered:

•Image and Pattern Recognition: Compression, Image processing, Signal Processing Architectures, Signal Processing for Communication, Signal Processing Implementation, Speech Compression, and Video Coding Architectures.

•Languages and Systems: Algorithms, Databases, Embedded Systems and Applications, File Systems and I/O, Geographical Information Systems, Kernel and OS Structures, Knowledge Based Systems, Modeling and Simulation, Object Based Software Engineering, Programming Languages, and Programming Models and tools.

•Parallel Processing: Distributed Scheduling, Multiprocessing, Real-time Systems, Simulation Modeling and Development, and Web Applications.

•Signal and Image Processing: Content Based Video Retrieval, Character Recognition, Incremental Learning for Speech Recognition, Signal Processing Theory and Methods, and Vision-based Monitoring Systems.

•Software and Systems: Activity-Based Software Estimation, Algorithms, Genetic Algorithms, Information Systems Security, Programming Languages, Software Protection Techniques, and User Interfaces.

•Distributed Processing: Asynchronous Message Passing System, Heterogeneous Software Environments, Mobile Ad Hoc Networks, Resource Allocation, and Sensor Networks.

•New trends in computing: Computers for People of Special Needs, Fuzzy Inference, Human Computer Interaction, Incremental Learning, Internet-based Computing Models, Machine Intelligence, Natural Language.



Recursive Projection Profiling for Text-Image Separation

This paper presents an efficient and very simple method for separating text characters from graphical images in a given document image, based on Recursive Projection Profiling (RPP) of the document image. The algorithm pushes the projection profiling method [4] [6] to its limits, extracting nearly all the information the method can offer. The projection profile reveals the empty space along the horizontal and vertical axes, exposing the gaps between characters and images. The algorithm proved efficient, accurate and simple in nature. Though some exceptional cases were encountered owing to the drawbacks of projection profiling, they were handled with simple heuristics, resulting in a very efficient method for text-image separation.
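
Projection profiling itself is straightforward; the sketch below (our illustration, not the authors' code) computes the horizontal and vertical profiles of a binary document image and locates the empty gaps that a recursive step would split on.

```python
# One level of projection profiling: count foreground pixels along each
# row and column of a binary image, then report the empty spans that
# suggest segment boundaries between characters/images.

def projection_profiles(image):
    """image: list of rows, each a list of 0/1 pixels (1 = ink)."""
    horizontal = [sum(row) for row in image]          # per-row ink count
    vertical = [sum(col) for col in zip(*image)]      # per-column ink count
    return horizontal, vertical

def gaps(profile):
    """Return (start, end) index ranges where the profile is empty."""
    spans, start = [], None
    for i, value in enumerate(profile):
        if value == 0 and start is None:
            start = i
        elif value != 0 and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(profile)))
    return spans

# Two "characters" separated by an empty column.
img = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
]
h, v = projection_profiles(img)
print(h)        # ink per row
print(v)        # ink per column
print(gaps(v))  # the empty column between the two segments
```

In the full RPP scheme, each segment found this way would itself be profiled again, alternating axes, until no further gaps appear.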

Shivsubramani Krishnamoorthy, R. Loganathan, K P Soman

Risk in the Clouds?: Security Issues Facing Government Use of Cloud Computing

Cloud computing is poised to become one of the most important and fundamental shifts in how computing is consumed and used. Forecasts show that government will play a lead role in adopting cloud computing – for data storage, applications, and processing power, as IT executives seek to maximize their returns on limited procurement budgets in these challenging economic times. After an overview of the cloud computing concept, this article explores the security issues facing public sector use of cloud computing and looks to the risk and benefits of shifting to cloud-based models. It concludes with an analysis of the challenges that lie ahead for government use of cloud resources.

David C. Wyld

Open Source Software (OSS) Adoption Framework for Local Environment and its Comparison

According to the Business Software Alliance (BSA), Pakistan is ranked among the top 10 countries with the highest piracy rates [1]. To overcome the problem of piracy, local Information Technology (IT) companies are willing to migrate towards Open Source Software (OSS). For this reason, the need for a framework/model for OSS adoption has become more pronounced.

Research on the adoption of IT innovations has commonly drawn on innovation adoption theory. However, with time some weaknesses have been identified in the theory, and it has been realized that the factors affecting the adoption of OSS vary from country to country. The objective of this research is to provide a framework for OSS adoption for the local environment and then compare it with existing frameworks developed for OSS adoption in other, more advanced countries. This paper proposes a framework for understanding the relevant strategic issues, and it also highlights problems, restrictions and other factors that prevent organizations from adopting OSS. A factor-based comparison of the proposed framework with the existing frameworks is provided in this research.

U. Laila, S. F. A. Bukhari

Ubiquitous Data Management in a Personal Information Environment

This paper presents novel research on the Personal Information Environment (PIE), a relatively new field to be explored. A PIE is a self-managing pervasive environment. It contains an individual's personal pervasive information associated with the user's related or non-related contextual environments. Contexts are vitally important because they control, influence and affect everything within them by dominating their pervasive content. This paper shows in depth the realization of a Personal Information Environment, which deals with a user's devices, which are to be spontaneous and readily self-manageable on an autonomic basis. The paper presents an actual implementation of pervasive data management for a PIE user, covering the appending and updating of PIE data from the last device used to other PIE devices for further processing and storage needs. Data recharging is utilized to transmit and receive data among PIE devices.

Atif Farid Mohammad

Semantics for the Asynchronous Communication in LIPS, a Language for Implementing Parallel/distributed Systems

This paper presents the operational semantics for the message passing system of a distributed language called LIPS. The message passing system is based on a virtual machine called AMPS (Asynchronous Message Passing System), designed around a data structure that is portable and can go with any distributed language. The operational semantics specifying the behaviour of this system uses structural operational semantics to reveal the intermediate steps, which helps with the analysis of its behaviour. We are able to combine this with the big-step semantics that specifies the computational part of the language to produce a cohesive semantics for the language as a whole.
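
AMPS internals are not given here; the following is only a generic sketch of the asynchronous message passing idea the semantics describes — a send never blocks, and a receive drains a per-node queue (node names and the message format are our placeholders).

```python
# Generic asynchronous message passing: non-blocking sends into
# per-destination FIFO queues, receives that return None when empty.
from collections import deque

class AsyncMessagePassing:
    def __init__(self):
        self.queues = {}                      # node name -> pending messages

    def send(self, dest, message):
        """Non-blocking send: enqueue and return immediately."""
        self.queues.setdefault(dest, deque()).append(message)

    def receive(self, node):
        """Return the oldest pending message, or None if none is waiting."""
        queue = self.queues.get(node)
        return queue.popleft() if queue else None

amps = AsyncMessagePassing()
amps.send("worker", {"op": "add", "args": (2, 3)})
print(amps.receive("worker"))   # → {'op': 'add', 'args': (2, 3)}
print(amps.receive("worker"))   # → None
```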

Amala VijayaSelvi Rajan, Arumugam Siri Bavan, Geetha Abeysinghe

Separation of Concerns in Teaching Software Engineering

Software Engineering is one of the recently evolving subjects in research and education. Instructors and books addressing this field of study lack a common ground of what subjects should be covered in teaching introductory or advanced courses in this area. In this paper, a proposed ontology for software engineering education is formulated. This ontology divides software engineering projects and study into different perspectives: projects, products, people, process and tools. Further or deeper levels of abstraction of those fields can be described at levels that depend on the type or level of the course to be taught.

The goal of this separation of concerns is to organize the software engineering project into smaller manageable parts that are easy to understand and identify. It should reduce complexity and improve clarity. This concept is at the core of software engineering. These concerns overlap yet remain distinct, and this research tries to address both sides. Concepts such as ontology, abstraction, modeling and views or separation of concerns (which we attempt here) always include some sort of abstraction or focus. The goal is to draw a better image or understanding of the problem. In abstraction or modeling, for example, when we model the students of a university as a class, we list only the relevant properties; many student properties are ignored and not listed because they are irrelevant to the domain. The weight, height, and color of a student are examples of properties that would not be included in the class. In the same manner, the goal of the separation of concerns in software engineering projects is to improve understandability and consider only relevant properties. As a further goal, we hope that the separation of concerns will help software engineering students better understand the large number of modeling and terminology concepts.

Izzat M. Alsmadi, Mahmoud Dieri

Student Model Based on Flexible Fuzzy Inference

In this paper we present a design of a student model based on generic fuzzy inference design. The membership functions and the rules of the fuzzy inference can be fine-tuned by the teacher during the learning process (run time) to suit the pedagogical needs, creating a more flexible environment. The design is used to represent the learner’s performance. In order to test the human computer interaction of the system, a prototype of the system was developed with limited teaching materials. The interaction with the first prototype of the system demonstrated the effectiveness of the decision making using fuzzy inference.
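
As an illustration of the kind of tunable inference the abstract describes, the sketch below is our own example, not the authors' system: the membership shapes, the two rules, and the 0-100 score scale are all assumptions. It shows triangular membership functions whose parameters a teacher could replace at run time, feeding a simple weighted-average inference over a student's score.

```python
# Triangular membership functions with run-time replaceable parameters,
# plus a two-rule weighted-average inference over a student's score.

def triangular(a, b, c):
    """Return a membership function with feet a, c and peak b."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu

# Initial definitions; the teacher may swap these during the course.
low = triangular(0, 0, 60)       # "low performance"
high = triangular(40, 100, 100)  # "high performance"

def assess(score):
    """Rules: low -> remedial (0.0), high -> advance (1.0); weighted average."""
    w_low, w_high = low(score), high(score)
    return (w_low * 0.0 + w_high * 1.0) / (w_low + w_high)

print(round(assess(50), 3))
```

Re-tuning the environment then amounts to rebinding `low` and `high` to new `triangular(...)` instances while the system is running.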

Dawod Kseibat, Ali Mansour, Osei Adjei, Paul Phillips

PlanGraph: An Agent-Based Computational Model for Handling Vagueness in Human-GIS Communication of Spatial Concepts

A fundamental challenge in developing a usable conversational interface for Geographic Information Systems (GIS) is effective communication of spatial concepts in natural language, which are commonly vague in meaning. This paper presents the design of an agent-based computational model, PlanGraph. This model helps the GIS to keep track of the dynamic human-GIS communication context and enables the GIS to understand the meaning of a vague spatial concept under constraints of the dynamic context.

Hongmei Wang

Risk-Based Neuro-Grid Architecture for Multimodal Biometrics

Recent research indicates that multimodal biometrics is the way forward for a highly reliable adoption of biometric identification systems in various applications, such as banks, businesses, government and even home environments. However, such systems would require large distributed datasets with multiple computational realms spanning organisational boundaries and individual privacies.

In this paper, we propose a novel approach and architecture for multimodal biometrics that leverages the emerging grid information services and harnesses the capabilities of neural network as well. We describe how such a neuro-grid architecture is modelled with the prime objective of overcoming the barriers of biometric risks and privacy issues through flexible and adaptable multimodal biometric fusion schemes. On one hand, the model uses grid services to promote and simplify the shared and distributed resource management of multimodal biometrics, and on the other hand, it adopts a feed-forward neural network to provide reliability and risk-based flexibility in feature extraction and multimodal fusion, that are warranted for different real-life applications. With individual autonomy, scalability, risk-based deployment and interoperability serving the backbone of the neuro-grid information service, our novel architecture would deliver seamless and robust access to geographically distributed biometric data centres that cater to the current and future diverse multimodal requirements of various day-to-day biometric transactions.

Sitalakshmi Venkataraman, Siddhivinayak Kulkarni

A SQL-Database Based Meta-CASE System and its Query Subsystem

Meta-CASE systems simplify the creation of CASE (Computer Aided System Engineering) systems. In this paper, we present a meta-CASE system that provides a web-based user interface and uses an object-relational database system (ORDBMS) as its basis. The use of an ORDBMS allows us to integrate the different parts of the system and simplifies the creation of meta-CASE and CASE systems. ORDBMSs provide a powerful query mechanism. The proposed system allows developers to use queries to evaluate and gradually improve artifacts and to calculate the values of software measures. We illustrate the use of the system with the SimpleM modeling language and discuss the use of SQL in the context of queries about artifacts. We have created a prototype of the meta-CASE system using the PostgreSQL™ ORDBMS and the PHP scripting language.

Erki Eessaar, Rünno Sgirka

An Intelligent Control System Based on Non-Invasive Man Machine Interaction

This paper presents further development of an intelligent multi-agent based e-health care system for people with movement disabilities. The research results extend the multi-layered model of this system by integrating fuzzy neural control of the speed of two wheelchair-type robots working in real time to provide movement support for disabled individuals. An approach for filtering skin conductance (SC) signals using Nadaraya-Watson kernel regression smoothing for emotion recognition of disabled individuals is described and implemented in the system using the R software tool. Unsupervised clustering by self-organizing maps (SOM) of a data sample of physiological parameters extracted from SC signals is proposed in order to reduce teacher noise as well as to increase the speed and accuracy of the learning process for multi-layer perceptron (MLP) training.
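
Nadaraya-Watson smoothing, mentioned above, is a standard technique; a minimal version follows (our sketch, with an illustrative Gaussian kernel and bandwidth rather than the paper's settings or its R implementation).

```python
# Nadaraya-Watson kernel regression: the estimate at x is a
# kernel-weighted average of the observed values, which smooths
# high-frequency noise out of a signal such as skin conductance.
import math

def nadaraya_watson(xs, ys, x, bandwidth=1.0):
    """Estimate y at x as a Gaussian-kernel-weighted average of ys."""
    weights = [math.exp(-((x - xi) / bandwidth) ** 2 / 2) for xi in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

# Noisy samples of a flat signal around 2.0.
xs = [0, 1, 2, 3, 4]
ys = [2.1, 1.9, 2.2, 1.8, 2.0]
smoothed = [nadaraya_watson(xs, ys, x, bandwidth=2.0) for x in xs]
print([round(v, 2) for v in smoothed])
```

A larger bandwidth averages over more neighbours and smooths more aggressively; choosing it is the usual bias-variance trade-off.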

Darius Drungilas, Antanas Andrius Bielskis, Vitalij Denisov

A UML Profile for Developing Databases that Conform to the Third Manifesto

The Third Manifesto (TTM) presents the principles of a relational database language that is free of the deficiencies and ambiguities of SQL. There are database management systems that are created according to TTM. Developers need tools that support the development of databases using these database management systems. UML is a widely used visual modeling language. It provides a built-in extension mechanism that makes it possible to extend UML by creating profiles. In this paper, we introduce a UML profile for designing databases that correspond to the rules of TTM. We created the first version of the profile by translating existing profiles for SQL database design. After that, we extended and improved the profile. We implemented the profile using the UML CASE system StarUML™. We present an example of using the new profile. In addition, we describe problems that occurred during the profile development.

Erki Eessaar

Investigation and Implementation of T-DMB Protocol in NCTUns Simulator

Investigating the T-DMB protocol required the creation of a simulation model. The NCTUns simulator, which is open-source software and allows the addition of new protocols, was chosen for the implementation. This is one of the first steps of the research process. Here we give a brief overview of the T-DMB (DAB) system, describe the proposed simulation model, and discuss problems encountered during the work.

Tatiana Zuyeva, Adnane Cabani, Joseph Mouzna

Empirical Analysis of Case-Editing Approaches for Numeric Prediction

One important aspect of Case-Based Reasoning (CBR) is Case Selection or Editing – selection for inclusion (or removal) of cases from a case base. This can be motivated either by space considerations or quality considerations. One of the advantages of CBR is that it is equally useful for boolean, nominal, ordinal, and numeric prediction tasks. However, many case selection research efforts have focused on domains with nominal or boolean predictions. Most case selection methods have relied on such problem structure. In this paper, we present details of a systematic sequence of experiments with variations on CBR case selection. In this project, the emphasis has been on case quality – an attempt to filter out cases that may be noisy or idiosyncratic – that are not good for future prediction. Our results indicate that Case Selection can significantly increase the percentage of correct predictions at the expense of an increased risk of poor predictions in less common cases.
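
One simple reading of quality-oriented case editing for numeric prediction is sketched below. This is our illustration, not the paper's method: the 1-D feature space, the median-of-neighbours estimate, and the tolerance threshold are all assumptions, chosen only to show how a noisy case can be filtered out of a case base.

```python
# Quality-oriented case editing: drop a case when its value disagrees
# too strongly with a robust estimate from its nearest neighbours.

def edit_case_base(cases, k=3, tolerance=1.0):
    """cases: list of (feature, value) pairs; returns the kept cases."""
    kept = []
    for i, (x, y) in enumerate(cases):
        others = cases[:i] + cases[i + 1:]
        # k nearest neighbours by feature distance (1-D for brevity).
        neighbours = sorted(others, key=lambda c: abs(c[0] - x))[:k]
        # Median of neighbour values: robust to a single noisy neighbour.
        estimate = sorted(n[1] for n in neighbours)[k // 2]
        if abs(y - estimate) <= tolerance:   # keep only plausible cases
            kept.append((x, y))
    return kept

# A roughly flat signal around 5.0 with one noisy case at x = 2.
cases = [(0, 5.1), (1, 4.9), (2, 9.0), (3, 5.0), (4, 5.2)]
print(edit_case_base(cases))
```

The median (rather than the mean) matters here: a mean estimate would let the noisy case contaminate its neighbours' estimates and cause good cases to be dropped too.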

Michael A. Redmond, Timothy Highley

Towards a Transcription System of Sign Language for 3D Virtual Agents

Accessibility is a growing concern in computer science. Since virtual information is mostly presented visually, it may seem that access for deaf people is not an issue. However, for prelingually deaf individuals, those who have been deaf since before acquiring and formally learning a language, written information is often less accessible than information presented in signing. Further, for this community, signing is the language of choice, and reading text in a spoken language is akin to using a foreign language. Sign language uses gestures and facial expressions and is widely used by deaf communities. To enable efficient production of signed content in virtual environments, it is necessary to make written records of signs. Transcription systems have been developed to describe sign languages in written form, but these systems have limitations. Since they were not originally designed with computer animation in mind, the recognition and reproduction of signs in these systems is, in general, an easy task only for those who know the system deeply. The aim of this work is to develop a transcription system for providing signed content in virtual environments. To animate a virtual avatar, a transcription system requires sufficiently explicit information, such as movement speed, sign concatenation, the sequence of each hold and movement, and facial expressions, so that articulation comes close to reality. Although many important studies of sign languages have been published, the transcription problem remains a challenge. Thus, a notation to describe, store and play signed content in virtual environments offers a multidisciplinary study and research tool, which may help linguistic studies to understand the structure and grammar of sign languages.

Wanessa Machado do Amaral, José Mario De Martino

Unbiased Statistics of a Constraint Satisfaction Problem – a Controlled-Bias Generator

We show that estimating the complexity (mean and distribution) of the instances of a fixed size Constraint Satisfaction Problem (CSP) can be very hard. We deal with the main two aspects of the problem: defining a measure of complexity and generating random unbiased instances. For the first problem, we rely on a general framework and a measure of complexity we presented at CISSE08. For the generation problem, we restrict our analysis to the Sudoku example and we provide a solution that also explains why it is so difficult.

Denis Berthier

Factors that Influence the Productivity of Software Developers in a Developer View

Measuring and improving the productivity of software developers is one of the greatest challenges faced by software development companies. Therefore, aiming to help these companies identify possible causes that interfere with the productivity of their teams, we present in this paper a list of 32 factors, extracted from the literature, that influence the productivity of developers. To obtain a ranking of these factors, we administered a questionnaire to developers. In this work, we present the results: the factors that have the greatest positive and negative influence on productivity, the factors with no influence, and the most important factors and what influences them. Finally, we present a comparison of our results with those obtained from the literature.

Edgy Paiva, Danielly Barbosa, Roberto Lima, Adriano Albuquerque

Algorithms for Maintaining a Consistent Knowledge Base in Distributed Multiagent Environments

In this paper, we design algorithms for a system that allows Semantic Web agents to reason within what has come to be known as the Web of Trust. We integrate reasoning about belief and trust, so agents can reason about information from different sources and deal with contradictions. Software agents interact to support users who publish, share and search for documents in a distributed repository. Each agent maintains an individualized topic taxonomy for the user it represents, updating it with information obtained from other agents. Additionally, an agent maintains and updates trust relationships with other agents.

When new information leads to a contradiction, the agent performs a belief revision process informed by a degree of belief in a statement and the degree of trust an agent has for the information source.

The system described has several key characteristics. First, we define a formal language with well-defined semantics within which an agent can express the relevant conditions of belief and trust, and a set of inference rules. The language uses symbolic labels for belief and trust intervals to facilitate expressing inexact statements about subjective epistemic states. Second, an agent’s belief set at a given point in time is modeled using a Dynamic Reasoning System (DRS). This allows the agent’s knowledge acquisition and belief revision processes to be expressed as activities that take place in time. Third, we explicitly describe reasoning processes, creating algorithms for acquiring new information and for belief revision.

Stanislav Ustymenko, Daniel G. Schwartz

Formal Specifications for a Document Management Assistant

The concept of a dynamic reasoning system (DRS) provides a general framework for modeling the reasoning processes of a mechanical agent, to the extent that those processes follow the rules of some well-defined logic. It amounts to an adaptation of the classical notion of a formal logical system that explicitly portrays reasoning as an activity that takes place in time. Inference rule applications occur in discrete time steps, and, at any given point in time, the derivation path comprises the agent's belief set as of that time. Such systems may harbor inconsistencies, but these do not become known to the agent until a contradictory assertion appears in the derivation path. When this occurs, one invokes a Doyle-like reason maintenance process to remove the inconsistency, in effect disbelieving some assertions that were formerly believed. The notion of a DRS also includes an extralogical control mechanism that guides the reasoning process; this reflects the agent's … and is context dependent. This paper lays out the formal definition of a DRS and illustrates it with the case of ordinary first-order predicate calculus, together with a control mechanism suitable for reasoning about taxonomic classifications for documents in a library. As such, this particular DRS comprises formal specifications for an agent that serves as a document management assistant.

Daniel G. Schwartz

Towards a Spatial-Temporal Processing Model

This paper discusses an architecture for creating systems that need to express complex models of real-world entities, especially those that exist in hierarchical and composite structures. These models need to be persisted, typically in a database system. The models also have a strong orthogonal requirement to support representation and reasoning over time.

Jonathan B. Lori

Structure, Context and Replication in a Spatial-Temporal Architecture

This paper examines some general aspects of partitioning software architecture and the structuring of complex computing systems. It relates these topics in terms of the continued development of a generalized processing model for spatial-temporal processing. Data partitioning across several copies of a generic processing stack is used to implement horizontal scaling by reducing search space and enabling parallel processing. Temporal partitioning is used to provide fast response to certain types of queries and in quickly establishing initial context when using the system.

Jonathan Lori

Service Oriented E-Government

Driven by various directives and by growing demands for citizen orientation, improved service quality, effectiveness, efficiency, transparency, cost reduction and reduced administrative burden, public administrations increasingly apply management tools and IT for continual service development and sustainable citizen satisfaction. Accordingly, public administrations implement ever more standards-based management systems, such as ISO 9001 for quality, ISO 14001 for environmental management, or others. In this situation we used, in different case studies, a holistic administration management model, adapted to the administration, as a basis for e-government, in order to analyze stakeholder requirements and to integrate, harmonize and optimize services, processes, data, directives, concepts and forms. In these case studies the developed and consistently implemented holistic administration management model has, over several years, steadily promoted service effectiveness, citizen satisfaction, efficiency, cost reduction, shorter initial training periods for new collaborators, employee involvement for sustainable citizen-oriented service improvement, and organizational development.

Margareth Stoll, Dr. Dietmar Laner

Fuzzy-rule-based Adaptive Resource Control for Information Sharing in P2P Networks

With more and more peer-to-peer (P2P) technologies available for online collaboration and information sharing, people can launch more and more collaborative work in online social networks with friends, colleagues, and even strangers. Without face-to-face interaction, the question of who can be trusted to share information with becomes a major concern for users of these online social networks. This paper introduces an adaptive control service using fuzzy logic for preference definition in P2P information sharing control, and designs a novel decision-making mechanism, using formal fuzzy rules and reasoning mechanisms, that adjusts P2P information sharing status according to individual users' preferences. Applying this adaptive control service to different information sharing environments shows that it can provide convenient and accurate P2P information sharing control for individual users in P2P networks.

Zhengping Wu, Hao Wu

Challenges in Web Information Retrieval

The major challenge in information access is the richness of the data available for retrieval, which has driven the evolution of principled approaches and strategies for searching. Search has become the leading paradigm for finding information on the World Wide Web. In building a successful web retrieval search engine model, a number of challenges arise at different levels, where techniques such as Usenet analysis and support vector machines are employed to significant effect. The present investigation explores a number of problems, identifies their levels, and relates them to finding information on the web. This paper examines these issues by applying different methods, such as web graph analysis, the retrieval and analysis of newsgroup postings, and statistical methods for inferring meaning in text. We also discuss how one can gain control over the vast amounts of data on the web by addressing the problems in innovative ways that can markedly improve on the standard. The proposed model thus assists users in finding the data they need. The developed information retrieval model provides access to information available in various modes and media formats, and facilitates users in retrieving relevant and comprehensive information efficiently and effectively as per their requirements. This paper also discusses the parameters and factors responsible for efficient searching; these parameters can be distinguished as more or less important based on the available inputs, and the important parameters can be addressed in future extensions or developments of search engines.

Monika Arora, Uma Kanjilal, Dinesh Varshney

An Invisible Text Watermarking Algorithm using Image Watermark

Copyright protection of digital content is very necessary in today's digital world, with efficient communication media such as the internet. Text is the dominant part of internet content, and there are very limited techniques available for text protection. This paper presents a novel algorithm for the protection of plain text, which embeds the logo image of the copyright owner in the text; this logo can later be extracted from the text to prove ownership. The algorithm is robust against content-preserving modifications and, at the same time, is capable of detecting malicious tampering. Experimental results demonstrate the effectiveness of the algorithm against tampering attacks by calculating normalized Hamming distances. The results are also compared with a recent work in this domain.
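
The normalized Hamming distance used for evaluation is a standard metric; a minimal version of it (our sketch of the standard definition, with made-up watermark bits) is:

```python
# Normalized Hamming distance: the fraction of positions at which the
# embedded and extracted watermark bit strings differ (0.0 = intact).

def normalized_hamming(bits_a, bits_b):
    """Fraction of differing positions between two equal-length bit lists."""
    if len(bits_a) != len(bits_b):
        raise ValueError("watermarks must have equal length")
    differing = sum(a != b for a, b in zip(bits_a, bits_b))
    return differing / len(bits_a)

embedded  = [1, 0, 1, 1, 0, 0, 1, 0]
extracted = [1, 0, 0, 1, 0, 0, 1, 1]   # two bits flipped by tampering
print(normalized_hamming(embedded, extracted))  # → 0.25
```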

Zunera Jalil, Anwar M. Mirza

A Framework for RFID Survivability Requirement Analysis and Specification

Many industries are becoming dependent on Radio Frequency Identification (RFID) technology for inventory management and asset tracking. The data collected about tagged objects through RFID is used in various high-level business operations. The RFID system should hence be highly available, reliable, dependable, and secure. In addition, the system should be able to resist attacks and perform recovery in case of security incidents. Together these requirements give rise to the notion of a survivable RFID system. The main goal of this paper is to analyze and specify the requirements for an RFID system to become survivable. These requirements, if utilized, can assist the system in resisting devastating attacks and recovering quickly from damage. This paper proposes techniques and approaches for RFID survivability requirements analysis and specification. From the perspective of system acquisition and engineering, survivability requirements analysis is the important first step in survivability specification, compliance formulation, and proof verification.

Yanjun Zuo, Malvika Pimple, Suhas Lande

The State of Knowledge Management in Czech Companies

In the globalised world, the Czech economy faces many challenges brought by the processes of integration. The crucial factors for companies that want to succeed in the global competition are knowledge and the ability to use that knowledge in the best possible way. The purpose of this work is to present the results of a questionnaire survey on the topic "Research of the state of knowledge management in companies in the Czech Republic", carried out in spring 2009 in cooperation between the University of Hradec Králové and the consulting company Per Partes Consulting, under the patronage of the European Union.

P. Maresova, M. Hedvicakova

A Suitable Software Process Improvement Model for the UK Healthcare Industry

Over recent years, the UK Healthcare sector has been the prime focus of many reports and industrial surveys, particularly in the field of software development and management issues. This signals growing concern regarding quality issues in the Healthcare domain. In response, a new tailored Healthcare Software Process Improvement (SPI) model is proposed, which takes into consideration both signals from the industry and insights from the literature.

This paper discusses and outlines the development of a new software process assessment and improvement model based on the ISO/IEC 15504-5 model. The proposed model will provide the Healthcare sector with new domain-specific process practices that focus on addressing current development concerns, standards compliance and quality dimension requirements for this domain.

Tien D. Nguyen, Hong Guo, Raouf N.G. Naguib

Exploring User Acceptance of FOSS: The Role of the Age of the Users

The free and open source software (FOSS) movement arose essentially as an answer to developments in the software market characterized by the closing of source code. Furthermore, some FOSS characteristics, such as (1) the momentum of the movement and (2) the attractiveness of voluntary and cooperative work, have increased users' interest in free software. Traditionally, research in FOSS has focused on identifying individual personal motives for participating in the development of a FOSS project, analyzing specific FOSS solutions, or the FOSS movement itself. Nevertheless, the advantages of FOSS for users and the effect of demographic dimensions on user acceptance of FOSS have been two research topics with little attention. Specifically, this paper focuses on the influence of users' age on FOSS acceptance. Based on the literature, users' age is an essential demographic dimension for explaining Information Systems acceptance. With this purpose, the authors have developed a research model based on the Technology Acceptance Model (TAM).

M. Dolores Gallego, Salvador Bueno

GFS Tuning Algorithm Using Fuzzimetric Arcs

Evolutionary learning and tuning mechanisms for fuzzy systems are a main concern for researchers in the field. The final optimized performance of a fuzzy system depends on its ability to find the best optimized rule set(s) as well as optimized fuzzy variable definitions. This paper proposes a mechanism for the selection and optimization of fuzzy variables, termed “Fuzzimetric Arcs”, and then discusses how this mechanism can become a standard for selecting and optimizing fuzzy set shapes to tune the performance of a genetic fuzzy system (GFS). A genetic algorithm is the technique used to alter/modify the initial shape of the fuzzy sets through its two main operators, crossover and mutation. Optimization of the rule set(s) depends mainly on the measurement of a fitness factor and the level of deviation from it.
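To illustrate how crossover and mutation can act on fuzzy set shapes, the sketch below applies blend crossover and single-vertex mutation to triangular membership functions. The encoding, operator variants and rates are illustrative assumptions (non-degenerate triangles with distinct vertices), not the paper's Fuzzimetric Arcs mechanism.

```python
import random

def membership(tri, x):
    """Triangular membership function with vertices a < b < c."""
    a, b, c = tri
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def crossover(p1, p2, alpha=0.5):
    """Blend crossover: each child vertex is a convex combination of the parents'."""
    child = [alpha * x + (1 - alpha) * y for x, y in zip(p1, p2)]
    return tuple(sorted(child))  # keep a <= b <= c

def mutate(tri, rng, scale=0.1):
    """Perturb one randomly chosen vertex, preserving vertex ordering."""
    v = list(tri)
    v[rng.randrange(3)] += rng.uniform(-scale, scale)
    return tuple(sorted(v))
```

A GA loop would then score each candidate triangle against a fitness factor (e.g. rule-base error on training data) and select survivors before applying these operators.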

Issam Kouatli

Multi-step EMG Classification Algorithm for Human-Computer Interaction

A three-electrode human-computer interaction system based on digital processing of the electromyogram (EMG) signal is presented. This system can effectively help individuals paralyzed from the neck down to interact with computers, or communicate with people through computers, using point-and-click graphic interfaces. The three electrodes are placed on the right frontalis, the left temporalis and the right temporalis muscles on the head. The signal processing algorithm translates the EMG signals recorded during five kinds of facial movements (left jaw clenching, right jaw clenching, eyebrows up, eyebrows down, and simultaneous left and right jaw clenching) into five corresponding types of cursor actions (left, right, up, down and left-click) to provide basic mouse control. The classification strategy is based on three principles: the EMG energy of one channel is typically larger than the others during a specific muscle contraction; the spectral characteristics of the EMG signals produced by the frontalis and temporalis muscles differ between movements; and the EMG signals from adjacent channels typically have correlated energy profiles. The algorithm is evaluated on 20 pre-recorded EMG signal sets using MATLAB simulations. The results show that this method is more robust than previous approaches.
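The first and third classification principles (per-channel energy dominance, with correlated left/right temporalis energies indicating a simultaneous clench) can be sketched as below. The channel names, the 0.8 energy-ratio threshold and the mapping to cursor commands are illustrative assumptions, not the paper's exact decision rules.

```python
def channel_energy(samples):
    """Energy of one EMG channel over an analysis window."""
    return sum(s * s for s in samples)

def classify_window(channels):
    """channels: dict channel name -> list of samples for one window."""
    e = {name: channel_energy(sig) for name, sig in channels.items()}
    lt, rt = e["left_temporalis"], e["right_temporalis"]
    # simultaneous left & right clench: both temporalis energies comparable
    if min(lt, rt) > 0 and min(lt, rt) / max(lt, rt) > 0.8 and max(lt, rt) > e["frontalis"]:
        return "left_click"
    dominant = max(e, key=e.get)
    return {"left_temporalis": "left",
            "right_temporalis": "right",
            "frontalis": "up_or_down"}[dominant]
```

A full system would disambiguate "up" from "down" using the spectral differences named in the second principle.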

Peng Ren, Armando Barreto, Malek Adjouadi

Affective Assessment of a Computer User through the Processing of the Pupil Diameter Signal

This study proposes to achieve the affective assessment of a computer user through the processing of the pupil diameter (PD) signal. An adaptive interference canceller (AIC) system using the H∞ time-varying (HITV) adaptive algorithm was developed to minimize the impact of the PLR (pupil size changes caused by light intensity variations) on the measured pupil diameter signal. The modified pupil diameter (MPD) signal, obtained from the AIC, was expected to reflect primarily the pupillary affective responses (PAR) of the subject. Additional manipulations of the AIC output resulted in a processed MPD (PMPD) signal, from which a classification feature, “PMPDmean”, was extracted. This feature was used to train and test a support vector machine (SVM) for the identification of “stress” states in the subject, achieving an accuracy of 77.78%. The advantages of affective recognition through the PD signal were verified by comparatively investigating the classification of “stress” and “relaxation” states through features derived from the simultaneously recorded galvanic skin response (GSR) and blood volume pulse (BVP) signals, with and without the PD feature. Encouraging results in affective assessment based on pupil diameter monitoring were obtained in spite of intermittent illumination increases purposely introduced during the experiments. These results confirm the possibility of using PD monitoring to evaluate the evolving affective states of a computer user.
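The structure of an adaptive interference canceller (primary input carrying signal plus interference, reference input correlated with the interference only) can be sketched with a plain LMS weight update. This is a simplified stand-in: the paper uses an H∞ time-varying algorithm, and the filter order and step size below are illustrative assumptions.

```python
def adaptive_cancel(primary, reference, mu=0.01, order=4):
    """Adaptive interference canceller sketch: estimate the interference in
    the primary input (e.g. measured pupil diameter) from the reference
    input (e.g. illumination) and subtract it.  Plain LMS updates."""
    w = [0.0] * order
    cleaned = []
    for n in range(len(primary)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(order)]
        estimate = sum(wk * xk for wk, xk in zip(w, x))  # interference estimate
        e = primary[n] - estimate                        # canceller output
        w = [wk + 2 * mu * e * xk for wk, xk in zip(w, x)]
        cleaned.append(e)
    return cleaned
```

When the primary input is pure interference, the output decays toward zero as the weights converge; affect-driven pupil changes uncorrelated with the reference would pass through.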

Ying Gao, Armando Barreto, Malek Adjouadi

MAC, A System for Automatically IPR Identification, Collection and Distribution

Controlling Intellectual Property Rights (IPR) in the digital world is a very hard challenge. The ease of creating multiple bit-by-bit identical copies of original IPR works creates opportunities for digital piracy. One of the industries most affected by this is the music industry, which has suffered huge losses over the last few years as a result. Moreover, this also affects the way music rights collecting and distributing societies operate to ensure correct music IPR identification, collection and distribution. In this article a system for automating IPR identification, collection and distribution is presented and described. The system makes use of an advanced automatic audio identification system based on audio fingerprinting technology. The paper presents the details of the system and a use-case scenario where it is being used.

Carlos Serrão

Testing Distributed ABS System with Fault Injection

The paper deals with the problem of adapting the software-implemented fault injection (SWIFI) technique to evaluate the dependability of reactive microcontroller systems. We present an original methodology for disturbing controller operation and analyzing fault effects, taking into account the reactions of the controlled object and the impact of the system environment. Faults can be injected randomly (in space and time) or targeted at the most sensitive elements of the controller to check it under high stress. This approach allows identifying rarely encountered problems that are usually missed by classical approaches. The developed methodology has been used successfully to verify the dependability of an ABS system. Experimental results are discussed in the paper.

Dawid Trawczyński, Janusz Sosnowski, Piotr Gawkowski

Learning Based Approach for Optimal Clustering of Distributed Program's Call Flow Graph

Optimal clustering of a call flow graph to reach maximum concurrency in the execution of distributable components is an NP-complete problem. Learning automata (LAs) are search tools used for solving many NP-complete problems. In this paper a learning-based algorithm is proposed for optimal clustering of the call flow graph and appropriate distribution of programs at the network level. The algorithm uses the learning feature of LAs to search the state space. It is shown that using an LA in the search process remarkably increases the speed of reaching a solution, and also prevents the algorithm from being trapped in local minima. Experimental results show the superiority of the proposed algorithm over others.
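A minimal sketch of the machinery such an algorithm builds on: a linear reward-inaction (L_RI) learning automaton over a small action set. The class name, learning rate and reward scheme are illustrative assumptions, not the paper's clustering algorithm.

```python
import random

class LearningAutomaton:
    """Linear reward-inaction (L_RI) learning automaton sketch."""

    def __init__(self, n_actions, a=0.1, seed=0):
        self.p = [1.0 / n_actions] * n_actions  # action probabilities
        self.a = a                              # reward (learning) rate
        self.rng = random.Random(seed)

    def choose(self):
        """Sample an action index according to the probability vector."""
        r, acc = self.rng.random(), 0.0
        for i, pi in enumerate(self.p):
            acc += pi
            if r < acc:
                return i
        return len(self.p) - 1

    def reward(self, i):
        """On reward, shift probability mass toward action i
        (on penalty, L_RI leaves the probabilities unchanged)."""
        self.p = [pj + self.a * (1.0 - pj) if j == i else (1.0 - self.a) * pj
                  for j, pj in enumerate(self.p)]
```

In a clustering setting, each action could correspond to assigning a call-graph node to a cluster, with rewards driven by the measured concurrency of the resulting partition.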

Yousef Abofathi, Bager Zarei, Saeed Parsa

Fuzzy Adaptive Swarm Optimization Algorithm for Discrete Environments

Heuristic methods have been widely developed for the solution of complicated optimization problems. Recently, hybrid methods based on the combination of different approaches have shown more potential in this regard. Here, fuzzy simulation and the Particle Swarm Optimization (PSO) algorithm are integrated to design a hybrid intelligent algorithm that solves NP-hard problems, such as the travelling salesman problem, efficiently and quickly. The results obtained with the proposed method show its potential for achieving both accuracy and speed on small and medium-size problems, compared to many advanced methods.
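The PSO core of such a hybrid can be sketched with a random-keys encoding, where continuous particle positions are decoded into tours by their sort order. The encoding and parameter values are assumptions for illustration (the abstract does not specify them), and the fuzzy-simulation component is omitted.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def decode(position):
    """Random-keys decoding: the sort order of the continuous position
    vector gives the visiting order of the cities."""
    return sorted(range(len(position)), key=lambda i: position[i])

def pso_tsp(dist, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    n = len(dist)
    pos = [[rng.random() for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_len = [tour_length(decode(p), dist) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_len[i])
    gbest, gbest_len = pbest[g][:], pbest_len[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            length = tour_length(decode(pos[i]), dist)
            if length < pbest_len[i]:
                pbest[i], pbest_len[i] = pos[i][:], length
                if length < gbest_len:
                    gbest, gbest_len = pos[i][:], length
    return decode(gbest), gbest_len
```

On a 4-city instance (unit square) the swarm reliably recovers the perimeter tour of length 4.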

M. Hadi Zahedi, M. Mehdi S.Haghighi

Project Management Software for Distributed Industrial Companies

This paper gives an overview of the development of a new software solution for project management, intended mainly for use in industrial environments. The main concern of the proposed solution is application in everyday engineering practice in various, mainly distributed, industrial companies. With this in mind, special care has been devoted to developing appropriate tools for tracking, storing and analyzing information about a project and delivering it in time to the right team members or other responsible persons. The proposed solution is Internet-based and uses the LAMP/WAMP (Linux or Windows - Apache - MySQL - PHP) platform because of its stability, versatility, open source technology and simple maintenance. The modular structure of the software makes it easy to customize to client-specific needs within a very short implementation period. Its main advantages are simple usage, quick implementation, easy system maintenance, short training, and the fact that operators need only basic computer skills.

M. Dobrojević, B. Medjo, M. Rakin, A. Sedmak

How to Construct an Automated Warehouse Based on Colored Timed Petri Nets

The automated warehouse considered here consists of a number of rack locations with three cranes, a narrow-aisle shuttle, and several buffer stations with rollers. Based on an analysis of the behaviors of the active resources in the system, a modular, computerized model is presented via a colored timed Petri net approach, in which places are multicolored to simplify the model and characterize the control flow of the resources, and token colors are defined as the routes of storage/retrieval operations. In addition, an approach for realizing the model in Visual C++ is briefly given. These facts allow us to build an emulation system to simulate a discrete control application for online monitoring, dynamic dispatching control and off-line revision of scheduler policies.
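A colored timed Petri net is beyond a short sketch, but the underlying place/transition mechanics (markings, enabling, firing) that such a model builds on fit in a few lines. The warehouse places and transitions below are illustrative assumptions, and colors and timing are omitted.

```python
class PetriNet:
    """Minimal place/transition net: marking is a dict place -> token count."""

    def __init__(self, marking):
        self.marking = dict(marking)
        self.transitions = {}

    def add_transition(self, name, consume, produce):
        """consume/produce: dict place -> number of tokens."""
        self.transitions[name] = (consume, produce)

    def enabled(self, name):
        consume, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= k for p, k in consume.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"{name} not enabled")
        consume, produce = self.transitions[name]
        for p, k in consume.items():
            self.marking[p] -= k
        for p, k in produce.items():
            self.marking[p] = self.marking.get(p, 0) + k
```

A storage operation could then be modeled as a pallet token moving from a buffer place through a busy-crane place into a rack place.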

Fei Cheng, Shanjun He

Telecare and Social Link Solution for Ambient Assisted Living Using a Robot Companion with Visiophony

An increasing number of people are in need of help at home (elderly, isolated and/or disabled persons, and people with mild cognitive impairment). Several solutions can be considered to maintain a social link while providing telecare to these people, and many proposals suggest the use of a robot acting as a companion. In this paper we look at an environment-constrained solution, its drawbacks (such as latency) and its advantages (flexibility, integration…). A key design choice is to control the robot using a unified Voice over Internet Protocol (VoIP) solution, while addressing bandwidth limitations, providing good communication quality and reducing transmission latency.

Thibaut Varène, Paul Hillereau, Thierry Simonnet

Contextual Semantic: A Context-aware Approach for Semantic Web Based Data Extraction from Scientific Articles

The paper explores whether the semantic context alone is good enough to cope with the ever-increasing number of resources available in different repositories, including the web. Here the problem of identifying the authors of scientific papers is used as an example. A set of problems still arises if we apply the semantic context exclusively. Fortunately, contextual semantics can be used to derive the additional information required to separate ambiguous cases. Semantic tags, well-structured documents and available databases of articles provide a possibility to be more context-aware. As context we use co-author names, references and headers to extract keywords and identify the subject. The real complexity of the considered problem comes from the dynamic behaviour of authors, as they can change their research topic in the next paper. As a final judge, the paper proposes applying word-usage pattern analysis. Finally, the contextual intelligence engine is described.

Deniss Kumlander

Motivating Company Personnel by Applying the Semi-self-organized Teams Principle

Nowadays, the only way to improve the stability of the software development process in a rapidly evolving global world is to be innovative and involve professionals in projects, motivating them using both material and non-material factors. In this paper self-organized teams are discussed. Unfortunately, not all kinds of organizations can benefit directly from agile methods, including the use of self-organized teams. The paper proposes semi-self-organized teams, presenting them as a new and promising motivating factor that retains many of the positive sides of being self-organized and partly agile while being subject to less strict conditions for following this innovative process. Semi-self-organized teams are reliable, at least in the short term, and are simple to organize and support.

Deniss Kumlander

Route Advising in a Dynamic Environment – A High-Tech Approach

Finding the optimal path between two locations in the city of Colombo is not a straightforward task because of the complex road system, heavy traffic jams, etc. This paper presents a system to find the optimal driving directions between two locations within the Colombo city area, considering road rules (one way, two way, or fully closed in both directions). The system contains three main modules (core, web and mobile) and two user interfaces, one for normal users and the other for administrative users. Both interfaces can be accessed using a web browser or a GPRS-enabled mobile phone. The system is developed based on Geographic Information System (GIS) technology, considered the best option for integrating hardware, software and data for capturing, managing, analyzing and displaying all forms of geographically referenced information. The core of the system is MapServer (MS4W), used along with other supporting technologies such as PostGIS, PostgreSQL, pgRouting, ASP.NET and C#.
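Under the hood, routing engines like pgRouting compute such directions with shortest-path search. A minimal Dijkstra sketch over a directed graph is shown below: a one-way street is an edge present in only one direction, and a closed road is simply left out. The graph and costs are made up for illustration.

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra over a directed road graph.
    graph: node -> list of (neighbour, cost) pairs."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == goal:
            break
        for v, cost in graph.get(u, []):
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    if goal not in done:
        return None, float("inf")           # no legal route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]
```

Note how direction matters: a road drivable only from C to A gives no route from A to C through that edge.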

M F M Firdhous, D L Basnayake, K H L Kodithuwakku, N K Hatthalla, N W Charlin, P M R I K Bandara

Building Security System Based on Grid Computing To Convert and Store Media Files

In recent years, Grid Computing (GC) has made big steps in development, contributing to the solution of practical problems that need large storage capacity and computing performance. This paper introduces an approach to integrating the security mechanisms of the Grid Security Infrastructure (GSI) of the open source Globus Toolkit 4.0.6 (GT4) into a GC-based application for storing, format-converting and playing online media files.

Hieu Nguyen Trung, Duy Nguyen, Tan Dang Cao

A Tool Supporting C code Parallelization

In this paper ParaGraph, a tool supporting C code parallelization, is presented. ParaGraph is a plug-in for the Eclipse IDE and enables manual and automatic parallelization. A parallelizing compiler automatically inserts OpenMP directives into the output source code; OpenMP directives can also be inserted manually by a programmer. ParaGraph shows the C code after parallelization. The visualization of parallelized code can be used to understand the rules and constraints of parallelization and to tune the parallelized code.

Ilona Bluemke, Joanna Fugas

Extending OpenMP for Agent Based DSM on GRID

This paper discusses some of the salient issues involved in implementing the illusion of a shared-memory programming model across a group of distributed-memory processors, from a cluster through to an entire GRID. This illusion can be provided by a distributed shared memory (DSM) system implemented using autonomous agents.

Mechanisms that have the potential to increase performance by omitting consistency-latency intra-site messages and data transfers are highlighted.

In this paper we describe the overall design/architecture of a prototype system, AOMPG, which integrates the DSM and agent paradigms and may be the target of an OpenMP compiler. Our goal is to apply this to GRID applications.

Mahdi S. Haghighi, M. Hadi Zahedi, A Mustafa Ghazizadeh, Farnad Ahangary

Mashup – Based End User Interface for Fleet Monitoring

Fleet monitoring of commercial vehicles has received major attention recently. A good monitoring solution increases fleet efficiency by reducing transportation durations, optimizing the planned routes, and providing determinism at the intermediate and final destinations. This paper presents a fleet monitoring system for commercial vehicles that uses the Internet as its data infrastructure. The mashup concept was used to create the user interface.

M. Popa, A.S. Popa, T. Slavici, D. Darvasi

The Performance of Geothermal Field Modeling in Distributed Component Environment

An implementation and performance analysis of heat transfer modeling using the most popular component environments is the scope of this article. The computational problem is described, and the proposed decomposition for parallelization is shown. The implementation is prepared for MS .NET, Sun Java and Mono. Tests are done for various combinations of operating systems and hardware platforms, and the performance of the calculations is experimentally measured and analyzed. The most interesting issue is communication tuning in distributed component software: the proposed method can speed up computation time, but the final time also depends on the performance of network connections in the component environments. These results are presented and discussed.

A. Piórkowski, A. Pięta, A. Kowal, T. Danek

An Extension of Least Squares Methods for Smoothing Oscillation of Motion Predicting Function

A novel hybrid technique for detecting and predicting the motion of objects in a video stream is presented in this paper. The novelty consists in an extension of the Savitzky-Golay smoothing filter that applies a difference approach for tracing an object's mass center, with or without acceleration, in noisy images. The proposed adaptation of least squares methods for smoothing the fast-varying values of the motion-predicting function makes it possible to avoid the oscillation of that function for the same degree of polynomial. Better results are obtained when the time of motion interpolation is divided into subintervals and the function is represented by a different polynomial over each subinterval. Therefore, in the proposed hybrid technique the spatial clusters containing objects in motion are detected by an image difference operator, and the behavior of those clusters is analyzed using their mass centers in consecutive frames. The predicted location of an object is then computed using a modified weighted least squares algorithm. This provides tracing of possible routes that is invariant to the oscillation of the predicting polynomials and to noise present in the images. For the irregular motion that frequently occurs in dynamic scenes, a compensation and stabilization technique is also proposed. The efficiency of the proposed technique is analyzed and evaluated on the basis of several simulated kinematics experiments.
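The core operation, fitting a low-degree polynomial to recent mass-center positions by least squares and extrapolating it one step ahead, can be sketched as follows. This uses plain normal equations with Gaussian elimination; the weighting and per-subinterval logic of the paper's modified algorithm are omitted.

```python
def polyfit(ts, xs, degree):
    """Least-squares polynomial fit via the normal equations,
    solved with Gaussian elimination; returns coef[i] for t**i."""
    n = degree + 1
    A = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    b = [sum(x * t ** i for t, x in zip(ts, xs)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n                          # back substitution
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

def predict_next(ts, xs, degree, t_next):
    """Extrapolate the fitted polynomial one step ahead."""
    return sum(c * t_next ** i for i, c in enumerate(polyfit(ts, xs, degree)))
```

The subinterval idea from the abstract amounts to calling `polyfit` on each time segment separately, so no single high-degree polynomial has to span the whole trajectory.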

O. Starostenko, J.T. Tello-Martínez, V. Alarcon-Aquino, J. Rodriguez-Asomoza, R. Rosas-Romero

Security of Virtualized Applications: Microsoft App-V and VMware ThinApp

Virtualization has gained great popularity in recent years, with application virtualization being the latest trend. Application virtualization offers several benefits for application management, especially in larger and dynamic deployment scenarios. In this paper, we first introduce common application virtualization principles before evaluating the security of the Microsoft App-V and VMware ThinApp application virtualization environments with respect to external security threats. We compare different user account privileges and levels of sandboxing for virtualized applications. Furthermore, we identify the major security risks, as well as the trade-offs with ease of use, that result from the virtualization of applications.

Michael Hoppe, Patrick Seeling

Noise Performance of a Finite Uniform Cascade of Two-Port Networks

The noise performance of a lumped passive uniform cascade of identical two-port networks is investigated. The n-block network is characterized in closed form based on the eigenvalues of the element two-port transmission matrix. The thermal noise performance is derived and demonstrated in several examples.
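The closed-form characterization comes from eigen-decomposition of the element ABCD (transmission) matrix; numerically, the cascade matrix is simply the n-th matrix power, which can be checked directly. The series-resistor element below is an illustrative example, not one taken from the paper.

```python
def matmul2(X, Y):
    """2x2 matrix product."""
    return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
             X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
            [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
             X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

def cascade(abcd, n):
    """ABCD (transmission) matrix of n identical two-ports in cascade:
    the n-th matrix power of the element matrix."""
    result = [[1.0, 0.0], [0.0, 1.0]]       # identity = empty cascade
    for _ in range(n):
        result = matmul2(result, abcd)
    return result
```

For a series resistance R, the element matrix is [[1, R], [0, 1]], and n blocks cascade to [[1, nR], [0, 1]], matching the intuition that series resistances add.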

Shmuel Y. Miller

Evaluating Software Agent Quality: Measuring Social Ability and Autonomy

Research on agent-oriented software has developed around different practical applications. The same cannot, however, be said about the development of measures to evaluate agent quality through its key characteristics. In some cases there have been proposals to use and adapt measures from other paradigms, but no agent-specific quality model has been investigated. As part of research into agent quality, this paper presents the evaluation of two key characteristics: social ability and autonomy. Additionally, we present some results for a case study on a multi-agent system.

Fernando Alonso, José L. Fuertes, Loïc Martínez, Héctor Soza

An Approach to Measuring Software Quality Perception

Perception measuring and perception management is an emerging approach in the area of product management. Cognitive, psychological, behavioral and neurological theories, tools and methods are being employed for a better understanding of the mechanisms of a consumer's attitude and decision processes. Software is also defined as a product; however, this kind of product is significantly different from all others. Software products are intangible, and it is difficult to trace their characteristics, which depend strongly on a dynamic context of use.

Understanding customers' cognitive processes gives an advantage to producers aiming to develop products that “win the market”. Is it possible to adopt these theories, methods and tools for the purpose of software perception, especially software quality perception? The theoretical answer to this question seems easy; in practice, however, the list of differences between software products and software projects hinders the analysis of certain factors and their influence on overall perception.

In this article the authors propose a method and describe a tool designed for research on the perception of software quality. The tool is designed to overcome the problem stated above, adopting a modern behavioral economics approach.

Radoslaw Hofman

Automatically Modeling Linguistic Categories in Spanish

This paper presents an approach to processing Spanish linguistic categories automatically. The approach is based on a module of a prototype named WIH (Word Intelligent Handler), a project to develop a conversational bot. It basically learns category usage sequences in sentences and extracts a weighting metric to discriminate the most common structures in real dialogs. Such a metric is important for defining the preferred organization to be used by the robot to build an answer.

M. D. López De Luise, D. Hisgen, M. Soffer

Efficient Content-based Image Retrieval using Support Vector Machines for Feature Aggregation

In this paper, a content-based image retrieval system for the aggregation and combination of different image features is presented. Feature aggregation is an important technique in general content-based image retrieval systems that employ multiple visual features to characterize image content. We introduced and evaluated linear combination and support vector machines (SVMs) to fuse the different image features. The implemented system has several advantages over existing content-based image retrieval systems. The several features implemented in our system allow the user to adapt the system to the query image, and the SVM-based approach for ranking retrieval results helps in processing specific queries for which users have no knowledge of any suitable descriptors.
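The linear-combination side of feature aggregation can be sketched as a weighted sum of per-feature distances; the feature names and weights below are invented for illustration. The SVM-based ranking the paper evaluates would replace this hand-set scoring function with a trained classifier's decision value.

```python
def euclidean(u, v):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def aggregate_rank(query, database, weights):
    """Rank database images by a weighted linear combination of
    per-feature distances to the query.
    query / database values: dict feature_name -> feature vector."""
    scored = []
    for image_id, feats in database.items():
        score = sum(w * euclidean(query[f], feats[f]) for f, w in weights.items())
        scored.append((score, image_id))
    return [image_id for _, image_id in sorted(scored)]
```

Smaller aggregated distance means a better match, so the first entry of the returned list is the top retrieval result.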

Ivica Dimitrovski, Suzana Loskovska, Ivan Chorbev

The Holistic, Interactive and Persuasive Model to Facilitate Self-care of Patients with Diabetes

The patient, in his multiple facets as citizen and user of health services, needs to acquire favorable health conditions during and beyond adulthood in order to improve his quality of life, and it is the responsibility of health organizations to initiate the process of supporting the patient through mature life. The provision of health services and the doctor-patient relationship are undergoing important changes all over the world, forced to a large extent by the unsustainability of the system itself. Nevertheless, decision making requires prior information; moreover, the very need to be informed requires a “culture” of health that generates proactivity and the capacity to search for instruments that facilitate awareness of the condition and its self-care. Therefore it is necessary to put into effect an ICT model (hiPAPD) whose objective is to create interaction, motivation and persuasion in the surroundings of the diabetic patient, facilitating his self-care. As a result, the patient himself individually manages his services through devices and Ambient Intelligence (AmI) systems.

Miguel Vargas-Lombard, Armando Jipsion, Rafael Vejarano, Ismael Camargo, Humberto Álvarez, Elena Villalba Mora, Ernestina Menasalva Ruíz

Jawi Generator Software Using ARM Board Under Linux

Jawi knowledge is becoming important, not just for adults but also for growing children to learn in the initial stage of their lives. This project is basically to study and develop embedded Jawi Generator Software that will generate and create Jawi script easily. The user can choose and enter Jawi scripts and learn each of them. The scripts run from alif to yaa, approximately 36 scripts, each with a colorful button. The system should also be interactive, so that it attracts users, especially kids. The Jawi Generator Software was developed using the Java language on a Linux operating system (Fedora) and runs on a UP-NETARM2410-S Linux board. The performance of the Jawi Generator System is then investigated for its accuracy in displaying the words, as well as for the board's performance.

O. N. Shalasiah, S. M. F Rohani, H. Zulkifli

Efficient Comparison between Windows and Linux Platform Applicable in a Virtual Architectural Walkthrough Application

This paper describes Linux, an open source platform used to develop and run a virtual architectural walkthrough application. It proposes some qualitative reflections and observations on the nature of Linux in the context of Virtual Reality (VR) and on the most popular and important claims associated with the open source approach. The ultimate goal of this paper is to measure and evaluate the performance of Linux as used to build the virtual architectural walkthrough, and to develop a proof of concept based on the results obtained through this project. Besides that, this study reveals the benefits of using Linux in the field of virtual reality and presents a basic comparison and evaluation between Windows and Linux based operating systems. The Windows platform is used as a baseline to evaluate the performance of Linux, which is measured using three main criteria: frame rate, image quality and mouse motion.

P. Thubaasini, R. Rusnida, S. M. Rohani

Simulation-Based Stress Analysis for a 3D Modeled Humerus-Prosthesis Assembly

The development of mechanical models of the humerus-prosthesis assembly represents a solution for analyzing the behavior of prosthesis devices under different conditions, some of which are impossible to reproduce in vivo due to the irreversible phenomena that can occur. This paper presents a versatile model of the humerus-prosthesis assembly. The model is used for analyzing stress and displacement distributions under different configurations that correspond to possible later in vivo implementations. A 3D scanner was used to obtain the virtual model of the humerus bone. The endoprosthesis was designed using 3D modeling software, and the humerus-prosthesis assembly was analyzed using Finite Element Analysis software.

S. Herle, C. Marcu, H. Benea, L. Miclea, R. Robotin

Chaos-Based Bit Planes Image Encryption

The bit planes of a discrete signal can be used not only for encoding or compression but also for encryption purposes. This paper investigates the composition of the bit planes of an image and their use in the encryption process. The proposed encryption scheme is based on the chaotic maps of Peter de Jong and is designed primarily for image signals. The positions of all components of the bit planes are permuted according to the chaotic behaviour of Peter de Jong's system.
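The Peter de Jong map iterates x' = sin(a·y) − cos(b·x), y' = sin(c·x) − cos(d·y), so every orbit stays within [−2, 2] in each coordinate. The sketch below iterates the map and derives a position permutation from the orbit; ranking x-coordinates is an illustrative key schedule, not necessarily the paper's exact construction, and the parameter values are common example choices.

```python
import math

def de_jong_orbit(x0, y0, a, b, c, d, n):
    """Iterate the Peter de Jong map:
         x' = sin(a*y) - cos(b*x)
         y' = sin(c*x) - cos(d*y)
    and return the n visited points."""
    x, y = x0, y0
    points = []
    for _ in range(n):
        x, y = math.sin(a * y) - math.cos(b * x), math.sin(c * x) - math.cos(d * y)
        points.append((x, y))
    return points

def permutation_from_orbit(size, **params):
    """Turn an orbit into a permutation of bit-plane component positions
    by ranking the orbit's x-coordinates (illustrative key schedule)."""
    points = de_jong_orbit(n=size, **params)
    return sorted(range(size), key=lambda i: points[i][0])
```

Because the map is deterministic, the same key parameters (x0, y0, a, b, c, d) regenerate the same permutation for decryption; applying the inverse permutation restores the original bit-plane layout.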

Jiri Giesl, Tomas Podoba, Karel Vlcek

FLEX: A Modular Software Architecture for Flight License Exam

This paper is about the design and implementation of a web-based examination system called FLEX (Flight License Exam) Software, built on a flexible and modular software architecture. The implemented system provides basic facilities such as adding questions to the system, building exams from these questions, and allowing students to take the exams. There are three types of users with different authorizations: the system administrator, operators and students. The system administrator operates and maintains the system and audits system integrity, but cannot change exam results or take an exam. The operator role is held by instructors, who have privileges such as preparing exams, entering questions and changing existing questions. Students can log on to the system and access exams via a certain URL. Another characteristic of the system is that, for security reasons, operators and the system administrator are not able to delete questions. Exam questions are stored in the database under their topics and lectures, so operators and the system administrator can easily choose questions. Taken together, FLEX allows many students to take exams at the same time under safe, reliable and user-friendly conditions, and is also a reliable examination system for authorized aviation administration companies. The system was developed on the LAMP web development platform (Linux, the Apache web server, MySQL and the object-oriented scripting language PHP), and the page structures were developed with a Content Management System (CMS).

Taner Arsan, Hamit Emre Saka, Ceyhun Sahin

Enabling and Integrating Distributed Web Resources for Efficient and Effective Discovery of Information on the Web

The National Portal of India [1] integrates information from distributed web resources such as the websites and portals of different ministries, departments, state governments and district administrations. These websites were developed at different points in time, using different standards and technologies. Integrating information from such distributed, disparate web resources is therefore a challenging task, and it also affects information discovery by a citizen through a unified interface such as the National Portal. Existing text-based search engines would not yield the desired results either [7].

A couple of approaches were considered to address this challenge, and it was concluded that a metadata-replication-based approach would be the most feasible and sustainable. Accordingly, a solution was designed for the replication of metadata from distributed repositories using a service-oriented architecture. Uniform metadata specifications were devised based on the Dublin Core standard [9]. To begin with, the solution is being implemented across the National Portal and 35 State Portals spread across the length and breadth of India. Metadata from distributed repositories is replicated to a central repository regardless of the platform and technology used by the distributed repositories. A simple search interface has also been developed for efficient and effective information discovery by citizens.

Neeta Verma, Pechimuthu Thangamuthu, Alka Mishra

Translation from UML to Markov Model: A Performance Modeling Framework

Performance engineering focuses on the quantitative investigation of the behavior of a system during the early phases of the system development life cycle. Bearing this in mind, we delineate a performance modeling framework for communication system applications that translates high-level UML notation into a Continuous Time Markov Chain (CTMC) model and solves the model for the relevant performance metrics. The framework uses UML collaborations, activity diagrams and deployment diagrams to generate the performance model of a communication system. The system dynamics are captured by UML collaboration and activity diagrams as reusable specification building blocks, while the deployment diagram identifies the components of the system. The collaboration and activity diagrams show how reusable building blocks, in the form of collaborations, compose the service components through input and output pins by highlighting the behavior of the components; a mapping between the collaborations and the system components identified by the deployment diagram is then delineated. Moreover, the UML models are annotated with performance-related quality of service (QoS) information, which is necessary for solving the performance model for the relevant performance metrics through our proposed framework. The applicability of the proposed framework is demonstrated in the context of modeling a communication system.
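The final solution step, obtaining steady-state probabilities from the CTMC's generator matrix Q, can be sketched numerically on a toy two-state chain. This crude Euler integration of dπ/dt = πQ is an illustrative stand-in; a real framework would solve πQ = 0 with a linear solver.

```python
def steady_state(Q, dt=0.001, steps=20000):
    """Approximate the stationary distribution of a CTMC with generator Q
    (rows sum to zero) by Euler-integrating d(pi)/dt = pi * Q
    from a uniform starting distribution."""
    n = len(Q)
    pi = [1.0 / n] * n
    for _ in range(steps):
        pi = [pi[j] + dt * sum(pi[i] * Q[i][j] for i in range(n))
              for j in range(n)]
    total = sum(pi)                  # guard against floating-point drift
    return [p / total for p in pi]
```

For Q = [[-1, 1], [2, -2]] (state 0 leaves at rate 1, state 1 returns at rate 2), balance gives π = (2/3, 1/3); performance metrics such as utilization or throughput are then weighted sums over π.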

Razib Hayat Khan, Poul E. Heegaard

A Comparative Study of Protein Sequence Clustering Algorithms

In this paper, we survey four clustering techniques and discuss their advantages and drawbacks. A review of eight different protein sequence clustering algorithms is also presented, along with a comparison of the algorithms on the basis of several factors.

A. Sharaf Eldin, S. AbdelGaber, T. Soliman, S. Kassim, A. Abdo

OpenGL in Multi-User Web-Based Applications

In this article, the construction and potential of multi-user web-based OpenGL applications are presented. The most common technologies, such as ASP.NET, Java and Mono, were used with specific OpenGL libraries to visualize three-dimensional medical data. The most important conclusion of this work is that server-side applications can easily take advantage of fast GPUs and deliver the results of advanced computations, such as visualization, efficiently.

K. Szostek, A. Piórkowski

Testing Task Schedulers on Linux System

Testing task schedulers on the Linux operating system proves to be a challenging task. There are two main problems. The first is to identify which properties of the scheduler to test. The second is how to perform the tests, e.g., which API to use that is sufficiently precise and at the same time supported on most platforms. This paper discusses the problems in realizing a framework for testing task schedulers and presents one potential solution. The scheduler behavior observed is that used for “normal” task scheduling (SCHED_OTHER), as opposed to that used for real-time tasks (SCHED_FIFO, SCHED_RR).

Leonardo Jelenković, Stjepan Groš, Domagoj Jakobović

Automatic Computer Overhead Line Design

The approach to the design of overhead electric lines has changed very significantly in recent years. In particular, designers must keep in mind new demands on reliability in this branch. These new requirements are the basis of new European and national standards. To simplify the design layout, automate the verification of all rules and limitations, and minimize mistakes, a computer application was developed to solve these tasks. This article describes the new approach to this task and the features and possibilities of this software tool.

Lucie Noháčová, Karel Noháč

Building Test Cases through Model Driven Engineering

Recently, Model Driven Engineering (MDE) has been proposed to face the complexity in the development, maintenance and evolution of large and distributed software systems. Model Driven Architecture (MDA) is an example of MDE. In this context, model transformations enable a large reuse of software systems through the transformation of a Platform Independent Model into a Platform Specific Model. Although source code can be generated from models, defects can be injected during the modeling or transformation process. In order to deliver software systems without defects that cause errors and failures, the source code must be tested. In this paper, we present an approach that addresses testing across the whole software life cycle, i.e., it starts at the modeling level and finishes with testing the source code of the software system. We provide an example to illustrate our approach.

Helaine Sousa, Denivaldo Lopes, Zair Abdelouahab, Slimane Hammoudi, Daniela Barreiro Claro

The Effects of Educational Multimedia for Scientific Signs in the Holy Quran in Improving the Creative Thinking Skills for Deaf Children

This paper investigates the role of the scientific signs in the Holy Quran in improving the creative thinking skills of deaf children using multimedia. The paper investigates whether the performance of the experimental group’s individuals is statistically significantly better than that of the control group’s individuals on the Torrance Test of Creative Thinking (fluency, flexibility, originality and the total degree) in two cases:

1. Without considering the gender of the population.

2. Considering the gender of the population.

Sumaya Abusaleh, Eman AbdelFattah, Zain Alabadi, Ahmad Sharieh

Parallelization of Shape Function Generation for Hierarchical Tetrahedral Elements

Research has gone into parallelization of the numerical aspects of computationally intense analysis and solutions. Recent advances in computer algebra systems have opened up new opportunities for research: generating closed-form, symbolic solutions more efficiently by parallelizing the symbolic manipulations.

Sara E. McCaslin

Analysis of Moment Invariants on Image Scaling and Rotation

Moment invariants have been widely applied to image pattern recognition in a variety of applications due to their invariance under image translation, scaling and rotation. The invariance properties hold strictly only for continuous functions; in practical applications, however, images are discrete. Consequently, the moment invariants may change under image geometric transformations. To address this research problem, an analysis of the variation of moment invariants under image geometric transformations, i.e., scaling and rotation, is presented in this paper. Guidance is also provided for minimizing the fluctuation of the moment invariants.
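The discretization effect the paper analyzes can be seen directly on Hu's first moment invariant. The following sketch (an illustration, not the paper's experiments; the test image is invented) computes φ1 = η20 + η02 on a binary image and on its upscaled copy; the two values agree only approximately:

```python
# Hu's first moment invariant phi1 = eta20 + eta02 on a binary image,
# and its drift under nearest-neighbour scaling of a discrete image.

def phi1(img):
    """phi1 for a binary image given as a list of rows of 0/1 values."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v; m10 += v * x; m01 += v * y
    xc, yc = m10 / m00, m01 / m00
    mu20 = mu02 = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            mu20 += v * (x - xc) ** 2
            mu02 += v * (y - yc) ** 2
    # Normalized central moments: eta_pq = mu_pq / m00^(1 + (p+q)/2),
    # so for p+q = 2 the divisor is m00 squared.
    return (mu20 + mu02) / m00 ** 2

def upscale(img, k):
    """Nearest-neighbour k-times upscaling."""
    return [[v for v in row for _ in range(k)] for row in img for _ in range(k)]

square = [[1] * 8 for _ in range(8)]
a = phi1(square)             # close to the continuous value 1/6
b = phi1(upscale(square, 2))
# a != b exactly: discretization makes the "invariant" fluctuate slightly.
```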

Dongguang Li

A Novel Binarization Algorithm for Ballistics Firearm Identification

The identification of ballistics specimens from imaging systems is of paramount importance in criminal investigation. Binarization plays a key role in the preprocessing stage of recognizing cartridges in ballistic imaging systems. Unfortunately, it is very difficult to obtain a satisfactory binary image using existing binarization algorithms. In this paper, we utilize global and local thresholds to enhance image binarization. Importantly, we present a novel criterion for effectively detecting edges in the images. Comprehensive experiments have been conducted on sample ballistic images. The empirical results demonstrate that the proposed method provides a better solution than existing binarization algorithms.
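As a hedged sketch of the global-plus-local idea (the paper's actual criterion is not reproduced here; this combines a standard global Otsu threshold with a local mean threshold on an invented test image):

```python
# A pixel is kept as foreground only if it passes BOTH a global Otsu
# threshold and a local mean threshold over its neighbourhood.

def otsu_threshold(pixels):
    """Global threshold maximizing between-class variance (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / (total - w0)
        var = w0 * (total - w0) * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize(img, window=3):
    """Foreground (1) where the pixel is at or below both thresholds."""
    flat = [p for row in img for p in row]
    tg = otsu_threshold(flat)
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - window), min(h, y + window + 1))
                     for i in range(max(0, x - window), min(w, x + window + 1))]
            tl = sum(neigh) / len(neigh)   # local mean threshold
            row.append(1 if img[y][x] <= tg and img[y][x] <= tl else 0)
        out.append(row)
    return out

img = [[50] * 2 + [200] * 6 for _ in range(8)]   # dark strip on bright field
binary = binarize(img)
```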

Dongguang Li

A Schema Classification Scheme for Multilevel Databases

Multilevel secure (MLS) database models provide a data protection mechanism different from traditional data access control. The MLS database has been used in various application domains, including government, hospitals and the military. The MLS database model protects data by grouping them into different classifications and creating different views for users of different clearance levels. Previous models have focused on data-level classification, i.e., tuples and elements. In this study, we introduce a schema-level classification mechanism, i.e., attribute and relation classification. We first define the basic model and then give definitions of the integration properties and database operations. The schema classification scheme reduces semantic inferences and thus prevents users from compromising the database.
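A tiny sketch of attribute-level classification (the schema, levels and data are invented for illustration; the paper's formal model is richer): each user's view contains only the columns whose classification does not exceed their clearance.

```python
# Attribute-level classification: project each tuple onto the attributes
# the user's clearance allows.

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

# A relation schema annotated with a classification per attribute.
PATIENT = {"name": "unclassified", "ward": "confidential",
           "diagnosis": "secret", "agent_id": "top-secret"}

def visible_attributes(schema, clearance):
    """Attributes a user with the given clearance may see."""
    c = LEVELS[clearance]
    return sorted(a for a, lvl in schema.items() if LEVELS[lvl] <= c)

def project(rows, schema, clearance):
    """Build the user's view of the relation."""
    attrs = visible_attributes(schema, clearance)
    return [{a: row[a] for a in attrs} for row in rows]

rows = [{"name": "Ann", "ward": "B2", "diagnosis": "flu", "agent_id": 7}]
view = project(rows, PATIENT, "confidential")
# view == [{"name": "Ann", "ward": "B2"}]
```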

Tzong-An Su, Hong-Ju Lu

Memory Leak Sabotages System Performance

A memory leak refers to the inability of a program to release the memory, or part of it, that it has acquired to perform certain tasks in computer systems [1]. The unintended consequences of such behavior are manifested, at best, in the form of diminishing performance. In worst-case scenarios, memory leaks can cause the computer system to freeze and/or applications to fail completely. Memory leaks are particularly disastrous in memory-limited embedded systems and in client-server environments where applications share memory across multiple-user platforms. It is up to operating system designers to make sure that running applications release memory after program termination. This work assesses and quantifies the impact of memory leaks on system performance.

Nagm Mohamed

Writer Identification Using Inexpensive Signal Processing Techniques

We propose to use novel and classical signal-processing and other techniques for “inexpensive”, fast writer-identification tasks on scanned hand-written documents treated “visually”. “Inexpensive” refers to the efficiency of the identification process in terms of CPU cycles while preserving decent accuracy for preliminary identification. This is a comparative study of multiple algorithm combinations in a pattern-recognition pipeline implemented in Java around the open-source Modular Audio Recognition Framework (MARF), which can do much more than audio processing. We present our preliminary experimental findings on this identification task. We simulate “visual” identification by “looking” at the hand-written document as a whole rather than trying to extract fine-grained features from it prior to classification.

Serguei A. Mokhov, Miao Song, Ching Y. Suen

Software Artifacts Extraction for Program Comprehension

The maintenance of legacy software applications is a complex, expensive, quite challenging, time-consuming and daunting task due to program comprehension difficulties. The first step in software maintenance is to understand the existing software and to extract high-level abstractions from the source code. A number of methods, techniques and tools are applied to understand legacy code. Each technique supports particular legacy applications with automated/semi-automated tool support, keeping in view the requirements of the maintainer. Most of the techniques support modern languages but lack support for older technologies. This paper presents a lightweight methodology for the extraction of different artifacts from legacy COBOL and other applications.

Ghulam Rasool, Ilka Philippow

Model-Driven Engineering Support for Building C# Applications

Realization of the Model-Driven Engineering (MDE) vision of software development requires comprehensive and user-friendly tool support. This paper presents a UML-based approach for building trustworthy C# applications. UML models are refined using profiles for assigning class model elements to C# concepts and to elements of the implementation project. Stereotyped elements are verified on the fly and during model-to-code transformation in order to prevent the creation of incorrect code. The Transform OCL Fragments into C# system (T.O.F.I.C.) was created as a feature of the Eclipse environment. The system extends the IBM Rational Software Architect tool.

Anna Derezińska, Przemysław Ołtarzewski

Early Abnormal Overload Detection and the Solution on Content Delivery Network

Building on H. Yu Chen’s work on early detection of network attacks [1], the authors apply his approach to Early Abnormal Overload Detection (EAOD) on a Content Delivery Network (CDN) and suggest solutions to the problem, limiting abnormal overload on a large network and ensuring that users can always access the desired resources. The early overload detection mechanism is based on three levels: each router, each autonomous system domain (AS domain), and the inter-autonomous domains (inter-AS domains). At each router, when the abnormal load exceeds a threshold, the router notifies a server that maintains the Change Aggregation Tree (CAT) in the autonomous domain. Each node of the tree is one of the overloaded routers. Across inter-AS domains, the CAT servers exchange information with each other to create a global CAT. Based on the height and shape (density) of the global CAT tree, it can be determined on which destination network the overload occurs and which user network caused it. The administrator then decides to move the content (as a service) that causes the overload to a user network. In this way, overload on intermediate and destination networks is prevented. This approach requires cooperation among network providers.

Cam Nguyen Tan, Son Dang Truong, Tan Cao Dang

ECG Feature Extraction using Time Frequency Analysis

The proposed algorithm is a novel method for the feature extraction of ECG beats based on wavelet transforms. A combination of two well-accepted methods, the Pan-Tompkins algorithm and wavelet decomposition, the system is implemented with the help of MATLAB. The focus of this work is to implement an algorithm that can extract the features of ECG beats with high accuracy. The performance of the system is evaluated in a pilot study using the MIT-BIH Arrhythmia database.
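To illustrate the Pan-Tompkins side of the combination, here is a rough sketch (not the paper's MATLAB code; the filtering stages are reduced and the "ECG" is a synthetic spike train) of the classic differentiate, square, and moving-window-integrate stages used to locate QRS-like beats:

```python
# Reduced Pan-Tompkins-style beat detection on a synthetic signal.

def pan_tompkins_peaks(signal, fs, win_ms=150, thresh_frac=0.5):
    # 1) Differentiate to emphasize steep QRS slopes.
    diff = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    # 2) Square to make values positive and amplify large slopes.
    sq = [d * d for d in diff]
    # 3) Moving-window integration merges each QRS into one hump.
    w = max(1, int(fs * win_ms / 1000))
    mwi = [sum(sq[max(0, i - w + 1):i + 1]) / w for i in range(len(sq))]
    # 4) Simple threshold + local-maximum peak picking.
    thr = thresh_frac * max(mwi)
    return [i for i in range(1, len(mwi) - 1)
            if mwi[i] >= thr and mwi[i - 1] < mwi[i] >= mwi[i + 1]]

# Synthetic signal: flat baseline with two sharp spikes ("beats").
fs = 100
sig = [0.0] * 300
for beat in (80, 200):
    sig[beat] = 1.0
peaks = pan_tompkins_peaks(sig, fs)
```

In the full algorithm a band-pass filter precedes these stages, and the wavelet decomposition of the paper then extracts the per-beat features.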

Mahesh A Nair

Optimal Component Selection for Component-Based Systems

In Component-based Software (CBS) development, it is desirable to choose software components that provide all necessary functionalities and at the same time optimize certain nonfunctional attributes of the system (for example, system cost). In this paper we investigate the problem of selecting software components to optimize one or more nonfunctional attributes of a CBS. We approach the problem through the lexicographic multi-objective optimization perspective and develop a scheme that produces Pareto-optimal solutions. Furthermore we show that the Component Selection Problem (CSP) can be solved in polynomial time if the components are connected by serial interfaces and all the objectives are to be minimized, whereas the corresponding maximization problem is NP-hard.
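A hedged sketch of why the serial minimization case is easy (this is not the authors' algorithm, and the component names and costs are invented): with components chained by serial interfaces and a cost to minimize, the problem decomposes into independent per-slot minimizations, which is trivially polynomial.

```python
# Serial-chain component selection: minimize total cost slot by slot.

# candidates[i] = (name, cost) options for the i-th slot in the chain.
candidates = [
    [("parserA", 5), ("parserB", 3)],
    [("storeA", 7), ("storeB", 9)],
    [("uiA", 2), ("uiB", 4)],
]

def select_serial(candidates):
    """O(total number of candidates): pick the cheapest option per slot."""
    chosen, total = [], 0
    for slot in candidates:
        name, cost = min(slot, key=lambda nc: nc[1])
        chosen.append(name)
        total += cost
    return chosen, total

chosen, total = select_serial(candidates)
# (['parserB', 'storeA', 'uiA'], 12)
```

The NP-hardness result in the paper concerns the corresponding maximization objective, where this slot-by-slot decomposition no longer yields an optimal solution in general.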

Muhammad Ali Khan, Sajjad Mahmood

Domain-based Teaching Strategy for Intelligent Tutoring System Based on Generic Rules

In this paper we present a framework for selecting the proper instructional strategy for a given teaching material based on its attributes. The new approach is based on a flexible design by means of generic rules. The framework was adapted in an Intelligent Tutoring System to teach Modern Standard Arabic to adult English-speaking learners, with no prior knowledge of Arabic required.

Dawod Kseibat, Ali Mansour, Osei Adjei, Paul Phillips

Parallelization of Edge Detection Algorithm using MPI on Beowulf Cluster

In this paper, we present the design of a parallel Sobel edge detection algorithm using Foster’s methodology. The parallel algorithm is implemented using the MPI message-passing library and a master/slave scheme. Every processor performs the same sequential algorithm but on a different part of the image. Experimental results conducted on a Beowulf cluster are presented to demonstrate the performance of the parallel algorithm.
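A simplified sketch of the data decomposition (no real MPI here; the master/slave scatter-compute-gather is simulated in a single process, and the test image is invented): each "worker" computes the Sobel gradient for a band of rows plus one halo row on each side.

```python
# Sobel |G| = |Gx| + |Gy| with row-wise band decomposition.

def sobel_band(img, r0, r1):
    """Sobel magnitude for rows r0..r1-1 (needs rows r0-1 and r1 as halo)."""
    out = []
    for y in range(r0, r1):
        row = []
        for x in range(1, len(img[0]) - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            row.append(abs(gx) + abs(gy))
        out.append(row)
    return out

def sobel_parallel(img, workers):
    """Master: split interior rows into bands, run each band, gather."""
    h = len(img)
    per = (h - 2 + workers - 1) // workers   # rows per worker, rounded up
    result = []
    for w in range(workers):
        r0 = 1 + w * per
        r1 = min(1 + (w + 1) * per, h - 1)
        if r0 < r1:
            result.extend(sobel_band(img, r0, r1))
    return result

img = [[0, 0, 0, 255, 255, 255] for _ in range(6)]   # vertical edge
edges = sobel_parallel(img, 3)
```

In the MPI version the master scatters the bands (with halos), the slaves run `sobel_band`, and the master gathers the rows back; the result is identical for any worker count.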

Nazleeni Haron, Ruzaini Amir, Izzatdin A. Aziz, Low Tan Jung, Siti Rohkmah Shukri

Teaching Physical Based Animation via OpenGL Slides

This work expands further on our earlier poster presentation and integration of the OpenGL Slides Framework (OGLSF), used to make presentations with real-time animated graphics where each slide is a scene with tidgets, and on the physically based animation of elastic two- and three-layer softbody objects. The whole project is very interactive and serves a dual purpose: delivering the teaching material in a classroom setting with real running animated examples, as well as releasing the source code to the students to show how the working examples are actually built.

Miao Song, Serguei A. Mokhov, Peter Grogono

Appraising the Corporate Sustainability Reports – Text Mining and Multi-Discriminatory Analysis

The voluntary disclosure of sustainability reports by companies attracts wider stakeholder groups. Diversity in these reports poses a challenge to the users of the information and to regulators. This study appraises corporate sustainability reports against the GRI (Global Reporting Initiative) guidelines (the most widely accepted and used) across all industrial sectors. Text mining is adopted to carry out the initial analysis with a large sample size of 2650 reports. Statistical analyses were performed for further investigation. The results indicate that the disclosures made by companies differ across industrial sectors. Multivariate Discriminant Analysis (MDA) shows that the environmental variable is the most significant contributing factor in explaining the sustainability reports.

J. R. Modapothala, B. Issac, E. Jayamani

A Proposed Treatment for Visual Field Loss caused by Traumatic Brain Injury using Interactive Visuotactile Virtual Environment

In this paper, we propose a novel approach of using interactive virtual environment technology in Vision Restoration Therapy for visual field loss caused by Traumatic Brain Injury. We call the new system the Interactive Visuotactile Virtual Environment, and it holds the promise of expanding the scope of existing rehabilitation techniques. Traditional vision rehabilitation methods are based on passive psychophysical training procedures and can last up to six months before any modest improvements can be seen in patients. A highly immersive and interactive virtual environment will allow the patient to practice everyday activities such as object identification and object manipulation through the use of 3D motion-sensing handheld devices such as a data glove or the Nintendo Wiimote. Employing both perceptual and action components in the training procedures holds the promise of more efficient sensorimotor rehabilitation. Increased stimulation of the visual and sensorimotor areas of the brain should facilitate a comprehensive recovery of visuomotor function by exploiting the plasticity of the central nervous system. Integrated with a motion tracking system and an eye tracking device, the interactive virtual environment allows for the creation and manipulation of a wide variety of stimuli, as well as real-time recording of hand, eye and body movements and their coordination. The goal of the project is to design a cost-effective and efficient vision restoration system.

Attila J. Farkas, Alen Hajnal, Mohd F. Shiratuddin, Gabriella Szatmary

Adaptive Collocation Methods for the Solution of Partial Differential Equations

An integration algorithm is presented that conjugates a Method of Lines (MOL) strategy based on finite-difference space discretizations with a collocation strategy based on increasing-level dyadic grids. It shows potential both as a grid generation procedure and as a Partial Differential Equation (PDE) integration scheme. It copes satisfactorily with an example characterized by a steep travelling wave and an example in which a steep shock forms, which demonstrates its versatility in dealing with different types of steep moving front problems exhibiting features like advection-diffusion, widely common in standard chemical process simulation models.

Paulo Brito, António Portugal

Educational Virtual Reality through a Multiview Autostereoscopic 3D Display

Nowadays, virtual reality technology is a current topic of interest for research and development. There are different devices with which the user may enter, observe and interact with a computer-simulated three-dimensional world, so applications of virtual reality arise in many different areas. A topical task nowadays is applying this modern technology in education. This report presents the results of investigations into Philips multiview autostereoscopic 3D displays carried out with the purpose of creating virtual reality applications for educational objectives. An approach for the creation of 3D video applications for these displays is also presented.

Emiliyan G. Petkov

An Approach for Developing Natural Language Interface to Databases Using Data Synonyms Tree and Syntax State Table

The basic idea addressed in this research is developing a generic, dynamic, and domain-independent natural language interface to databases. The approach consists of two phases: a configuration phase and an operation phase. The former builds a data synonyms tree based on the database being implemented. The idea behind this tree is matching natural-language words with database elements. The tree hierarchy contains the database tables, attributes, attribute descriptions, and all possible synonyms for each description. The latter phase contains a technique that implements a syntax state table to extract the SQL components from the natural-language user request. As a result, the corresponding SQL statement is generated without the interference of human experts.
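A toy sketch of the synonym-matching idea (the schema, synonyms and query are invented; the paper's syntax state table is not reproduced): words of the request are matched against the synonyms tree to assemble a simple SQL statement.

```python
# Configuration phase: table -> attribute -> synonyms for that attribute.
SYNONYMS = {
    "employees": {
        "salary": {"salary", "pay", "wage", "earnings"},
        "name": {"name", "who", "employee"},
    },
}

def to_sql(question):
    """Operation phase (greatly simplified): words -> matched attributes -> SQL."""
    words = {w.strip("?.,").lower() for w in question.split()}
    for table, attrs in SYNONYMS.items():
        selected = [a for a, syns in attrs.items() if words & syns]
        if selected:
            return f"SELECT {', '.join(sorted(selected))} FROM {table}"
    return None

sql = to_sql("What is the pay of each employee?")
# sql == "SELECT name, salary FROM employees"
```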

Safwan shatnawi, Rajeh Khamis

Analysis of Strategic Maps for a Company in the Software Development Sector

The present work develops the analysis of two strategic maps: one based on the principles of Compensatory Fuzzy Logic (CFL) and another that studies Organizational Culture. The research applies a quali-quantitative approach and studies the case of a software development company, using a technical procedure and a documentary base together with interviews and questionnaires. It concludes that strategic maps based on CFL and on Organizational Culture are robust methodologies that identify and prioritize strategic variables, and that there is an interrelationship between them in their consideration of important behavioral aspects. With this it was possible to analyze strategic aspects of companies in a more complex and realistic way.

Marisa de Camargo Silveira, Brandon Link, Silvio Johann, Adolfo Alberto Vanti, Rafael Espin Andrade

The RDF Generator (RDFG) - First Unit in the Semantic Web Framework (SWF)

The Resource Description Framework (RDF) Generator (RDFG) is a platform that generates RDF documents from any web page using predefined models for each internet domain and a special web-page classification system. RDFG is one of the SWF units aimed at standardizing researchers’ efforts in the Semantic Web by classifying internet sites into domains and preparing a special RDF model for each domain. RDFG uses intelligent web methods to prepare the RDF documents, such as an ontology-based semantic matching system to detect the type of web page and a knowledge-base machine learning system to create the RDF documents accurately and according to the standard models. RDFG reduces the complexity of RDF modeling and facilitates the creation, sharing and reuse of web entities.

Ahmed Nada, Badie Sartawi

Information Technology to Help Drive Business Innovation and Growth

This paper outlines how information technology (IT) can help to drive business innovation and growth. Today innovation is a key to properly managing business growth from all angles. IT governance is responsible for managing and aligning IT with the business objectives; managing strategic demand through the projects portfolio or managing operational demand through the services portfolio. IT portfolios offer the possibility of finding new opportunities to make changes and improve through innovation, enabling savings in capital expenditure and the company’s IT operations staff time.

In the last century, IT came to be considered a new source of infinite possibilities and of business success through innovation.

Igor Aguilar Alonso, José Carrillo Verdún, Edmundo Tovar Caro

A Framework for Enterprise Operating Systems Based on Zachman Framework

Nowadays, the Operating System (OS) isn’t only the software that runs your computer. In the typical information-driven organization, the operating system is part of a much larger platform for applications and data that extends across the LAN, WAN and Internet. An OS cannot be an island unto itself; it must work with the rest of the enterprise. Enterprise-wide applications require an Enterprise Operating System (EOS). Enterprise operating systems have brought about an inevitable tendency for organizations to organize their information activities in a comprehensive way. In this respect, Enterprise Architecture (EA) has proven to be the leading option for the development and maintenance of enterprise operating systems. EA provides a thorough outline of the whole information system comprising an enterprise. To establish such an outline, a logical framework needs to be laid over the entire information system. The Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that have prominent roles in enterprise-wide system development. In this paper, we propose a framework based on ZF for enterprise operating systems. The presented framework helps developers design and justify completely integrated business, IT and operating systems, which results in an improved project success rate.

S. Shervin Ostadzadeh, Amir Masoud Rahmani

A Model for Determining the Number of Negative Examples used in Training a MLP

In general, MLP training uses a training set containing only positive examples, which may turn the neural network into an overconfident network for solving the problem. A simple solution to this problem is the introduction of negative examples into the training set. Through this procedure, the network will be prepared for cases it has not been trained for. Unfortunately, up to the present, the number of negative examples that must be used in the training process has not been specified in the literature. Consequently, the present article aims at finding a general mathematical pattern for training an MLP with negative examples. With that end in view, we used a regression analysis technique to analyze the data resulting from training three neural networks on three datasets: a dataset for letter recognition, one for data supplied by a sonar, and a last one for data resulting from medical tests for determining diabetes. The pattern was tested on a new database to confirm its validity.

Cosmin Cernazanu-Glavan, Stefan Holban

GPU Benchmarks Based On Strange Attractors

The main purpose of the presented GPU benchmark is to generate complex polygonal mesh structures based on strange attractors with a fractal structure. The attractors are created as 4D objects using quaternion algebra. The polygonal meshes can have different numbers of polygons because of the iterative application of the system. The high complexity of each mesh provides meaningful results when using multiple methods such as ray tracing, anti-aliasing and anisotropic filtering to evaluate GPU performance. Our main goal is to develop a new, faster algorithm to generate 3D structures and to exploit their complexity for GPU benchmarking.
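A minimal sketch of the quaternion iteration that typically underlies such 4D fractal objects (the constant and sample points are invented; the paper's meshing and rendering stages are not shown): points whose orbits under q → q² + c stay bounded form the set whose boundary is meshed.

```python
# Quaternion Julia-set membership test via the iteration q -> q^2 + c.

def qmul(a, b):
    """Hamilton product of quaternions represented as (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def escapes(q, c, max_iter=50, bailout=4.0):
    """True if the orbit of q under q -> q^2 + c leaves the bailout sphere."""
    for _ in range(max_iter):
        q = tuple(s + t for s, t in zip(qmul(q, q), c))
        if sum(v * v for v in q) > bailout:
            return True
    return False

c = (-0.1, 0.2, 0.1, 0.0)   # invented Julia constant with small norm
inside = not escapes((0.0, 0.0, 0.0, 0.0), c)   # origin stays bounded
outside = escapes((1.5, 0.0, 0.0, 0.0), c)      # far point escapes fast
```

A mesh generator would evaluate `escapes` over a 3D slice of the 4D space and run a surface-extraction step over the inside/outside boundary.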

Tomáš Podoba, Karel Vlček, Jiří Giesl

Effect of Gender and Sound Spatialization on Speech Intelligibility in Multiple Speaker Environment

In multiple-speaker environments such as teleconferences, we observe a loss of intelligibility, particularly if the sound is monaural in nature. In this study, we exploit the "Cocktail Party Effect", whereby a person can isolate one sound above all others using sound localization and gender cues. To improve the clarity of speech, each speaker is assigned a direction using Head Related Transfer Functions (HRTFs), which creates an auditory map of multiple conversations. A mixture of male and female voices is used to improve comprehension.

We see 6% improvement in cognition while using a male voice in a female dominated environment and 16% improvement in the reverse case. An improvement of 41% is observed while using sound localization with varying elevations. Finally, the improvement in cognition jumps to 71% when both elevations and azimuths are varied. Compared to our previous study, where only azimuths were used, we observe that combining both the azimuths and elevations gives us better results (57% vs. 71%).

M. Joshi, M. Iyer, N. Gupta, A. Barreto

Modeling Tourism Sustainable Development

The basic approaches to decision making and to modeling sustainable tourism development are reviewed. The dynamics of sustainable development are considered within Forrester’s system dynamics. The multidimensionality of sustainable tourism development and the multicriteria issues of sustainable development are analyzed. Decision Support Systems (DSS) and Spatial Decision Support Systems (SDSS) are discussed as effective techniques for examining and visualizing the impacts of policies and sustainable tourism development strategies within an integrated and dynamic framework. Main modules that may be utilized for the integrated modeling of sustainable tourism development are proposed.

O. A. Shcherbina, E. A. Shembeleva

PI-ping - Benchmark Tool for Testing Latencies and Throughput in Operating Systems

In this paper we present a benchmark tool called PI-ping that can be used to compare the real-time performance of operating systems. It uses the two types of processes that are common in operating systems – interactive tasks demanding low latencies and processes demanding high CPU utilization. Most operating systems have to perform well under both conditions, and the goal is to achieve the highest throughput while keeping latencies within a reasonable interval. PI-ping measures the latencies of an interactive process when the system is under heavy computational load. Using the PI-ping benchmark tool we are able to compare different operating systems, and we attest its functionality using two very common operating systems – Linux and FreeBSD.
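A rough sketch of the measurement idea (this is not the actual PI-ping tool; thread-based background load and the sampling parameters are invented): record how much longer an "interactive" task sleeps than requested while CPU-bound work runs in the background.

```python
import threading
import time

def cpu_hog(stop):
    """Background load: spin on arithmetic until told to stop."""
    x = 0
    while not stop.is_set():
        x = (x * 1103515245 + 12345) % 2**31

def measure_latencies(n=50, period=0.002):
    """Request `period`-second sleeps; record the extra time actually slept."""
    lat = []
    for _ in range(n):
        t0 = time.monotonic()
        time.sleep(period)
        lat.append(time.monotonic() - t0 - period)   # scheduling latency
    return lat

stop = threading.Event()
hog = threading.Thread(target=cpu_hog, args=(stop,), daemon=True)
hog.start()
latencies = measure_latencies()
stop.set()
hog.join()
worst = max(latencies)   # worst-case wake-up latency under load
```

The real benchmark uses separate processes and compares the latency distribution against the throughput the CPU-bound work achieves.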

J. Abaffy, T. Krajčovič

Towards Archetypes-Based Software Development

We present a framework for the archetypes based engineering of domains, requirements and software (Archetypes-Based Software Development, ABD). An archetype is defined as a primordial object that occurs consistently and universally in business domains and in business software systems. An archetype pattern is a collaboration of archetypes. Archetypes and archetype patterns are used to capture conceptual information into domain specific models that are utilized by ABD. The focus of ABD is on software factories - family-based development artefacts (domain specific languages, patterns, frameworks, tools, micro processes, and others) that can be used to build the family members. We demonstrate the usage of ABD for developing laboratory information management system (LIMS) software for the Clinical and Biomedical Proteomics Group, at the Leeds Institute of Molecular Medicine, University of Leeds.

Gunnar Piho, Mart Roost, David Perkins, Jaak Tepandi

Dependability Aspects Regarding the Cache Level of a Memory Hierarchy using Hamming Codes

In this paper we apply a SEC-DED code to the cache level of a memory hierarchy. From the category of SEC-DED (Single Error Correction, Double Error Detection) codes we select the Hamming code. For the correction of single-bit errors we use a syndrome decoder, a syndrome generator and a check-bit generator circuit.
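An illustrative software sketch of the scheme (not the authors' hardware circuit): a Hamming(7,4) code extended with an overall parity bit gives SEC-DED, with the syndrome pointing at the erroneous bit position.

```python
# Hamming(8,4) SEC-DED: check-bit generation, syndrome decoding,
# single-error correction and double-error detection.

def encode(d):
    """d: 4 data bits [d1, d2, d3, d4] -> 8-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                 # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4                 # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4                 # covers positions 4, 5, 6, 7
    code = [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7
    p0 = 0
    for b in code:                    # overall parity over the 7 bits
        p0 ^= b
    return code + [p0]

def decode(code):
    """Return (data, status), status in {'ok', 'corrected', 'double'}."""
    c = list(code[:7])
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 = none
    overall = 0
    for b in code:
        overall ^= b
    if syndrome == 0 and overall == 0:
        status = "ok"
    elif overall == 1:                # odd parity: single error, correctable
        if syndrome:
            c[syndrome - 1] ^= 1
        status = "corrected"
    else:                             # even parity but nonzero syndrome
        status = "double"
    return [c[2], c[4], c[5], c[6]], status

cw = encode([1, 0, 1, 1])
clean = decode(cw)                          # ([1, 0, 1, 1], 'ok')
flipped = list(cw); flipped[4] ^= 1
fixed = decode(flipped)                     # single error corrected
two = list(cw); two[0] ^= 1; two[1] ^= 1
bad = decode(two)                           # double error detected
```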

O. Novac, St. Vari-Kakas, Mihaela Novac, Ecaterina Vladu, Liliana Indrie

Performance Evaluation of an Intelligent Agents Based Model within Irregular WSN Topologies

There are many approaches proposed by the scientific community for the implementation and development of Wireless Sensor Networks (WSN). These approaches correspond to different areas of science, such as Electronics, Communications, Computing, Ubiquity, and Quality of Service, among others. However, all are subject to the same constraints, because of the nature of WSN devices. The most common constraints of a WSN are energy consumption, the organization of network nodes, the reprogramming of the sensor network’s tasks, reliability in data transmission, resource optimization (memory and processing), etc. In the Artificial Intelligence area, a distributed system approach with mobile intelligent agents has been proposed: an integration model of mobile intelligent agents within wireless sensor networks that solves some of the constraints on WSN topologies presented above. However, the model has only been tested on square topologies. The aim of this paper is therefore to evaluate the performance of this model on irregular topologies.

Alberto Piedrahita Ospina, Alcides Montoya Cañola, Demetrio Ovalle Carranza

Double Stage Heat Transformer Controlled by Flow Ratio

This paper shows the values of flow ratio (FR) for the control of an absorption double stage heat transformer. The main parameters of the heat pump system are defined as COP, FR and GTL. The control of the entire system is based on a new definition of FR. The heat balance of the Double Stage Heat Transformer (DSHT) is used for the control. The mass flow is calculated by an HPVEE program, and a second program controls the mass flow. The mass flow is controlled by gear pumps connected to a LabVIEW program. The results show an increment in the fraction of recovered energy. An example of oil distillation is used for the calculation. The waste heat energy is added to the system at 70 °C. A Water-Carrol™ mixture is used in the DSHT. The recovered energy is obtained in a second absorber at 128 °C under two scenarios.

S. Silva-Sotelo, R. J. Romero, A. Rodríguez – Martínez
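For reference, the three parameters named in the abstract above are usually defined as follows in the absorption heat transformer literature; these are the standard textbook definitions, and the paper's new control-oriented definition of FR may differ from them:

```latex
\mathrm{COP} = \frac{Q_{AB}}{Q_{GE} + Q_{EV} + W_{P}}, \qquad
\mathrm{FR}  = \frac{\dot{m}_{AB}}{\dot{m}_{WF}}, \qquad
\mathrm{GTL} = T_{AB} - T_{EV}
```

where \(Q_{GE}\), \(Q_{EV}\) and \(Q_{AB}\) are the generator, evaporator and absorber heat loads, \(W_{P}\) is the pump work, \(\dot{m}_{AB}\) is the solution flow leaving the absorber, \(\dot{m}_{WF}\) is the working-fluid flow, and the gross temperature lift (GTL) is the difference between absorber and evaporator temperatures.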

Enforcement of Privacy Policies over Multiple Online Social Networks for Collaborative Activities

Our goal is to develop an enforcement architecture for privacy policies over multiple online social networks, used to solve the problem of privacy protection when several social networks form permanent or temporary collaborations. In theory this idea is practical, especially because more and more social networks support the open-source framework OpenSocial. But, as we know, different social network websites may offer the same privacy policy settings on top of different enforcement mechanisms, which causes problems: we would have to manually write code for both sides to make the privacy policy settings enforceable, a huge workload given the number of current social networks. We therefore propose a middleware that automatically generates the privacy protection component for the permanent integration or temporary interaction of social networks. This middleware provides functions such as collecting the privacy policy of each participant in the new collaboration, generating a standard policy model for each participant, and mapping all those standard policies to the different enforcement mechanisms of the participants.

Zhengping Wu, Lifeng Wang
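The collect → standardize → map pipeline that the abstract describes can be sketched as follows. Everything here is a hypothetical illustration, not the authors' middleware API: `PolicyRule`, the translator and emitter callables, and the example settings are all invented for the sketch.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PolicyRule:
    """One rule of the standard (platform-neutral) policy model — hypothetical."""
    resource: str   # e.g. "photos"
    audience: str   # e.g. "friends", "public"


def to_standard_model(native_settings, translator):
    """Collect a participant's native privacy settings and lift them
    into the standard model via a platform-specific translator."""
    return {translator(key, value) for key, value in native_settings.items()}


def map_to_enforcement(rules, emitter):
    """Map standard rules down to one platform's enforcement mechanism;
    `emitter` produces that platform's native directives."""
    return [emitter(rule) for rule in sorted(rules, key=lambda r: r.resource)]


# Hypothetical platform A: a flat settings dict lifted to standard rules.
site_a = {"photos": "friends", "wall": "public"}
rules = to_standard_model(site_a, lambda k, v: PolicyRule(k, v))

# Hypothetical platform B: each standard rule becomes a directive string.
directives = map_to_enforcement(
    rules, lambda r: f"ALLOW {r.audience} ON {r.resource}")
```

The point of the intermediate `PolicyRule` layer is that only one translator and one emitter must be written per platform, instead of one converter per pair of platforms.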

An Estimation of Distribution Algorithms Applied to Sequence Pattern Mining

This paper presents an Estimation of Distribution Algorithm (EDA) for the extraction of sequential patterns from a database. It uses a probabilistic model based on graphs that represent the relations among the items forming a sequence. The model assigns probabilities to the items, allowing it to be adjusted during the execution of the algorithm through the EDA evolutionary process, optimizing the generation of candidate solutions and extracting an optimized set of sequential patterns.

Paulo Igor A. Godinho, Aruanda S. Gonçalves Meiguins, Roberto C. Limão de Oliveira, Bianchi S. Meiguins
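A minimal sketch of the idea, assuming a first-order item-transition graph as the probabilistic model (the paper's actual graph model is richer): sample candidate sequences from the graph, keep those whose support in the database clears a threshold, and refit the graph on the frequent candidates each generation.

```python
import random
from collections import defaultdict


def mine_sequences(database, n_candidates=50, n_generations=30,
                   min_support=0.3, max_len=4, seed=0):
    """EDA sketch: learn P(next item | current item), sample candidates,
    keep the frequent ones, refit the model on them."""
    rng = random.Random(seed)
    items = sorted({it for seq in database for it in seq})
    # Start with a uniform transition graph; "<s>" is a start marker.
    prob = {a: {b: 1.0 / len(items) for b in items} for a in ["<s>"] + items}

    def support(pattern):
        # Fraction of database sequences containing `pattern` as a subsequence.
        def contains(seq):
            i = 0
            for it in seq:
                if i < len(pattern) and it == pattern[i]:
                    i += 1
            return i == len(pattern)
        return sum(contains(s) for s in database) / len(database)

    def sample():
        seq, cur = [], "<s>"
        for _ in range(rng.randint(1, max_len)):
            nxt = rng.choices(list(prob[cur]),
                              weights=list(prob[cur].values()))[0]
            seq.append(nxt)
            cur = nxt
        return tuple(seq)

    best = set()
    for _ in range(n_generations):
        candidates = {sample() for _ in range(n_candidates)}
        frequent = {c for c in candidates if support(c) >= min_support}
        best |= frequent
        # Refit transition counts on the frequent candidates (0.1 = smoothing).
        counts = defaultdict(lambda: defaultdict(lambda: 0.1))
        for c in frequent:
            prev = "<s>"
            for it in c:
                counts[prev][it] += 1
                prev = it
        for a in prob:
            total = sum(counts[a][b] for b in items)
            prob[a] = {b: counts[a][b] / total for b in items}
    return best
```

Every returned pattern clears `min_support` by construction; the evolutionary refitting only biases sampling toward promising regions so frequent patterns are found with fewer candidates.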

Tlatoa Communicator

A Framework to Create Task-Independent Conversational Systems

This paper presents an approach to simplify the creation of spoken dialogue systems for information retrieval by separating out all the task-dependent information: vocabulary, relationships, restrictions, dialogue structure, the minimal set of elements needed to construct a query, and everything required by the Dialogue Manager (DM), Automatic Speech Recognition (ASR), Natural Language Processing (NLP), Natural Language Generation (NLG) and the application. The task-related information is kept in XML files, which each module can read depending on the kind of data it requires to function, providing task independence and making the design of spoken dialogue systems much easier.

D. Pérez, I. Kirschning

Using Multiple Datasets in Information Visualization Tool

The goal of this paper is to present an information visualization application capable of opening and synchronizing information between two or more datasets. We chose this approach to address some of the limitations of existing applications. The application uses multiple coordinated views over multiple simultaneous datasets. We highlight the user-configurable application layout, including the flexibility to specify the number of data views and to associate a different dataset with each visualization technique.

Rodrigo Augusto de Moraes Lourenço, Rafael Veras Guimarães, Nikolas Jorge S. Carneiro, Aruanda Simões Gonçalves Meiguins, Bianchi Serique Meiguins

Improved Crack Type Classification Neural Network based on Square Sub-images of Pavement Surface

A previous neural network based on proximity values was developed using rectangular pavement images. However, the proximity value derived from a rectangular image was biased towards transverse cracking. By sectioning the rectangular image into a set of square sub-images, the neural network based on the proximity value becomes more robust and consistent in determining the crack type. This paper presents an improved neural network that determines the crack type of a pavement surface image from square sub-images, and compares it against the network trained on rectangular pavement images. The advantage of using square sub-images is demonstrated with sample images of transverse, longitudinal and alligator cracking.

Byoung Jik Lee, Hosin “David” Lee
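The sectioning step described above can be sketched with NumPy; the tile size is an arbitrary example, and the proximity-value computation and the network itself are the paper's own contribution and are not reproduced here.

```python
import numpy as np


def to_square_subimages(image, tile):
    """Split a rectangular grayscale image (H x W array) into square
    tile x tile sub-images, cropping any remainder at the edges."""
    h, w = image.shape
    rows, cols = h // tile, w // tile
    cropped = image[:rows * tile, :cols * tile]
    # Reshape into a (rows*cols, tile, tile) stack of square sub-images,
    # ordered row-major over the tile grid.
    tiles = cropped.reshape(rows, tile, cols, tile).swapaxes(1, 2)
    return tiles.reshape(-1, tile, tile)
```

Each square sub-image can then be fed to the feature extraction and classifier independently, which is what removes the rectangular image's bias toward transverse cracking.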

Building Information Modeling as a Tool for the Design of Airports

Building Information Modeling (BIM) has obvious implications for the process of architectural design and construction at the present stage of technological development. However, BIM has rarely been rigorously assessed, and its benefits are often described in generic terms. In this paper we describe an experiment in which some of these benefits are identified by comparing two design processes for the same airport building, one run in an older CAD system and the other in a BIM-based approach. The practical advantages of BIM for airport design prove considerable.

Júlio Tollendal Gomes Ribeiro, Neander Furtado Silva, Ecilamar Maciel Lima

A Petri-Nets Based Unified Modeling Approach for Zachman Framework Cells

As enterprises become more and more information based, they constantly attempt to surpass each other's accomplishments by improving their information activities. In this respect, Enterprise Architecture (EA) has proven to be a fundamental concept for accomplishing this goal. Enterprise architecture provides a thorough outline of all enterprise applications and systems and their relationships to enterprise business goals. To establish such an outline, a logical framework needs to be laid over the entire information system, called an Enterprise Architecture Framework (EAF). Among the various proposed EAFs, the Zachman Framework (ZF) has been widely accepted as a standard scheme for identifying and organizing the descriptive representations that play critical roles in enterprise management and development. One of the problems faced in using ZF is the lack of formal and verifiable models for its cells. In this paper, we propose a formal language based on Petri nets to obtain verifiable models for all cells in ZF. The presented method helps developers validate and verify completely integrated business and IT systems, which in turn improves the effectiveness and efficiency of the enterprise itself.

S. Shervin Ostadzadeh, Mohammad Ali Nekoui
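To illustrate the kind of executable, verifiable model Petri nets provide for a ZF cell, here is a minimal place/transition net; the order-handling fragment at the bottom is an entirely hypothetical example, not taken from the paper.

```python
class PetriNet:
    """Minimal place/transition net: places hold token counts,
    transitions consume tokens from input places and produce to outputs."""

    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        # inputs/outputs: dicts of place -> arc weight
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n


# Hypothetical fragment of a business-process cell: shipping an order.
net = PetriNet({"order_received": 1, "stock": 2})
net.add_transition("ship",
                   {"order_received": 1, "stock": 1},
                   {"order_shipped": 1})
net.fire("ship")
```

Because the marking evolves by explicit firing rules, properties such as reachability or deadlock freedom of a modeled cell can be checked mechanically, which is the verifiability the paper is after.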

From Perspectiva Artificialis to Cyberspace: Game-Engine and the Interactive Visualization of Natural Light in the Interior of the Building

To support the early stages of conceptual design, architects have throughout the years used mockups (scaled physical models) or perspective drawings intended to predict the architectural ambience before its effective construction. This paper studies real-time interactive visualization, focused on one of the most important aspects of building space: natural light. However, most existing physically based algorithms were designed for the synthesis of static images and do not take into account how to rebuild the scene, in real time, when the user experiments with changing certain design properties. In this paper we show a possible solution to this problem.

Evangelos Dimitrios Christakou, Neander Furtado Silva, Ecilamar Maciel Lima

Computational Shape Grammars and Non-Standardization: a Case Study on the City of Music of Rio de Janeiro

This paper shows how shape grammars can be applied to the analysis of new types of architecture, through a case study of the City of Music of Rio de Janeiro project by Christian Portzamparc. It aims to show that shape grammars can still be constructed from designs that were created with the purpose of avoiding standardization.

Félix A. Silva Júnior, Neander Furtado Silva

Architecture Models and Data Flows in Local and Group Datawarehouses

Architecture models and possible data flows for local and group data warehouses are presented, together with some data processing models. The architecture models consist of several layers and the data flows between them. The chosen architecture of a data warehouse depends on the type and volume of the source data, and influences the analysis, data mining and reporting done on the data from the DWH.

R.M. Bogza, Dorin Zaharie, Silvia Avasilcai, Laura Bacali

