2010 | Book

Networked Digital Technologies

Second International Conference, NDT 2010, Prague, Czech Republic, July 7-9, 2010. Proceedings, Part I

Editors: Filip Zavoral, Jakub Yaghob, Pit Pichappan, Eyas El-Qawasmeh

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science

About this book

On behalf of the NDT 2010 conference, the Program Committee and Charles University in Prague, Czech Republic, we welcome you to the proceedings of the Second International Conference on ‘Networked Digital Technologies’ (NDT 2010). The NDT 2010 conference explored new advances in digital and Web technology applications. It brought together researchers from various areas of computer and information sciences who addressed both theoretical and applied aspects of Web technology and Internet applications. We hope that the discussions and exchange of ideas that took place will contribute to advancements in the technology in the near future. The conference received 216 papers, of which 85 were accepted, resulting in an acceptance rate of 39%. The accepted papers are authored by researchers from 34 countries and cover many significant areas of Web applications. Each paper was evaluated by a minimum of two reviewers. Finally, we believe that the proceedings document the best research in the studied areas. We express our thanks to Charles University in Prague, Springer, the authors, and the organizers of the conference.

Table of Contents

Frontmatter

Information and Data Management

A New Approach for Fingerprint Matching Using Logic Synthesis

In this study, a new approach based on logic synthesis for matching feature points from fingerprint images is developed and introduced. For fingerprint matching we propose a logic minimization method that makes the matching process simple and fast. We believe that logic minimization can serve as a reliable method for fingerprint recognition.

Fatih Başçiftçi, Celal Karaca
Extracting Fuzzy Rules to Classify Motor Imagery Based on a Neural Network with Weighted Fuzzy Membership Functions

This paper presents a methodology for classifying motor imagery by extracting fuzzy rules with a neural network with weighted fuzzy membership functions (NEWFM), using twenty-four wavelet-based input features. The classification proceeds in three steps. In the first step, a wavelet transform is performed to filter noise from the signals. In the second step, twenty-four input features are extracted from the filtered signals by wavelet-based feature extraction. In the final step, NEWFM classifies motor imagery using the twenty-four input features extracted in the second step, which are also used to generate the fuzzy rules. NEWFM is tested on the Graz BCI datasets used in the BCI Competitions of 2003, achieving an accuracy of 83.51%.

Sang-Hong Lee, Joon S. Lim, Dong-Kun Shin
Distributed Data-Mining in the LISp-Miner System Using Techila Grid

Distributed data mining opens new possibilities for answering the ever more complex analytical questions that owners of databases want to ask. Even highly optimized data-mining algorithms refined over many decades of research have their limits, and the growing complexity of tasks exceeds the computing power of a single PC; it is becoming difficult to obtain results in a time acceptable for interactive work. Moreover, new research goals require running many tasks iteratively to automate the whole KDD process. There is therefore a need to speed up the solving of every single task as much as possible. This article describes a newly implemented algorithm that divides the whole task into sub-tasks solved in parallel on grid nodes. The first results and possible further improvements are presented.

Milan Šimůnek, Teppo Tammisto
Non-negative Matrix Factorization on GPU

Today, the need to process large data collections is increasing. Such data can have very high dimensionality and hidden relationships. Analyzing this type of data directly is prone to errors and noise; therefore, dimension reduction techniques are applied. Many reduction techniques have been developed, e.g., SVD, SDD, PCA, ICA and NMF. The main advantage of non-negative matrix factorization (NMF) is its processing of non-negative values, which are easily interpretable as images, but applications can be found in other areas as well. Both data analysis and dimension reduction methods need a lot of computational power. Nowadays, many algorithms are rewritten to utilize the GPU, because GPUs offer a massively parallel architecture and a very good ratio between performance and price. This paper introduces the computation of NMF on the GPU using CUDA technology.
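
As background, the multiplicative-update NMF that such GPU ports typically start from can be sketched on the CPU in a few lines (a generic Lee-Seung sketch, not the paper's CUDA implementation; the matrix, rank and iteration count are illustrative):

```python
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    """Lee-Seung multiplicative updates: factor non-negative V (m x n)
    into non-negative W (m x r) and H (r x n) minimizing ||V - WH||_F."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H

V = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])   # rank-1, non-negative
W, H = nmf(V, r=1)
print(np.abs(V - W @ H).max())   # small reconstruction error
```

Each update is dominated by dense matrix products, which is exactly why the method maps well onto GPU hardware.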

Jan Platoš, Petr Gajdoš, Pavel Krömer, Václav Snášel
Chatbot Enhanced Algorithms: A Case Study on Implementation in Bahasa Malaysia Human Language

The chatbot is one of the technologies that tried to address the question posed to the computer science field in 1950: “Can machines think?” [6]. Proposed by the mathematician Alan Turing, the question later became a pinnacle reference for researchers in the artificial intelligence discipline. Turing also introduced “The Imitation Game”, now known as the “Turing Test”, whose idea is to examine whether a machine can fool a judge into thinking that they are having a conversation with an actual human. The technology back then was impressive, but with the rapid evolution of computer science it can become even better. Evolution in computer scripting languages, application design models, and so on clearly has its advantages in enabling more complex features when developing a computer program. In this paper, we propose an enhanced chatbot algorithm that takes advantage of the relational database model to design the whole chatbot architecture, enabling several features that could not, or could only with difficulty, be achieved with earlier programming techniques. We start with a review of a previously developed chatbot, followed by a detailed description of each new enhanced algorithm, together with testing and results from the implementation of these algorithms, which can be used in the development of a modern chatbot. These new algorithms enable features that extend the chatbot's capabilities in responding to conversation. The algorithms were implemented in the design and development of a chatbot that deals specifically with the Bahasa Malaysia language, but since language in a chatbot is really about the data in the chatbot's knowledge base, the algorithms appear transferable to any other human language.

Abbas Saliimi Lokman, Jasni Mohamad Zain
Handwritten Digits Recognition Based on Swarm Optimization Methods

In this paper, the problem of handwritten digit recognition is addressed using swarm-based optimization methods, which have been shown to be useful for a wide range of applications such as function optimization. The proposed work applies specific swarm-based optimization methods, namely the particle swarm optimizer and variations of bee colony optimization, to handwritten Arabic numeral recognition, improving the generalization ability of a recognition system through two alternatives. In the first, swarm-based methods are used as statistical classifiers, whereas in the second a combination of the well-known gradient descent back-propagation learning method and the bees algorithm is proposed to achieve better accuracy and speed. A comparative study on a variety of handwritten digits has shown that high recognition rates (99.82%) are obtained.
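
For readers unfamiliar with the particle swarm optimizer, its core loop can be sketched on a toy objective (a generic sketch, not the paper's classifier; all parameter values are illustrative):

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer: each particle remembers its own
    best position, and the swarm shares a global best."""
    random.seed(1)
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: minimum 0 at the origin.
best, val = pso(lambda x: sum(v * v for v in x), dim=3)
print(val)   # close to 0
```

In a classifier setting, the position vector would encode class prototypes or network weights, and `f` would be the classification error on a training set.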

Salima Nebti, Abdellah Boukerram
A Framework of Dashboard System for Higher Education Using Graph-Based Visualization Technique

We propose a novel knowledge visualization method that adopts a graph-based visualization technique and incorporates the dashboard concept for higher education institutions. Two aspects are emphasized: knowledge visualization and human-machine interaction. The knowledge visualization helps users analyze the comprehensive characteristics of students, lecturers and subjects after the clustering process, and the interaction enables domain knowledge transfer and the use of human perceptual capabilities, thus increasing the intelligence of the system. The knowledge visualization is enhanced through the dashboard concept, which provides significant patterns of knowledge about real-world and theoretical modeling, which could be called wisdom. The framework, consisting of the dashboard model, system architecture and a system prototype for the higher education environment, is presented in this paper.

Wan Maseri Binti Wan Mohd, Abdullah Embong, Jasni Mohd Zain
An Efficient Indexing and Compressing Scheme for XML Query Processing

Due to the wide-spread deployment of business-to-business (B2B) e-commerce, XML has become the standard format for data exchange over the Internet. How to process XML queries efficiently is an important research issue. Various indexing techniques have been proposed in the literature. However, they suffer to various degrees from some of the following problems. First, some indexing methods require huge index structures, which can be bigger than the original XML document in some cases. Second, some of them require long index construction times to minimize the size of the index structures. Third, some of them cannot support complex queries efficiently. To overcome these problems, we propose an indexing method called NCIM (Node Clustering Indexing Method). The experimental results show that NCIM can compress XML documents with a high compression rate and low index construction time. It also supports complex queries efficiently.

I-En Liao, Wen-Chiao Hsu, Yu-Lin Chen
Development of a New Compression Scheme

Huffman coding is a simple lossless data compression technique that tries to take advantage of entropy by using variable-length encoding to build a code table for encoding a source file's symbols. In this paper, we revisit Huffman coding and enhance it by considering the second-order form of characters rather than the first-order form. Results showed that using the second-order form improves the compression ratio by around 8% over the existing Huffman coding.
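
The first-order versus second-order idea can be illustrated by comparing Huffman costs over single characters versus character pairs (an editorial sketch, not the authors' implementation; the sample text is arbitrary):

```python
import heapq
from collections import Counter

def huffman_cost_bits(freqs):
    """Total encoded size in bits: the sum of merged weights produced while
    building the Huffman tree equals the weighted code length."""
    heap = list(freqs.values())
    heapq.heapify(heap)
    if len(heap) == 1:
        return heap[0]              # degenerate: a single symbol, 1 bit each
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total

text = "this is an example of a second order huffman comparison" * 20
first_order = huffman_cost_bits(Counter(text))              # single characters
pairs = [text[i:i + 2] for i in range(0, len(text) - 1, 2)]
second_order = huffman_cost_bits(Counter(pairs))            # character pairs
print(first_order, second_order)   # the pair-based total is smaller here
```

Pairs capture inter-character correlation that single-character codes cannot, which is the source of the reported gain.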

Eyas El-Qawasmeh, Ahmed Mansour, Mohammad Al-Towiq
Compression of Layered Documents

Bitmap documents are often a superposition of graphic and textual content together with pictures. Simard et al. [1] showed that compression performance on these documents can be improved when we separate images from text and apply different compression algorithms to these different types of data. In this paper we study layered image compression via a new software tool.

Bruno Carpentieri
Classifier Hypothesis Generation Using Visual Analysis Methods

Classifiers can be used to automatically dispatch the abundance of newly created documents to recipients interested in particular topics. Identification of adequate training examples is essential for classification performance, but it may prove to be a challenging task in large document repositories. We propose a classifier hypothesis generation method relying on automated analysis and information visualisation. In our approach visualisations are used to explore the document sets and to inspect the results of machine learning methods, allowing the user to assess the classifier performance and adapt the classifier by gradually refining the training set.

Christin Seifert, Vedran Sabol, Michael Granitzer
Exploiting Punctuations along with Sliding Windows to Optimize STREAM Data Manager

Data stream processing has found enormous use in real-time applications such as financial tickers, network and sensor monitoring, manufacturing processes and others. STREAM is a Data Stream Management System which implements a sliding window query model in its architecture. We explore the existing architectural model in an effort to achieve faster results and improve the overall system functionality. In this paper, we analyze the STREAM data manager for possible optimization of query processing. We discuss the potential inadequacies of the current query model and put forward the use of punctuations along with sliding windows to improve memory utilization and query processing.

Lokesh Tiwari, Hamid Shahnasser
A Framework for In-House Prediction Markets

In-house prediction markets are a new method for collecting and aggregating information dispersed throughout an organization. This method is capable of accessing and aggregating certain organizational information that has previously not been attainable via traditional methods such as surveys, polls, group meetings, or suggestion boxes. Such information is often of great tactical and/or strategic value. Existing in-house prediction markets, which are opened either to all members of the organization or to pre-selected groups of experts within the organization, base a participant's power to influence the market strictly on the amount of their assets (usually in mock currency). We propose a more nuanced design approach that considers additional factors for determining the participant's influence on the market over the long term. The goal of this design approach is to improve the accuracy and decision-support viability of in-house prediction markets.

Miguel Velacso, Nenad Jukic
Road Region Extraction Based on Motion Information and Seeded Region Growing for Foreground Detection

This paper proposes a road region extraction method based on the motion information of foreground objects and the seeded region growing (SRG) algorithm. By learning on a training set of a scene over a period of time, we obtain the trajectories of moving objects, then use the SRG algorithm, with the trajectory as the seed, to extract the road region. As a result, instead of detecting foreground objects in the conventional pixel-by-pixel manner, detection can be performed mainly on or near the pixels of the road region, which facilitates and accelerates foreground detection. In addition, the regions outside the road region do not need to be transmitted most of the time in visual communication. Experimental results demonstrate the accuracy and usefulness of the proposed method.
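
The SRG step can be illustrated with a minimal sketch (an editorial illustration, not the paper's implementation; the grid, seed and threshold are arbitrary):

```python
from collections import deque

def seeded_region_grow(img, seed, thresh):
    """BFS seeded region growing on a 2-D intensity grid: include
    4-neighbours whose intensity is within `thresh` of the seed value."""
    h, w = len(img), len(img[0])
    sv = img[seed[0]][seed[1]]
    region = {seed}
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(img[nr][nc] - sv) <= thresh):
                region.add((nr, nc))
                q.append((nr, nc))
    return region

img = [[10, 10, 50],
       [10, 12, 50],
       [50, 50, 50]]
print(sorted(seeded_region_grow(img, (0, 0), thresh=5)))
# [(0, 0), (0, 1), (1, 0), (1, 1)]
```

In the paper's setting, the seed would come from the learned trajectory of moving objects rather than being chosen by hand.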

Hongwu Qin, Jasni Mohamad Zain, Xiuqin Ma, Tao Hai
Process Mining Approach to Promote Business Intelligence in Iranian Detectives’ Police

Most business processes leave their “footprints” in transactional information systems, i.e., business process events are recorded in so-called event logs on enterprise systems. These logs can be used as a lead in police investigations. The goal of process mining is to provide techniques and tools for discovering process, control, data, organizational, and social structures from event logs, and its basic idea is to diagnose business processes, here in order to extend the Detectives' Police knowledge of computer crimes. In this paper we focus on the potential use of process mining techniques to enable the Iranian Detectives' Police to discover grounds of crime. This application provides a new view of police investigation. The paper explains how process mining can assist in monitoring enterprise systems.

Mehdi Ghazanfari, Mohammad Fathian, Mostafa Jafari, Saeed Rouhani
Copyright Protection of Relational Database Systems

Due to the increasing use of databases in many real-life applications, database watermarking has been suggested lately as a vital technique for copyright protection of databases. In this paper, we propose an efficient database watermarking algorithm based on inserting a binary image watermark in Arabic character attributes. Experimental results demonstrate the robustness of the proposed algorithm against common database attacks.

Ali Al-Haj, Ashraf Odeh, Shadi Masadeh
Resolving Semantic Interoperability Challenges in XML Schema Matching

XML (Extensible Markup Language) Schema is a text-based description of an XML document which defines the valid format of an XML dataset. Nowadays, XML has become the most popular method of encoding structured data for information exchange. The layering technology in XML allows it to be used independently of standardized representations and lets disparate Information Systems (ISs) interoperate in exchanging valuable information. However, these distributed ISs are often structured in multiple data formats and with different semantic meanings. The heterogeneous and distributed nature of ISs eventually gives rise to the problem of semantic conflicts. This paper proposes rules focusing on word similarity and word relatedness using WordNet that not only reconcile semantic conflicts but also provide integrated access to XML data by mapping dispersed electronic data that is semantically matched. Although the proposed approach is rather generic, this paper focuses on one particular domain, the medical domain.

Chiw Yi Lee, Hamidah Ibrahim, Mohamed Othman, Razali Yaakob
Some Results in Bipolar-Valued Fuzzy BCK/BCI-Algebras

In this note, by using the concept of a bipolar-valued fuzzy set, the notion of a bipolar-valued fuzzy BCK/BCI-algebra is introduced. Moreover, the notions of the (strong) negative s-cut and the (strong) positive t-cut are introduced, and the relationship between these notions and crisp subalgebras is studied.

A. Borumand Saeid, M. Kuchaki Rafsanjani

Security

The Effect of Attentiveness on Information Security

This paper presents a brief overview of a larger study on the impact of attentiveness, in addition to other human factors, on information security in both private and government organizations in Saudi Arabia. The aim of the initial experiment was to sense the existence of attentiveness in relation to information security; the results were encouraging enough to proceed to a larger-scale experiment with additional human factors such as awareness, workload, passwords, etc. We believe this to be the first study to investigate the effect of attentiveness, as a part of the Saudi cultural context, on organizational information security.

Adeeb M. Alhomoud
A Secured Mobile Payment Model for Developing Markets

The evolution of mobile payments continues to take place at great speed, with changing business models and technology schemes. With the omnipresent availability of mobile phones and other mobile devices, mobile payment presents an investment opportunity in developing markets. However, the success of mobile payment depends on the security of the underlying technology and the associated business model. In this paper, a mobile payment model for developing markets is discussed. This model comprises features that can accommodate secure local payments in remote locations with limited network coverage.

Bossi Masamila, Fredrick Mtenzi, Jafari Said, Rose Tinabo
Security Mapping to Enhance Matching Fine-Grained Security Policies

In the heterogeneous environment of Web services, it is common that the data processed by a service consumer and a service provider exhibit several syntactic heterogeneities, such as data name heterogeneity and data structure heterogeneity. However, current approaches to security policy (SP) matching do not consider such heterogeneities that may exist between the protection scopes of fine-grained SPs. In this paper, we show how this can lead to wrong matching results and propose a security mapping approach to enhance the correctness of matching results when dealing with fine-grained SPs.

Monia Ben Brahim, Maher Ben Jemaa, Mohamed Jmaiel
Implementation and Evaluation of Fast Parallel Packet Filters on a Cell Processor

Packet filters are essential in most areas of recent information network technologies. While high-end, expensive routers and firewalls are implemented in hardware, flexible and cost-effective ones are usually software solutions running on general-purpose CPUs, with lower performance. The authors have previously studied methods of applying code optimization techniques to packet filters executing on a single-core processor. In this paper, by utilizing the multi-core Cell Broadband Engine processor with software pipelining, we construct a parallelized and SIMDized packet filter 40 times faster than the naive C program filter executed on a single core.

Yoshiyuki Yamashita, Masato Tsuru
On the Algebraic Expression of the AES S-Box Like S-Boxes

In the literature, there are several proposed block ciphers, such as AES, Square, Shark and Hierocrypt, which use S-boxes based on an inversion mapping over a finite field. Because of the simple algebraic structure of S-boxes generated in this way, these ciphers usually apply a bitwise affine transformation after the inversion mapping. In some ciphers, like Camellia, an additional affine transformation is used before the input of the S-box as well. In this paper, we study algebraic expressions of S-boxes based on power mappings with the aid of finite field theory and show that the number of terms in the algebraic expression of an S-box based on power mappings changes according to where an affine transformation is added. Moreover, a new method is presented to resolve the algebraic expression of AES S-box-like S-boxes according to the three given probable cases.

M. Tolga Sakallı, Bora Aslan, Ercan Buluş, Andac Şahin Mesut, Fatma Büyüksaraçoğlu, Osman Karaahmetoğlu
Student’s Polls for Teaching Quality Evaluation as an Electronic Voting System

The problems of electronic voting (e-voting) systems are commonly discussed in the context of general elections. The main problems of e-voting relate to system security and user anonymity. System security covers cryptographic security, user authorization, limited access and protection against fraud. Anonymity is another important issue, because a guarantee that voters are anonymous is reflected in the reliability of the cast votes. Authorization and anonymity seem to be contradictory, but it is possible to separate the two procedures. The problems of polls for teaching quality evaluation are similar: the polls need to be available only to authorized students, but they also need to be filled in anonymously. Three solutions to this problem in a remote voting system are discussed in the paper.

Marcin Kucharczyk
An Improved Estimation of the RSA Quantum Breaking Success Rate

The security of the RSA cryptosystem is based on the assumption that factorization is a difficult problem from the number-theoretic point of view. But that assumption does not hold with regard to quantum computers, where massive parallelization of computations leads to a qualitative speedup. Shor's quantum factorization algorithm is one of the most famous algorithms ever proposed. The algorithm has polynomial time complexity but is of a probabilistic nature: it succeeds only when a random parameter fed to the algorithm's input has the desired properties. It is well known that such parameters are found with probability not less than 1/2. However, the numerical simulations described in this paper show that the probability of such an event exhibits grouping at discrete levels above that limit. Thus, one may conclude that use of the common bound leads to underestimation of the successful factorization probability. Empirical formulas for the expected success probability introduced in the paper give rise to a more profound analysis of the behaviour of the classical part of Shor's algorithm. The observed grouping still awaits an explanation based on number theory.
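
The classical success condition behind that 1/2 bound can be checked empirically with a small simulation (an editorial sketch for tiny moduli; the trial counts and moduli are illustrative):

```python
from math import gcd
from random import randrange, seed

def order(a, n):
    """Multiplicative order of a modulo n (brute force; tiny n only)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_success_rate(n, trials=2000):
    """Estimate the fraction of random bases a for which the classical
    post-processing of Shor's algorithm yields a non-trivial factor of n:
    either gcd(a, n) > 1, or the order r is even with a^(r/2) != -1 (mod n)."""
    good = 0
    for _ in range(trials):
        a = randrange(2, n - 1)
        if gcd(a, n) != 1:
            good += 1        # lucky draw: the gcd alone reveals a factor
            continue
        r = order(a, n)
        if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
            good += 1
    return good / trials

seed(0)
print(shor_classical_success_rate(15))   # N = 3 * 5
print(shor_classical_success_rate(21))   # N = 3 * 7; both well above 1/2
```

For these small semiprimes the measured rate sits well above 1/2, consistent with the paper's observation that the common bound is pessimistic.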

Piotr Zawadzki
Mining Bluetooth Attacks in Smart Phones

The Bluetooth port of smart phones is subject to the threat of Bluesnarfing, Bluejacking and Bluebugging attacks. In this paper, we aim to mine these attacks. Having explained the properties of the three attack types, we state three typical cases of them: SMS manipulation, phone book manipulation, and phone call initiation. According to the characteristics of each attack type, we model the attacks using Colored Petri Nets and then mine the models. To show the correctness of our models, we verify their liveness, fairness, and boundedness properties. Finally, we mine the models to analyze the attacks.

Seyed Morteza Babamir, Reyhane Nowrouzi, Hadi Naseri
Users’ Acceptance of Secure Biometrics Authentication System: Reliability and Validity of an Extended UTAUT Model

This paper presents current findings from cross-cultural studies investigating the adoption of a new secure technology, based on fingerprint authentication systems to be applied to e-commerce websites, within the perspective of Saudi culture. The aim of the study was to explore the factors affecting users' acceptance of biometric authentication systems. A large-scale laboratory experiment with 306 Saudis was performed using a login fingerprint system to observe whether Saudis are practically and culturally enthusiastic about accepting this technology. The findings were then examined for reliability and validity by applying a proposed conceptual framework based on the Unified Theory of Acceptance and Use of Technology (UTAUT) with three moderating variables: age, gender and education level. The study model was adapted to meet the objectives of this study while adding other intrinsic factors such as self-efficacy and biometric system characteristics.

Fahad AL-Harby, Rami Qahwaji, Mumtaz Kamala
Two Dimensional Labelled Security Model with Partially Trusted Subjects and Its Enforcement Using SELinux DTE Mechanism

Personal computers are often used in small office and home environments for a wide range of purposes, from general web browsing and e-mail processing to processing data that are sensitive with regard to their confidentiality and/or integrity. The discretionary access control mechanism implemented in common general-purpose operating systems is insufficient to protect the confidentiality and/or integrity of data against malicious or misbehaving applications running on behalf of a user authorized to access the data.

We present a security model, based on the Bell-La Padula and Biba models, that provides both confidentiality and integrity protection, and that uses a notion of partially trusted subjects to limit the level of trust given to processes that need to pass information in the normally forbidden direction. We discuss a way to enforce the model's policy using the SELinux mechanism present in current Linux kernels.
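
The combined read/write rules of such a two-dimensional label model (without the partial-trust extension) can be sketched as follows (an editorial illustration; the level values and names are hypothetical):

```python
from collections import namedtuple

# A two-dimensional label: a confidentiality level and an integrity level.
Label = namedtuple("Label", "conf integ")

def can_read(subject, obj):
    """Bell-La Padula: no read up (subject.conf >= obj.conf);
    Biba: no read down (subject.integ <= obj.integ)."""
    return subject.conf >= obj.conf and subject.integ <= obj.integ

def can_write(subject, obj):
    """Bell-La Padula: no write down (subject.conf <= obj.conf);
    Biba: no write up (subject.integ >= obj.integ)."""
    return subject.conf <= obj.conf and subject.integ >= obj.integ

user = Label(conf=1, integ=1)
secret_doc = Label(conf=2, integ=1)
print(can_read(user, secret_doc), can_write(user, secret_doc))
# False True
```

Partially trusted subjects relax exactly these checks in one direction, under constraints, which is what the paper's SELinux DTE configuration has to express.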

Jaroslav Janáček
A Roaming-Based Anonymous Authentication Scheme in Multi-domains Vehicular Networks

In vehicular networks, a vehicular user can communicate with peer vehicles or connect to the Internet. A vehicular user may move across multiple access points belonging to either their home network domain or foreign network domains. In some cases, the real identity or location privacy may be disclosed or traced by malicious attackers. This poses privacy challenges to current vehicular networks. On the other hand, when a vehicular user drives into a foreign network, the foreign server has no way to authenticate a vehicular user who is not its member. A roaming concept can be used to solve this problem. In this paper, we propose a privacy-preserving authentication scheme for multi-domain vehicular networks. Our scheme considers three kinds of situations in which vehicular users communicate with the home network or foreign networks and gives approaches for them by combining a polynomial-pool-based key distribution scheme with roaming technology. The proposed authentication protocols are designed to preserve the vehicular user's real identity and location privacy. A security analysis and comparisons with previous works are also discussed.

Chih-Hung Wang, Po-Chin Lee
Human Authentication Using FingerIris Algorithm Based on Statistical Approach

Biometrics has nowadays become a strong tool for authenticating persons, because it is able to prove a true identity. Research shows that different applications are used in verification; fingerprint, face and iris recognition are some examples. But most of them suffer from false rejections (FRR) and false acceptances (FAR), so more research and new algorithms need to be developed to solve this problem. This paper presents a system and algorithm that uses a pair of biometric prints (fingerprint and iris) to gain access to personal resources, based on a statistical approach. Features are extracted and used to authenticate persons. The paper shows that the developed system solves the mentioned problem and accelerates the matching process.

Ahmed B. Elmadani
Aerial Threat Perception Architecture Using Data Mining

This paper presents a design framework based on a centralized scalable architecture for effective simulated aerial threat perception. In this framework data mining and pattern classification techniques are incorporated. This paper focuses on effective prediction by relying on the knowledge base and finding patterns for building the decision trees. This framework is flexibly designed to seamlessly integrate with other applications.

The results show the effectiveness of the selected algorithms and suggest that the more parameters are incorporated into the decision making for aerial threats, the better the confidence level in the results. To arrive at accurate target prediction we have to make decisions on multiple factors. Multiple techniques used together help in finding the accurate threat classification and result in better confidence in the results.

M. Anwar-ul-Haq, Asad Waqar Malik, Shoab A. Khan
Payload Encoding for Secure Extraction Process in Multiple Frequency Domain Steganography

In this paper, we use a new technique for payload encoding when hiding text in a bitmap image. The technique is based on using indexes into a dictionary representing the characters of the secret message instead of the characters themselves. The technique uses multiple frequency domains for embedding these indexes in an arbitrarily chosen bitmap image, using the discrete cosine transform (DCT), the discrete wavelet transform (DWT), and a combination of both. We tested the technique in a software package designed specially for this purpose and obtained very good results in terms of hiding capacity and imperceptibility, the two most important properties of steganography, as well as hiding time and security, especially through the new approach to payload encoding, which gives the technique powerful behavior on both the encoding and extracting sides. Imperceptibility is improved by increasing PSNR (between 106.58 and 122.28 dB); security is improved by using a stego-key and a secret message encrypted by the dictionary; capacity is improved by a factor of 1.3 by encoding the secret message characters with only 6 bits and embedding the secret message in all three RGB color components; and efficiency is improved by an embedding/extraction time measured in milliseconds (for example, 638 ms for the 256x256 Lena image in the worst case).
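
The 6-bit dictionary encoding that yields the capacity factor of about 1.3 (8/6) can be sketched as follows (an editorial illustration; the 64-symbol dictionary shown here is hypothetical, not the paper's):

```python
# Hypothetical 64-symbol dictionary: each secret-message character is
# replaced by its 6-bit index instead of its 8-bit ASCII code.
DICTIONARY = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 ."

def encode_payload(message):
    """Pack each character's 6-bit dictionary index into a bit string."""
    bits = ""
    for ch in message:
        idx = DICTIONARY.index(ch)   # raises ValueError if ch is not in the dictionary
        bits += format(idx, "06b")
    return bits

def decode_payload(bits):
    """Reverse: read 6 bits at a time and look the index up."""
    return "".join(DICTIONARY[int(bits[i:i + 6], 2)] for i in range(0, len(bits), 6))

msg = "hello world"
bits = encode_payload(msg)
print(len(bits), len(msg) * 8)   # 66 vs 88 bits: capacity factor 8/6 = 1.33
```

The resulting bit string, not the raw characters, is what gets embedded into the DCT/DWT coefficients.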

Raoof Smko, Abdelsalam Almarimi, K. Negrat
An Implementation of Digital Image Watermarking Based on Particle Swarm Optimization

The trade-off between imperceptibility and robustness is one of the main challenges in digital watermarking systems. To address it, a digital image watermarking implementation of an evolutionary algorithm in the discrete wavelet domain is presented. In the proposed scheme, the watermark is embedded in the vertical subband (HLm) coefficients in the wavelet domain. Furthermore, an algorithm based on Particle Swarm Optimization is used to train scaling factors to maximize the watermark strength while decreasing the visual distortion. Experimental results demonstrate the robustness and superiority of the proposed hybrid scheme.

Hai Tao, Jasni Mohamad Zain, Ahmed N. Abd Alla, Qin Hongwu
Genetic Cryptanalysis

In this work, an elitism-based Genetic Algorithm cryptanalysis of a basic substitution-permutation network is implemented. The GA cryptanalysis algorithm recovers all of the key bits. Results show the robustness of the proposed GA cryptanalysis algorithm.

Abdelwadood Mesleh, Bilal Zahran, Anwar Al-Abadi, Samer Hamed, Nawal Al-Zabin, Heba Bargouthi, Iman Maharmeh
Multiple Layer Reversible Images Watermarking Using Enhancement of Difference Expansion Techniques

This paper proposes a high-capacity reversible image watermarking scheme based on an enhancement of the difference-expansion method. Reversible watermarking enables the embedding of useful information in a host signal without any loss of host information. We propose an enhancement of the difference-expansion technique that embeds the payload recursively into multiple layers, for both grayscale and RGB color images, thereby increasing capacity considerably. The proposed technique improves the distortion performance at low embedding capacities and mitigates the capacity control problem. We also propose a reversible data-embedding technique with blind watermark detection. This technique exploits the selection of an optimum block size to implement the algorithm. Experimental results for many standard test images show that multilevel embedding increases capacity compared to normal difference expansion. There is also a significant improvement in the quality (PSNR) of the watermarked image, especially at moderate embedding capacities.
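The base operation that the scheme enhances, classic difference expansion on a pixel pair, can be sketched as follows. This shows only the single-pair embed/extract cycle, not the authors' multi-layer enhancement or block-size selection.

```python
def de_embed(a, b, w):
    """Hide one bit w in the pixel pair (a, b) by difference expansion."""
    h, l = a - b, (a + b) // 2   # difference and integer mean of the pair
    h2 = 2 * h + w               # expand the difference, append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(a2, b2):
    """Recover the bit and restore the original pair losslessly."""
    h2, l = a2 - b2, (a2 + b2) // 2
    w, h = h2 & 1, h2 >> 1       # peel the bit off, shrink the difference
    return (l + (h + 1) // 2, l - h // 2), w

a2, b2 = de_embed(100, 96, 1)    # embed bit 1 into the pair (100, 96)
pair, w = de_extract(a2, b2)
assert pair == (100, 96) and w == 1   # host pixels restored exactly
```

Reversibility is exact because the integer mean of the pair is unchanged by embedding; in practice, pairs whose expanded difference would overflow the pixel range must be skipped, a detail omitted here.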

Shahidan M. Abdullah, Azizah A. Manaf
Modeling and Analysis of Reconfigurable Systems Using Flexible Nets

Reconfigurable systems, whose structure changes dynamically at runtime, are widely used in mobile code systems and mobile networks. Developing these systems and ensuring their correctness can require formal tools. Petri nets have been widely used for modeling and verifying such systems, and high-level Petri nets have been proposed to model some of their aspects. In this paper, we propose a new formalism, "Flexible Nets," with a high degree of dynamicity, which allows easy and intuitive modeling of reconfigurable systems. The expressive power of the formalism stems from the fact that all constituents of the net's structure can be added or deleted during the execution of the net. This paper presents the formal definition of the formalism, gives an example from mobile code systems, and discusses some analysis issues of the formalism.

Laid Kahloul, Allaoua Chaoui, Karim Djouani
Using Privilege Chain for Access Control and Trustiness of Resources in Cloud Computing

Cloud computing is emerging as a virtual model in support of "everything-as-a-service" (XaaS). It involves numerous providers, such as feeders, owners, and creators, who are rarely the same actor, and multiple platforms, possibly with different security control mechanisms. Consequently, cloud resources cannot be securely managed by traditional access control models. In this paper, we propose a new security technique that enables multifactor access control and copes with various deployment models in which a user's network and system sessions may vary. Using the metadata of resources and access policies, the technique builds privilege chains. The contribution of this paper includes a mechanism of privilege chains that can be used to verify the trustiness of cloud resources and to protect them from unauthorized access.

Jong P. Yoon, Z. Chen

Social Networks

Modeling of Trust to Provide Users Assisted Secure Actions in Online Communities

Nowadays, the main problem is not the lack of high-quality resources but their retrieval, organization, and maintenance, which makes it challenging for users to reach the best decisions. This paper presents a trust model suitable for use in various online communities. The model provides each user in the community with a personalized, secure context in which to manage significant resources. It thus assures every user that his or her preferences matter, and on this basis the system delivers resources efficiently.

Lenuta Alboaie, Mircea-F. Vaida
A Collaborative Social Decision Model for Digital Content Credibility Improvement

This paper proposes the concept of harvesting social network intelligence as a high-quality information source for decision making. A collaborative social decision model (CSDM) is proposed for improving digital content credibility in leisure e-services. The study investigates people's original experiences and the feedback arising from their social network relationships. Personal perceptions and the latest feedback from the social network are utilized to discover high-quality digital content. This unique information can help users assess digital content quality and bring alternative information sources to a leisure e-service system. In contrast to traditional leisure service applications, the collaborative social decision model can improve digital content credibility and facilitate leisure e-service innovation.

Yuan-Chu Hwang
Improving Similarity-Based Methods for Information Propagation on Social Networks

Social networks are a means to propagate information deeply and quickly. However, so as not to lose the benefit of propagation, the amount of information received by users should not be excessive. A typical approach selects the set of contacts to which information is propagated on the basis of some suitable similarity notion, thus trying to reach only potentially interested nodes. The main limit of this approach is that similarity is not entirely suited to propagation, since it is typically non-transitive, whereas propagation is an inherently transitive mechanism. In this paper we show how to improve similarity-based methods by recovering some form of transitive behaviour through a suitable notion called expectation. The non-trivial combination of similarity and expectation in a clean mathematical framework provides the user with a flexible tool able to maximize the effectiveness of information propagation on social networks.

Francesco Buccafurri, Gianluca Lax
Approaches to Privacy Protection in Location-Based Services

Location-based services (LBS) introduce serious privacy threats, which need to be addressed before users and service providers can fully benefit from these promising services. We addressed this challenge by reviewing and analysing the privacy protection solutions proposed in the literature. Based on the analysis, we identified three general approaches to implementing privacy protection in LBS: privacy parameters, data disclosure control algorithms, and information architectures. Implementations of specific privacy protection methods based on these approaches still face many unsolved challenges, such as the data precision requirements of different service types and technical issues in information architectures. In addition, we argue that a user-centric approach should be emphasised in future privacy protection method development, in order to foster users' trust in the services.

Anna Rohunen, Jouni Markkula
Social Media as Means for Company Communication and Service Design

Service development in companies can take a new form when social media is used as a communication interface. This communication can occur between a company and its customers, but the company's internal communication via social media services can also prove beneficial. In this paper, we review and analyze the use of social media as a means of company communication in general and, specifically, in the case of customer involvement in service development. As a result, we present guidelines for using social media tools in companies to improve the customer-centric service development process.

Elina Annanperä, Jouni Markkula
A Problem-Centered Collaborative Tutoring System for Teachers Lifelong Learning: Knowledge Sharing to Solve Practical Professional Problems

Our work aims at developing a Web platform that allows teachers to share know-how and practices and to capitalize on them for lifelong learning. In this context, we propose a problem-centered tool based on the IBIS method, initially developed for capturing design rationale. The tool helps teachers solve practical professional problems through the collaborative co-construction of solutions. A general model is given, using among other things the three elements of the IBIS method (problem, position/solution, argument) and allowing multiple solutions for one problem. The problem description and solution construction steps are explained, and the way teachers are connected based on their profiles is detailed. We also incorporate a system of opinions and feedback about the usefulness of a solution. A use case illustrates how the model is implemented.

Thierry Condamines
Bridging the Gap between Web 2.0 Technologies and Social Computing Principles

This research presents a brief review of the different definitions of Web 2.0 and introduces the most important Web 2.0 technologies that underlie the evolution of the Web. We map these Web 2.0 technologies to the Social Computing Principles and describe the different relations and patterns that occur. We argue that insight into the relations between Web 2.0 technologies and principles will enable the creation of more successful services and foster a better understanding of Web 2.0 and its social aspects.

Giorgos Kormaris, Marco Spruit

Ontology

Using Similarity Values for Ontology Matching in the Grid

This work targets the issue of ontology matching in a grid environment. Similarity values are computed to qualify the similarity between ontologies, concepts, and properties. Based on these similarity values, a matching is proposed that extends one priority ontology. The ontology matching is carried out in a grid environment, with the aim of managing the matching of large data sets concurrently.

Axel Tenschert
Rapid Creation and Deployment of Communities of Interest Using the CMap Ontology Editor and the KAoS Policy Services Framework

Sharing information across diverse teams is increasingly important in military operations, intelligence analysis, emergency response, and multi-institutional scientific studies. Communities of Interest (COIs) are an approach by which such information sharing can be realized, but their widespread adoption has been hampered by a lack of adequate methodologies and software tools to support the COI lifecycle. After describing this lifecycle and its associated dataflows, this article defines requirements for tools to support the COI lifecycle and presents a prototype implementation. An important result of our research was to show how the consistent use of ontologies in COI support tools can add significant flexibility, efficiency, and representational richness to the process. Our COI-Tool prototype supports the major elements of the COI lifecycle through the graphical capture and sharing of COI configurations represented in OWL with the IHMC CMap Ontology Editor (COE), facilitation of COI implementation through integration with the AFRL Information Management System (IMS) and IHMC's KAoS Policy Services framework, and the reuse of COI models. To evaluate our tools and methodology, an example METOC (weather) community of interest was developed with them.

Andrzej Uszok, Jeffrey M. Bradshaw, Tom Eskridge, James Hanna
Incorporating Semantics into an Intelligent Clothes Search System Using Ontology

This research aims to develop a novel ontology structure for incorporating semantics into an intelligent clothes search system. The system helps users choose attire that fits the desired impression for a specific occasion, using only what is already in their closets. The ontology structure proposed in the paper takes a single piece of garment as the most specific instance. Pictures of the user's garment items are first fed into the system; an internal image processing module then extracts attributes from each garment through a series of image processing techniques, such as color distribution analysis. Finally, the derived information is attached to the ontology. At runtime, the system follows requirements such as a mood keyword, occasion type, and weather status to search for matching outfits with the help of semantic search over the ontology.

Ching-I Cheng, Damon Shing-Min Liu, Li-Ting Chen
SPPODL: Semantic Peer Profile Based on Ontology and Description Logic

The main purpose of this work is to propose a semantic and formal peer profile called SPPODL. We present an approach that uses a formal ontology based on description logic and the OWL language to create this rich profile. The creation of the ontology follows a clear and complete process that guarantees the quality of the final result. SPPODL can be helpful in many domains, such as social networks, interoperability between heterogeneous peer platforms, and data integration in P2P environments.

Younes Djaghloul, Zizette Boufaida
Ontology Based Tracking and Propagation of Provenance Metadata

Tracking the provenance of application data is of key importance in networked environments due to the abundance of heterogeneous and controllable resources. We focus on ontologies as a means of knowledge representation and present a novel approach to representing provenance metadata in knowledge bases, relying on an OWL 2 design pattern. We also outline an abstract method for propagating provenance metadata during the reasoning process.

Miroslav Vacura, Vojtěch Svátek

Real Time Biometric Solutions for Networked Society

A Real-Time In-Air Signature Biometric Technique Using a Mobile Device Embedding an Accelerometer

In this article, an in-air signature biometric technique is proposed. Users authenticate themselves by performing a 3-D gesture of their own invention while holding a mobile device embedding an accelerometer. All the operations involved in the process are carried out on the mobile device, so no additional devices or connections are needed. In the study, 34 different users invented and repeated a 3-D gesture according to the proposed biometric technique, and three forgers attempted to falsify each of the original gestures. From these in-air signatures, an Equal Error Rate of 2.5% was obtained by fusing the gesture acceleration information of each axis (X-Y-Z) at the decision level. The authentication process takes less than two seconds, measured directly on a mobile device, so it can be considered real-time.

J. Guerra Casanova, C. Sánchez Ávila, A. de Santos Sierra, G. Bailador del Pozo, V. Jara Vera
On-Demand Biometric Authentication of Computer Users Using Brain Waves

From the viewpoint of user management, on-demand biometric authentication is effective for achieving high security. Such authentication requires unconscious biometrics, and we have studied the use of brain waves (the electroencephalogram, EEG) for this purpose. In this paper, we examine verification performance based on the EEG during a mental task. In particular, assuming the verification of computer users, we adopt a mental task in which users think about the contents of documents. Experimental results with 20 subjects confirm that EEG-based verification is applicable even while users are performing the mental task.

Isao Nakanishi, Chisei Miyamoto
Encrypting Fingerprint Minutiae Templates by Random Quantization

An encryption method is proposed that uses random quantization to generate diversified and renewable templates from fingerprint minutiae. The method first achieves absolute pre-alignment over local minutiae quadruplets (called minutiae vicinities) in the original template, resulting in a fixed-length feature vector for each vicinity; it then quantizes each feature vector into binary bits by random quantization; and it finally post-processes the resulting binary vector in a length-tunable way to obtain a protected minutia. Experiments on the fingerprint database FVC2002DB2_A demonstrate the desirable biometric performance achieved by the proposed method.
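The general idea of quantizing a fixed-length feature vector into binary bits under a user-specific key can be sketched as follows. The abstract does not specify the paper's quantizer, so the key-seeded random projections below are an assumption used purely for illustration; the feature values are made up.

```python
import random

def random_quantize(features, key, n_bits=32):
    """Binarize a fixed-length feature vector with key-seeded random
    projections - a hypothetical stand-in for the paper's quantizer."""
    rng = random.Random(key)                 # the key makes templates renewable
    bits = []
    for _ in range(n_bits):
        # random linear projection of the feature vector ...
        proj = sum(rng.uniform(-1, 1) * f for f in features)
        # ... thresholded at zero yields one protected bit
        bits.append(1 if proj >= 0 else 0)
    return bits

vicinity_vector = [0.3, -1.2, 0.8, 0.1]      # made-up vicinity features
template = random_quantize(vicinity_vector, key=42)
# Re-issuing with a new key (e.g. key=43) yields a fresh, unlinkable
# template from the same fingerprint, which is what makes it renewable.
assert template == random_quantize(vicinity_vector, key=42)  # deterministic
```

The renewability property comes from the key: compromising one template only requires issuing a new key, not a new finger.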

Bian Yang, Davrondzhon Gafurov, Christoph Busch, Patrick Bours

Web Applications

Method for Countering Social Bookmarking Pollution Using User Similarities

In this paper, we propose a method for countering social bookmark pollution. First, we investigate the characteristics of social bookmark pollution and show that high similarities in the user bookmarks result in social bookmark pollution. Then, we discuss a bookmark number reduction method based on user similarities between the user bookmarks. We evaluate the proposed method by applying it to Hatena Bookmark. It is found that the proposed method only slightly reduces the bookmark number of the Web pages that are not affected by social bookmark pollution but greatly reduces the bookmark number of those Web pages that are affected by social bookmark pollution.
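The core idea, discounting bookmarks from clusters of near-identical users, can be sketched as follows. The abstract does not give the paper's similarity measure or reduction rule, so Jaccard similarity with a simple clone-discounting threshold is an assumption for illustration.

```python
def jaccard(a, b):
    """Similarity between two users' bookmark sets."""
    return len(a & b) / len(a | b)

def effective_bookmark_count(page, bookmarks, threshold=0.7):
    """Count bookmarkers of `page`, discounting users whose bookmark sets
    are nearly identical (a simplified stand-in for the paper's method)."""
    users = [u for u, s in bookmarks.items() if page in s]
    kept = []
    for u in users:
        # drop a user whose bookmarks almost duplicate an already-kept user's
        if all(jaccard(bookmarks[u], bookmarks[v]) < threshold for v in kept):
            kept.append(u)
    return len(kept)

bookmarks = {
    "u1": {"p1", "p2", "p3"},
    "u2": {"p1", "p2", "p3"},    # clone of u1 -> likely a pollution account
    "u3": {"p1", "p9"},
}
assert effective_bookmark_count("p1", bookmarks) == 2  # u2 is discounted
```

A polluted page bookmarked by many mutually similar accounts sees its count drop sharply, while a page bookmarked by diverse users is barely affected, which is the behavior the abstract reports.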

Takahiro Hatanaka, Hiroyuki Hisamatsu
A Human Readable Platform Independent Domain Specific Language for WSDL

The basic building blocks of SOA systems are web services. WSDL, the standard language for defining web services, is far too complex and redundant to be handled efficiently by humans. Existing solutions either use graphical representations (UML, etc.), which are again inefficient in large-scale projects, or define web services in the implementation's native language, a bottom-up approach that risks interface stability. Both lack support for concepts such as conditions and access rights. The domain-specific language introduced in this paper uses a Java- and C#-like syntax for describing web service interfaces. It has the same descriptive power as WSDL while maintaining simplicity and readability. Examples show how to use the language and how it can be compiled into WSDL.

Balazs Simon, Balazs Goldschmidt
A Human Readable Platform Independent Domain Specific Language for BPEL

The basic building blocks of SOA systems are web services. High-level service orchestration is usually achieved by defining processes in BPEL. The available development environments, however, usually offer visual tools for handling BPEL, and these are not satisfactory when efficiency, repeatability, and manageability are necessary. The domain-specific language introduced in this paper uses a Java- and C#-like syntax for describing web service interfaces and BPEL processes. It has the same descriptive power as WSDL and BPEL while maintaining simplicity and readability. Examples show how to use the language and how it can be compiled into BPEL process descriptions.

Balazs Simon, Balazs Goldschmidt, Karoly Kondorosi
Impact of the Multimedia Traffic Sources in a Network Node Using FIFO Scheduler

The recent substantial growth of video and voice traffic on the Internet may have a great impact on the design and dimensioning of network nodes originally intended to handle only data traffic. The objective of this paper is to investigate how these traffic types affect a present-day network node that uses a FIFO scheduler. For this purpose, a simulation platform consisting of several different kinds of sources, a buffer, and a FIFO scheduler was developed in C++. Many different traffic composition scenarios were simulated to study the network node's behavior with respect to queue and system times, which are important parameters for QoS definition. The results show that video traffic can very strongly impact the entire network infrastructure, including causing system-wide congestion. They also show that voice traffic has less impact, but in mixed traffic operation the system cannot guarantee the QoS of voice traffic, which requires near-real-time treatment. The main conclusion of this paper is that admission control mechanisms and QoS provisioning need to be implemented urgently to cope with the fast growth of video and voice traffic.
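A single-server FIFO node fed by mixed traffic sources, as studied in the paper, can be sketched in a few lines. This is a simplified illustration, not the authors' C++ platform: the Poisson sources, fixed service time, and rates below are assumptions, and the output is the mean system time (waiting plus service) per traffic class.

```python
import random

def simulate_fifo(sources, service_time, horizon=10_000):
    """Single-server FIFO queue fed by several Poisson sources; returns
    the mean system time per traffic class."""
    rng = random.Random(1)                  # fixed seed for reproducibility
    arrivals = []                           # (arrival time, class) events
    for name, rate in sources.items():
        t = 0.0
        while True:
            t += rng.expovariate(rate)      # Poisson arrival process
            if t > horizon:
                break
            arrivals.append((t, name))
    arrivals.sort()                         # FIFO: serve in arrival order
    free_at, totals, counts = 0.0, {}, {}
    for t, name in arrivals:
        start = max(t, free_at)             # wait while the server is busy
        free_at = start + service_time
        totals[name] = totals.get(name, 0.0) + (free_at - t)
        counts[name] = counts.get(name, 0) + 1
    return {n: totals[n] / counts[n] for n in totals}

# Hypothetical rates: light voice traffic mixed with heavier video traffic.
times = simulate_fifo({"voice": 0.2, "video": 0.5}, service_time=1.0)
assert times["voice"] >= 1.0               # system time >= service time
```

Because the scheduler is FIFO, both classes share one queue, so heavy video traffic inflates the delay of voice packets equally — the mechanism behind the paper's conclusion that FIFO alone cannot protect delay-sensitive voice traffic.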

Tatiana Annoni Pazeto, Renato Moraes Silva, Shusaburo Motoyama
Assessing the LCC Websites Quality

The airline industry has witnessed additional turbulence as a result of the entry of airlines adopting new business models, referred to as Low Cost Carriers (LCC), no-frills airlines, or budget carriers. Whereas some budget carriers, such as Southwest Airlines, have been competing in the US market for over 30 years, they have been flourishing in Europe only for the last 10 years and are at an early development stage in the Middle East, Asia Pacific, and the rest of the world. This has accelerated customer acceptance of the Internet as a suitable medium for booking airline travel. Website quality is therefore now considered a critical factor in attracting customers' attention and building loyalty. This paper presents a modified assessment model to assist Middle Eastern LCC companies in evaluating their websites by examining them against four main categories comprising 36 evaluation factors.

Saleh Alwahaishi, Václav Snášel
Expediency Heuristic in University Conference Webpage

In this paper, we present a webpage developed on the basis of heuristic elements, using the requirements of the International Conference on Software Engineering and Computer Systems as a reference. Heuristic and web-based methods were applied, combining the PHP language with MySQL as the database system. The objective is to simplify conference management and deliver a quality conference webpage for the Faculty of Computer Systems and Software Engineering, University Malaysia Pahang, Malaysia. The system also minimizes human contact and thus provides fast, efficient, transparent, and effective service to authors, administrators, and reviewers. It facilitates users by computerizing all forms related to paper submission, paper review, paper download, and reviewer assignment through a heuristic online university conference webpage.

Roslina Mohd Sidek, Noraziah Ahmad, Mohamad Fadel Jamil Klaib, Mohd Helmy Abd Wahab
Backmatter
Metadata
Title
Networked Digital Technologies
Editors
Filip Zavoral
Jakub Yaghob
Pit Pichappan
Eyas El-Qawasmeh
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-14292-5
Print ISBN
978-3-642-14291-8
DOI
https://doi.org/10.1007/978-3-642-14292-5