2013 | Book

The 9th International Conference on Computing and Information Technology (IC2IT2013)

9th-10th May 2013, King Mongkut's University of Technology North Bangkok

Editors: Phayung Meesad, Herwig Unger, Sirapat Boonkrong

Publisher: Springer Berlin Heidelberg

Book Series: Advances in Intelligent Systems and Computing

About this book

This volume contains the papers of the 9th International Conference on Computing and Information Technology (IC2IT 2013) held at King Mongkut's University of Technology North Bangkok (KMUTNB), Bangkok, Thailand, on May 9th-10th, 2013. Traditionally, the conference is organized in conjunction with the National Conference on Computing and Information Technology, one of the leading Thai national events in the area of Computer Science and Engineering. The conference, as well as this volume, is structured into three main tracks: Data Networks/Communication, Data Mining/Machine Learning, and Human Interfaces/Image Processing.

Table of Contents

Frontmatter
Bipolarity in Judgments and Assessments: Towards More Realistic and Implementable Human Centric Systems

We are concerned with the conceptualization, analysis and design of human centric systems. Such systems are meant, roughly speaking, as those in which a human being is a relevant (if not principal) element of a computer-based system, and, for obvious reasons, there exists an inherent communication and articulation gap between the human and the computer, implied first of all by the different languages employed, i.e. strings of bits versus natural language. Some human-computer interface (HCI) should therefore be employed to bridge that gap. Its very essence and purpose boil down to the use of the most human-consistent tools, techniques and solutions, assuming that it is easier to change the machine than the adult human being. Obviously, there is a multitude of possible options in this respect, and in our talk we consider one that is related to a proper representation and processing of human judgments and assessments, which are crucial in any human-computer interaction. In this context we consider the known fact that when a human being is requested to provide a judgment or assessment concerning an option or course of action, he or she very often tends to provide it in a bipolar version. This is meant in the talk in the sense of providing testimonies concerning separately positive and negative aspects (pros and cons), mandatory and optional conditions to be fulfilled, etc. To start with, we review two types of scales employed to quantify such bipolar testimonies. The first is the bipolar univariate scale, in which there is a neutral (0) point, a negative part [-1, 0] related to a negative testimony, and a positive part [0, 1] related to a positive testimony. The second is the unipolar bivariate scale, in which two separate scales are employed, both with values in [0, 1], expressing the positive and negative testimonies separately. The main problem is how to aggregate the positive and negative testimonies related to the option in question. We present two basic approaches: one that is decision theoretic and is based on some special multicriteria decision making scalarizing functions, and one that is logical and is based on multivalued logic with properly chosen definitions of the logical operations. We present applications of these approaches to database querying and information retrieval, and to a multicriteria choice of design options of computer systems. We advocate such a bipolar setting and outline some possible future directions.

Janusz Kacprzyk
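As a rough illustration of the two aggregation approaches the abstract mentions, here is a minimal Python sketch on the unipolar bivariate scale, where each option carries a positive degree p and a negative degree n, both in [0, 1]. The convex-combination scalarizing function and the min/complement logical connectives are generic textbook choices, not Kacprzyk's specific operators.

```python
# Hedged sketch: aggregate bipolar testimonies (p = positive, n = negative,
# both in [0, 1]) in two generic ways; the talk's exact operators may differ.

def aggregate_decision_theoretic(p: float, n: float, alpha: float = 0.5) -> float:
    """Scalarizing-function style: convex combination of the positive
    testimony and the complement of the negative one."""
    return alpha * p + (1.0 - alpha) * (1.0 - n)

def aggregate_logical(p: float, n: float) -> float:
    """Multivalued-logic style: 'p and not n' under the min/complement
    (Zadeh) interpretation of the connectives."""
    return min(p, 1.0 - n)

options = {"design A": (0.8, 0.3), "design B": (0.6, 0.05)}  # (p, n) pairs
for name, (p, n) in options.items():
    print(name, aggregate_decision_theoretic(p, n), aggregate_logical(p, n))
```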
Critical Issues and Information Security and Managing Risk

Threat vectors against information systems are constantly changing and increasing in both diversity and frequency. This talk will review the latest threats to global information assets and mechanisms to assess risk exposure and mitigation approaches. Using examples from academia, industry, personal experience, and audience members, a spotlight will be cast on the major vulnerabilities that pervade our daily lives.

Appropriate access to most information technology resources inherently requires some risk. Assessing, eliminating, mitigating, and accepting risk then become functions that are necessarily performed by both individuals and organizations. Just as the threats themselves are misunderstood, so too are each of these four risk management elements often mismanaged. We’ll explore structures to address each element, common theoretical and practical errors in application, and how these gaps might be closed by a different approach or through future research.

Finally, we’ll review how the very actions that expose individuals and companies to significant risk may be exploited to thwart and prosecute criminals, by looking at recent approaches in digital forensics.

Mark Weiser
Visualising the Data: Now I Understand

Visualisation of information is increasingly used in situations where complex events are presented to people who often have no understanding of what has happened or could have happened, or of the procedures, methodologies or science involved. Computer Graphics (CG) can visually present scenarios based on scientific methodologies, as well as depict the perception of a witness to show what may have occurred. More importantly, CG can illustrate “what if...” questions and explore the inconsistencies and discrepancies within evidence and expert witness testimony. It therefore represents an important development in forensic graphics, unparalleled in its ability to assimilate and analyse data. However, it is very important that when we use “science” to determine the validity of evidence or information, it is done in a manner that is acceptable to the scientific community.

Ken Fowle
Improved Computational Intelligence through High Performance Distributed Computing

Modern computational and business intelligence techniques are increasingly used for product design and complex decision making. However, as the problems they solve become more difficult, the need for high performance computing becomes increasingly important. Modern Cloud and Grid platforms provide an ideal base for supporting this work, but typically lack software support. Over the past 20 years we have developed a computational framework, called Nimrod, that allows users to pose complex questions underpinned by simulation technologies. Nimrod allows users to perform what-if analysis across an enormous number of options. It also supports experimental design techniques and provides automatic optimisation algorithms (e.g. genetic algorithms) that search through design options. Nimrod has been used globally across a wide range of applications in science, environmental modelling, engineering and business.

In this talk I will describe the Nimrod framework, and show examples of how it has supported a range of scientific and engineering questions. I will show how Clouds and Grids support these case studies, and outline our continued research in the area.

David Abramson
Lying in Group-Communication in El-Farol-Games: Patterns of Successful Manipulation

The El-Farol-Bar-Problem is a well-established tool for behavior analysis. Recent studies have focused on group communication in minority games of this kind. We introduce the possibility of lying into the behavior of agents in communicating groups. We found a successful strategy for lying if the group is composed of specific characters.

Frank Großgasteiger, Coskun Akinalp
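For readers unfamiliar with the underlying game, here is a minimal Python sketch of the classic El Farol minority game the paper builds on. The group communication and lying mechanisms studied in the paper are not modelled, and the prediction rules are generic illustrative examples.

```python
import random

# Hedged sketch of the plain El Farol bar problem: each agent predicts this
# week's attendance with its rule and goes only if the bar is expected to be
# below capacity. Agent communication/lying from the paper is omitted.

N_AGENTS, CAPACITY, WEEKS = 100, 60, 20
history = [44, 78, 56, 15, 23, 67, 84, 34]      # seed attendance history

rules = [
    lambda h: h[-1],                            # same as last week
    lambda h: sum(h[-4:]) / 4,                  # average of last 4 weeks
    lambda h: 2 * h[-1] - h[-2],                # linear trend extrapolation
]
agents = [random.choice(rules) for _ in range(N_AGENTS)]

for week in range(WEEKS):
    attendance = sum(1 for predict in agents if predict(history) < CAPACITY)
    history.append(attendance)
    print(f"week {week}: {attendance} attended")
```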
Identifying Limbic Characteristics on Twitter

An adaptive, intelligent system requires certain knowledge of its users. Patterns of behavior, preferences, and motives for decision making must be readily identifiable for the computer to react to. So far, typifying users has required either a huge collection of empirical data via questionnaires or special hardware to track the user’s behavior. We succeeded in categorizing users by analyzing only a small amount of the data trace a user leaves while using the online social network (OSN) Twitter. Our approach can easily be adapted to other platforms. Thus, human behavior is made understandable for computer systems, which will help to improve the engineering of human-computer interactions.

Christine Klotz, Coskun Akinalp
Mobile Agent Administrator for Computers over Network (MAACN): Using the Concept of Keyboard Buffer

This paper proposes the idea of using the concept of the keyboard buffer to perform network administrator and computer system administrator activities by passing text commands to computers to perform system functions, without the need to share or transfer script files, or for the administrators to personally visit the computer systems over the network. The paper also describes a system that puts the proposed idea into practice using Java Aglets, a mobile agent technology, as a platform. The proposed idea is an extension of the system “The Agent Admin: Administration of Remote Nodes on Huge Networks Using Mobile Agent Technology”, which performs a few administration activities, such as retrieving system information, listing installed software, listing active applications and installing software, by sharing files over the network. The performance of “The Agent Admin” was tested, analyzed and enhanced with the Commander Agent.

Rajesh Dontham, Srinivas R. Mangalwede, Subodh Ingleshwar, Manoj A. Patil, Arun W. Kumar
Impact of Multi-services over Service Provider’s Local Network Measured by Passive and Active Measurements Techniques

Measuring a service provider’s local network performance normally requires specific equipment and expert systems, which are too expensive for the service provider to install all over the country. Active measurement systems use only active monitoring techniques to measure network quality, while expert systems use only passive monitoring techniques to analyze network problems. To find the problems, a provider has to install both systems, which is complex and not cost effective. We introduce a new measurement system, built from a low-cost embedded system, that combines both passive and active techniques and gives the service provider enough capability to monitor and locate problems occurring in the company’s local network. Our system was deployed to a service provider’s local network as a prototype.

Titinan Bhumrawi, Chayakorn Netramai, Kamol Kaemarungsi, Kamol Limtanyakul
Performance Evaluation of LEACH on Cluster Head Selection Techniques in Wireless Sensor Networks

Advances in wireless sensor technology have allowed users and administrators to simply and accurately monitor the characteristics and behavior of a remote environment, including automatic event triggering. Because of the limited energy resources of sensor nodes, any such events should respect this constraint, and so dividing the nodes to perform particular tasks, i.e. network clustering, is necessary to prolong the network lifetime. In this paper, we evaluate the performance of a variety of LEACH optimization techniques, focusing on the clustering criteria used to select a proper set of cluster heads. Finally, to improve the selection probability of the nodes, we also propose the use of a moving energy window average in the energy factor computation, resulting in an improvement of the system’s energy usage.

Chakchai So-In, Kanokporn Udompongsuk, Comdet Phudphut, Kanokmon Rujirakul, Chatchai Khunboa
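To make the clustering idea concrete, here is a minimal Python sketch of LEACH-style cluster head election. The threshold formula T(n) = P / (1 - P (r mod 1/P)) is the standard LEACH rule (the usual exclusion of recent cluster heads is omitted for brevity); weighting it by a moving-window average of residual energy is an assumption made here to illustrate the abstract's energy factor, not the paper's exact computation.

```python
import random

P = 0.05  # desired fraction of cluster heads per round (assumption)

def threshold(round_no: int) -> float:
    # classic LEACH election threshold for the current round
    return P / (1.0 - P * (round_no % int(1.0 / P)))

def elect_cluster_heads(nodes, round_no, window=5):
    heads = []
    for node in nodes:
        # moving average over the node's last `window` energy readings
        recent = node["energy_history"][-window:]
        energy_factor = (sum(recent) / len(recent)) / node["initial_energy"]
        # bias election toward nodes with more residual energy
        if random.random() < threshold(round_no) * energy_factor:
            heads.append(node["id"])
    return heads

nodes = [{"id": i, "initial_energy": 2.0,
          "energy_history": [2.0 - 0.01 * r for r in range(10)]}
         for i in range(100)]
print(elect_cluster_heads(nodes, round_no=3))
```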
Outbound Call Route Selection on IP-PBX Using Unstructured Supplementary Service Data

It has been found that selecting the outbound call route on an IP-PBX phone system using either the Fixed Channel technique or Pattern Matching cannot choose the outbound route according to the mobile phone service provider, causing high traffic between GSM network operators. This research therefore proposes the use of the Unstructured Supplementary Service Data (USSD) technique. The experiment was carried out using an Asterisk server as the IP-PBX system, connected to a Sim3000CZ microcontroller board to select the route. The results show that this method can select the outbound call route correctly and efficiently, as well as reduce the traffic between the networks. Furthermore, it can greatly reduce the overhead when calling an external network. Therefore, we claim that this is a more suitable technique for call route selection.

Kittipong Suwannaraj, Sirapat Boonkrong
Using Routing Table Flag to Improve Performance of AODV Routing Protocol for VANETs Environment

With the growth of wireless topologies during recent decades, routing protocols in ad hoc networks have come into the limelight. Routing is the technique of finding the best route from the source to the destination. VANETs comprise a special subclass of Mobile Ad Hoc Networks (MANETs). One of the hardships with routing protocols in VANETs is related to the HELLO messages which establish a route. Among the most common solutions is one that utilizes a Routing Table Flag (RTF) for checking the route status; by checking the RTF, each node’s condition can be determined. Using this mode, acceptable results regarding packet loss, packet delivery ratio, throughput and number of received packets have been achieved. Experiments using NS-2 to measure the packet loss, throughput, number of received packets and packet delivery ratio for three different scenarios are presented.

Hassan Keshavarz, Rafidah Md. Noor, Ehsan Mostajeran
Wireless Sensor Network Planning for Fingerprint Based Indoor Localization Using ZigBee: Empirical Study

The technology defined by the ZigBee standard is intended for a wide range of ad hoc wireless sensor network (WSN) applications. Among these are location-aware services, which can be applied to both indoor and outdoor environments for locating expensive equipment or tracking moving objects. While many localization algorithms exist, the fingerprint technique, which determines the target location from an off-line empirical database, seems to be the most practical indoor solution using off-the-shelf products. In this work, we present a wireless sensor network planning solution suitable for indoor localization using the fingerprint technique. Based on our extensive feasibility studies, we derived several network planning solutions which answer some of the key wireless sensor network design questions, such as (1) where to put the router or anchor nodes, (2) how many routers to use in designing a location-aware WSN, (3) how often the end-device node should transmit data to the server, (4) what a suitable packet size is, and (5) whether mobility has any impact on network performance.

Nawaporn Wisitpongphan
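The fingerprint technique the abstract relies on is easy to sketch. Below is a minimal Python example with hypothetical RSSI values: an offline database maps known positions to signal-strength vectors from anchor routers, and the online phase returns the average of the k closest stored positions. The data, router count and k are illustrative assumptions.

```python
import numpy as np

fingerprints = {            # position (x, y) -> RSSI from 3 routers, in dBm
    (0.0, 0.0): [-40, -65, -70],
    (5.0, 0.0): [-55, -50, -72],
    (0.0, 5.0): [-60, -70, -45],
    (5.0, 5.0): [-68, -58, -52],
}

def locate(rssi, k=2):
    """k-nearest-neighbour fingerprint matching in signal space."""
    positions = list(fingerprints)
    dists = [np.linalg.norm(np.array(rssi) - np.array(fingerprints[p]))
             for p in positions]
    nearest = np.argsort(dists)[:k]
    return tuple(np.mean([positions[i] for i in nearest], axis=0))

print(locate([-52, -53, -69]))   # -> interpolated (x, y) estimate
```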
CAPWAP Protocol and Context Transfer to Support Seamless Handover

Nowadays, real-time applications have become immensely popular and are available over wireless devices. The emerging IEEE 802.11 Wireless Local Area Network (WLAN) technology has brought users trendy gadgets such as smart phones, tablets and laptops. These applications require a fast and efficient handover that guarantees security and Quality of Service (QoS). However, users face disruption, which can cause high transition delay while running multimedia services. This paper presents an overview of the Control and Provisioning of Wireless Access Points (CAPWAP) protocol, a solution to overcome the security problem during handover within a large network. Furthermore, we conduct a survey in order to find an algorithm that can support seamless handover in a centralized architecture. The paper ends with further work, suggesting a design of the CAPWAP protocol using predictive context transfer in a centralized architecture.

Siti Norhaizum M. Hasnan, Media A. Ayu, Teddy Mantoro, M. Hasbullah Mazlan, M. Abobakr, A. Balfaqih, Shariq Haseeb
Intelligent Cloud Service Selection Using Agents

One of the most recent developments within computer science is cloud computing, which provides services (power, storage, platform, infrastructure, etc.). Many clouds provide services based on cost, efficiency, performance, and quality, and stakeholders have to compromise on cost at some times and on performance or quality at others. To provide the best quality-based services to stakeholders and to impart intelligence, agents can play important roles, especially by learning the structure of the clouds. Agents can be trained to observe differences and behave intelligently for service selection. To rank different clouds, we propose a new technique, the performance factor, for the provision of services based on intelligence. The research objective is to enable cloud users to select cloud services according to their own requirements. The technique assigns a performance factor to each service provided by a cloud and ranks the cloud as a whole. By doing so, the quality of the services can be greatly improved. We validate our approach with a case study, which emphasizes the need to rank cloud services in widely spread and complex domains.

Imran Mujaddid Rabbani, Aslam Muhammad, Martinez Enriquez A.M.
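As a toy illustration of ranking clouds by a scalar performance factor, here is a minimal Python sketch. The attribute set, weights, scores and the linear scoring rule are all assumptions for illustration; the paper's agent-based learning of these factors is not reproduced.

```python
# Hedged sketch: rank hypothetical clouds by a weighted performance factor.

weights = {"cost": 0.3, "efficiency": 0.2, "performance": 0.3, "quality": 0.2}

clouds = {   # attribute scores normalised to [0, 1]; higher is better
    "cloud A": {"cost": 0.9, "efficiency": 0.6, "performance": 0.7, "quality": 0.8},
    "cloud B": {"cost": 0.5, "efficiency": 0.9, "performance": 0.9, "quality": 0.7},
}

def performance_factor(attrs):
    return sum(weights[k] * attrs[k] for k in weights)

ranking = sorted(clouds, key=lambda c: performance_factor(clouds[c]), reverse=True)
print(ranking)   # best-ranked service first
```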
A New Method of Privacy Preserving Computation over 2-Part Fully Distributed Data

In this paper, we propose a new protocol for privacy preserving frequency computation over 2-part fully distributed data (2PFD). This protocol is more practical than previous protocols. More specifically, we achieve a protocol that works in situations where the number of users varies and exceeds a given threshold.

The Dung Luong, Dang Hung Tran
Enhancing the Efficiency of Dimensionality Reduction Using a Combined Linear SVM Weight with ReliefF Feature Selection Method

The purpose of this research is to propose a feature selection technique for improving the efficiency of dimensionality reduction. The proposed technique combines Linear SVM Weight with ReliefF, with SVM used as the classifier. The Leukemia and DLBCL datasets from the UCI Machine Learning Repository were used for our experiments. We found that the combined Linear SVM Weight with ReliefF feature selection technique provided 100 percent accuracy for the model, with a significant reduction from 5,147 to 20 dimensions, which is much more efficient than using Linear SVM Weight or ReliefF alone.

Wipawan Buathong, Phayung Meesad
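One plausible way to combine the two rankings the abstract names is sketched below in Python. The tiny ReliefF variant (one nearest hit and one nearest miss per sample) and the rank-sum combination rule are assumptions for illustration, not the authors' exact procedure; binary class labels are assumed.

```python
import numpy as np
from sklearn.svm import LinearSVC

def relieff_scores(X, y):
    """Simplified ReliefF: reward features that separate each sample from
    its nearest miss more than from its nearest hit."""
    n, d = X.shape
    scores = np.zeros(d)
    for i in range(n):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                      # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], dists, np.inf))
        miss = np.argmin(np.where(y != y[i], dists, np.inf))
        scores += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return scores / n

def combined_ranking(X, y, top_k=20):
    # linear SVM weights as the first feature-importance signal
    svm_w = np.abs(LinearSVC(dual=False).fit(X, y).coef_).ravel()
    rff = relieff_scores(X, y)
    # combine by summing the two rank orders (lower combined rank = better)
    ranks = np.argsort(np.argsort(-svm_w)) + np.argsort(np.argsort(-rff))
    return np.argsort(ranks)[:top_k]
```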
Dhaka Stock Exchange Trend Analysis Using Support Vector Regression

In this study we combine a support vector machine (SVM) with a windowing operator in order to predict the share market trend as well as the share price. The instability of the time series data is one of the main causes of decreased prediction accuracy in this analysis. On the other hand, certain SVM parameters, such as C, ε and g, should be carefully determined to gain high accuracy. In order to solve these problems, we use the windowing operator as a preprocessing step to feed highly reliable input to the SVM model, and train the model in an iterative process to find the best combination of SVM parameters. This study is carried out on some listed companies of the Dhaka Stock Exchange (DSE), Bangladesh. The training and testing data sets are real values collected from DSE; four years of historical data (2009-2012) are used in this analysis. Finally, we compare the output with the real trend from DSE.

Phayung Meesad, Risul Islam Rasel
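The windowing-plus-parameter-search idea is straightforward to sketch. The Python example below turns a price series into (lagged-window, next-value) pairs and grid-searches the SVR parameters C, epsilon and gamma that the abstract mentions. The random-walk series stands in for DSE data, and the window width and parameter grids are assumptions.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

def make_windows(series, width=5):
    # each input row is `width` consecutive prices; the target is the next one
    X = np.array([series[i:i + width] for i in range(len(series) - width)])
    y = np.array(series[width:])
    return X, y

prices = np.cumsum(np.random.randn(300)) + 100.0   # stand-in for DSE prices
X, y = make_windows(prices, width=5)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    {"C": [1, 10, 100], "epsilon": [0.01, 0.1], "gamma": [0.01, 0.1]},
    cv=TimeSeriesSplit(n_splits=3))                # respect temporal order
grid.fit(X, y)
print(grid.best_params_)
```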
Ontology-Driven Automatic Generation of Questions from Competency Models

The paper explores some pedagogical affordances of machine-processable competency models. Self-assessment is a crucial component of learning. However, creating effective questions is time-consuming because it may require considerable resources and the skill of critical thinking. Very few systems are currently available which generate questions automatically, and these are confined to specific domains. Using ontologies and Semantic Web technologies, certain limitations in automation, integration, and reuse of data across diverse applications can be overcome. This paper presents a system for automatically generating questions from a competency framework. This novel design and implementation involves an ontological database that represents the intended learning outcome to be assessed across a number of dimensions, including the level of cognitive ability and the structure of the subject matter. This makes it possible to guide learners in developing questions for themselves, and to provide authoring templates which speed the creation of new questions for self-assessment. The system generates a list of all the questions that are possible from a given learning outcome. Such learning outcomes were collected from the INFO1013 ‘IT Modeling’ course at the University of Southampton. The way in which the system has been designed and evaluated is discussed, along with its educational benefits.

Onjira Sitthisak, Lester Gilbert, Dietrich Albert
A Comparative Study on Handwriting Digit Recognition Classifier Using Neural Network, Support Vector Machine and K-Nearest Neighbor

The aim of this paper is to analyze the efficiency of three classifiers, which are tested and compared to find the best technique. They were evaluated on a standard database of handwritten digits. Not only the recognition rate is considered; other issues (e.g., error rate, misclassified image rate and computing time) are also analyzed. The presented results show that SVM is the best classifier for recognizing handwritten digits, obtaining the highest recognition rate (96.93%); however, its training time is its main drawback. Conversely, other methods, like neural networks, give only slightly worse results, but their training is much quicker. All of the techniques also exhibit an error rate of 1-4% because of confusion between the digits 1 and 7, or 3, 5 and 8, respectively.

Chayaporn Kaensar
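A comparison of this kind is easy to reproduce in miniature. The Python sketch below runs the three classifiers from the abstract on scikit-learn's small built-in digits dataset, which is a stand-in, not the database used in the paper; the hyperparameters are defaults chosen for illustration.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# train each classifier and report held-out accuracy
for name, clf in [("SVM", SVC(kernel="rbf", gamma="scale")),
                  ("Neural network", MLPClassifier(max_iter=1000)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=3))]:
    score = clf.fit(X_train, y_train).score(X_test, y_test)
    print(f"{name}: {score:.4f}")
```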
Printed Thai Character Recognition Using Standard Descriptor

The various font-types, font-sizes, and font-styles have a great impact on the recognition performance of optical character recognition (OCR) systems. This becomes a grand challenge for recognition improvement. In order to enhance the performance, this paper proposes printed Thai character recognition using a standard descriptor. The descriptor construction consists of two principal phases—preprocessing and feature extraction. In the former phase, the preprocessing provides a standard form for each character image. In the latter phase, the singular value decomposition (SVD) is applied to all font-type, font-size, and font-style character images to extract features. Then the standard descriptor is constructed from the suitable order selection of the SVD feature decomposition. Finally, the projection matrix technique is applied in the recognition phase to measure the cosine similarity between the standard descriptor and the test set. The experimental results show that the proposed method achieves a high recognition rate and is invariant to font-types, font-sizes, and font-styles.

Kuntpong Woraratpanya, Taravichet Titijaroonrog
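The core of an SVD-based descriptor with cosine matching can be sketched in a few lines of Python. The truncation order k and the random stand-in "character images" are assumptions; the paper's preprocessing and projection-matrix details are omitted.

```python
import numpy as np

def svd_descriptor(img, k=8):
    # keep the top-k singular values as a compact, style-tolerant feature
    s = np.linalg.svd(img.astype(float), compute_uv=False)
    v = s[:k]
    return v / np.linalg.norm(v)      # unit-normalise for cosine matching

def cosine_similarity(a, b):
    return float(np.dot(a, b))        # descriptors are already unit-norm

reference = svd_descriptor(np.random.rand(32, 32))   # stand-in character
test = svd_descriptor(np.random.rand(32, 32))
print(cosine_similarity(reference, test))
```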
Development of an Image Processing System in Splendid Squid Grading

Quality inspection of commercial squids is a labor intensive process. This study proposes an approach to developing a computer vision system for size and species classification of squids. The two species were differentiated by the distinction of their mantle shapes. A Multi-Layer Perceptron (MLP) with a back propagation algorithm was used to sort squid samples into pre-defined sizes based on a standard of the National Bureau of Agricultural Commodity and Food Standards (ACFS). Features extracted from squid images, including area, perimeter and length of the squid mantle, were used as inputs to the network. Differences between species could be distinguished by using the ratio of length to width of the squid mantle. Results showed that approximately 90% classification accuracy could be achieved by the approach proposed in this study.

Nootcharee Thammachot, Supapan Chaiprapat, Kriangkrai Waiyakan
Cardiac Auscultation with Hybrid GA/SVM

Cardiac auscultation is the act of listening to heart sounds in order to analyze the condition of the heart. This paper proposes an alternative screening system for patients using a hybrid GA/SVM. The GA/SVM technique allows the system to classify heart sounds according to the heart condition with high accuracy, by using the GA in the feature selection part of the system. This method improves the training input samples of the SVM, resulting in a better trained SVM for classifying heart sounds. The GA is used to generate the best set of weighting factors for the processed heart sound samples. The system is low cost but highly accurate.

Sasin Banpavichit, Waree Kongprawechnon, Kanokwate Tungpimolrut
Real-Time FPGA-Based Human Iris Recognition Embedded System: Zero-Delay Human Iris Feature Extraction

Nowadays most iris recognition algorithms are implemented as sequential operations running on central processing units (CPUs). Conventional iris recognition systems use a frame grabber to capture a high quality image of an eye; the system then locates the pupil and iris boundaries, unwraps the iris image, and extracts the iris image features. In this article we propose a prototype design based on a pipeline architecture and combinational logic implemented on a field-programmable gate array (FPGA). We speed up the iris recognition process by localizing the pupil and iris boundaries, unwrapping the iris image and extracting features of the iris image while image capture is still in progress. Consequently, live images of the human eye can be processed continuously, without any delay or lag. We conclude that iris recognition acceleration by pipeline architecture and combinational logic can be a complete success when implemented on low-cost FPGAs.

Amirshahram Hematian, Suriayati Chuprat, Azizah Abdul Manaf, Sepideh Yazdani, Nadia Parsazadeh
Reconstruction of Triple-wise Relationships in Biological Networks from Profiling Data

Reconstruction of biological networks from profiling data is one of the biggest challenges in systems biology. Methods that use measures from information theory to reconstruct local relationships in biological networks are often preferred over others due to their simplicity and low computation cost. Existing mutual information-based methods cannot detect or quantify relationships that involve more than two variables (called multivariate relationships or k-wise relationships). Some previous studies have tried to extend mutual information from two to multiple variables; however, the interpretation of these extensions is not clear. We introduce a novel interpretation and visualization of mutual information between two variables. With the new interpretation, we then extend mutual information to multiple variables in a way that can capture different categories of multivariate relationships. We illustrate the prediction performance of these multivariate mutual information measures in reconstructing three-variable relationships on different benchmark networks.

Quynh Diep Nguyen, Tho Hoan Pham, Tu Bao Ho, Van Hoang Nguyen, Dang Hung Tran
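One standard way to extend mutual information to three variables, sketched below in Python, is the interaction information I(X;Y;Z) = I(X;Y|Z) - I(X;Y), computed here from empirical counts of discrete variables. This is a textbook extension used for illustration; the paper's own interpretation and estimators may differ, and the XOR data is synthetic.

```python
import numpy as np

def entropy(*cols):
    # empirical joint entropy (bits) of one or more discrete columns
    _, counts = np.unique(np.column_stack(cols), axis=0, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_info(x, y):
    return entropy(x) + entropy(y) - entropy(x, y)

def interaction_info(x, y, z):
    # I(X;Y|Z) - I(X;Y): positive for synergy, negative for redundancy
    cond_mi = entropy(x, z) + entropy(y, z) - entropy(x, y, z) - entropy(z)
    return cond_mi - mutual_info(x, y)

rng = np.random.default_rng(0)
x, y = rng.integers(0, 2, 1000), rng.integers(0, 2, 1000)
z = x ^ y                        # XOR: a purely synergistic triple
print(mutual_info(x, y), interaction_info(x, y, z))   # ~0 and ~+1
```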
SRA Tool: SOFL-Based Requirements Analysis Tool

Capturing requirements in order to produce a complete and accurate specification for developing software systems requires a deep understanding of domain knowledge. This becomes more difficult if the system analyst is dealing with a large-scale system specification without specific guidance on how to construct it. At the same time, the system analyst is expected to be an expert in the formal language used for presenting the specification. Constructing a requirements specification has therefore become a big challenge for the system analyst. In this paper, we present a tool suite for guiding a system analyst in constructing the specification. The tool manipulates knowledge to guide the system analyst at the requirements analysis stage. The guiding process consists of two steps: an informal and a semi-formal step. Our evaluation indicated that this approach does facilitate the process of constructing the specification during requirements analysis.

A. R. Mat, A. B. Masli, N. H. Burhan, S. Liu
Egyptian Vulture Optimization Algorithm – A New Nature Inspired Meta-heuristics for Knapsack Problem

In this paper we introduce for the first time a new nature-inspired meta-heuristic algorithm called the Egyptian Vulture Optimization Algorithm, which primarily favors combinatorial optimization problems. The algorithm is derived from the nature, behavior and key skills of the Egyptian Vulture in acquiring food for its livelihood. These spectacular, innovative and adaptive acts make the Egyptian Vulture one of the most intelligent birds of its kind. The details of the bird’s habits and the mathematical modeling steps of the algorithm are illustrated, demonstrating how the meta-heuristic can be applied to the global solution of combinatorial optimization problems; it has been studied on the traditional 0/1 Knapsack Problem (KSP) and tested on several datasets of different dimensions. The results of applying the algorithm to KSP datasets show that it performs well with respect to the optimal value and offers scope for use in similar problems such as path planning and other combinatorial optimization problems.

Chiranjib Sur, Sanjeev Sharma, Anupam Shukla
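For reference, the 0/1 knapsack problem the algorithm is benchmarked on has an exact dynamic-programming baseline, sketched below in Python. This is the standard DP, not the Egyptian Vulture algorithm itself, whose operators the abstract does not specify; the item values and weights are illustrative.

```python
def knapsack(values, weights, capacity):
    # best[c] = maximum value achievable with total weight <= c
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # reverse: use each item once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], capacity=50))  # -> 220
```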
Optical Music Recognition on Windows Phone 7

Optical Music Recognition (OMR) software currently on the market is not normally designed for music learning and ad hoc interpretation; it usually requires scanned input of music scores to perform well. In our work, we aimed to remove this inconvenience by using photos captured by a mobile phone’s camera as the input. With a cloud-based architecture and a design that does not assume perfect image orientation and lighting conditions, we were able to eliminate many of the software’s architectural and algorithmic problems while still maintaining an overall decent performance.

Thanachai Soontornwutikul, Nutcha Thananart, Aphinun Wantanareeyachart, Chakarida Nukoolkit, Chonlameth Arpnikanondt
Applying Smart Meter and Data Mining Techniques to Predict Refrigeration System Performance

This study presents six data mining techniques for predicting the coefficient of performance (COP) of refrigeration equipment: artificial neural networks (ANNs), support vector machines (SVMs), classification and regression trees (CART), multiple regression (MR), generalized linear regression (GLR), and the chi-squared automatic interaction detector (CHAID). Based on COP values, abnormal equipment conditions such as refrigerant leakage can be evaluated. Experimental results from cross-validation are compared to determine the best models. The study shows that data mining techniques can predict COP accurately and efficiently. In the liquid leakage phase, ANNs provide the best performance; in the vapor leakage phase, the best model is the GLR model. The models built in this study are effective for evaluating refrigeration equipment performance.

Jui-Sheng Chou, Anh-Duc Pham
Extended Knowledge Management Capability Model for Software Process Improvement

Software Process Improvement (SPI) is always included as a necessary activity in IT organizations that give importance to process, as it has been proven through its application in many organizations that a good process is likely to bring good product and project performance. While performing SPI, knowledge usually arises from different sources, in different formats and from different activities. Managing knowledge is therefore a challenging issue for software organizations. This paper presents an Extended Knowledge Management Capability Model (EKMCM) for SPI, focusing on the Supplier Agreement Management (SAM) Process Area of CMMI. The proposed EKMCM is a set of defined processes classified into five levels, integrating three aspects: the knowledge management process, human resource development and the use of IT to support the process. The proposed EKMCM model can be applied to other CMMI processes, and organizations may choose to perform at any knowledge management level according to their readiness.

Kamolchai Asvachaiporn, Nakornthip Prompoon
Record Searching Using Dynamic Blocking for Entity Resolution Systems

Entity Resolution, also known as data matching or record linkage, is the task of identifying records from several databases that refer to the same entities. The efficiency of a blocking method is hindered by large blocks, since the resulting number of record pairs is dominated by the sizes of these large blocks, and researchers are still investigating how to handle them. Because the same blocking method can yield very different results on different datasets, selecting a suitable blocking method for a given record linkage algorithm and dataset requires significant domain knowledge. Much research in entity resolution has concentrated on improving the matching quality, making entity resolution scalable to very large databases, or reducing the manual effort required throughout the resolution process. In this paper, we propose an efficient record searching approach using dynamic blocking in entity resolution systems.

Aye Chan Mon, Mie Mie Su Thwin
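The blocking idea, and why large blocks hurt, can be seen in a minimal Python sketch: records are grouped by a blocking key, and candidate pairs are generated only inside each block, so pair counts grow quadratically with block size. Splitting oversized blocks with a secondary key is a crude stand-in for the paper's dynamic blocking; the records and keys are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

records = [
    {"id": 1, "surname": "smith", "city": "bangkok"},
    {"id": 2, "surname": "smith", "city": "bangkok"},
    {"id": 3, "surname": "smyth", "city": "bangkok"},
]

def candidate_pairs(records, max_block=1000):
    blocks = defaultdict(list)
    for r in records:
        key = r["surname"][:3]                 # coarse blocking key
        if len(blocks[key]) >= max_block:      # split an oversized block
            key = (key, r["city"])             # ...with a secondary key
        blocks[key].append(r["id"])
    # only pairs within the same block are compared downstream
    return [p for ids in blocks.values() for p in combinations(ids, 2)]

print(candidate_pairs(records))   # -> [(1, 2)]; record 3 falls elsewhere
```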
A New Fast and Robust Stereo Matching Algorithm for Robotic Systems

In this paper, we propose a new area-based stereo matching method that improves the classical Census transform. It is a difficult task to match corresponding points in two images taken by stereo cameras, especially under varying illumination and non-ideal conditions. The classic Census non-parametric transform improved the accuracy of the disparity map under these conditions, but it also has disadvantages: the results were not robust under different illumination, and because of its complexity the performance was not suitable for real-time robotic systems. To solve these problems, this paper presents an improved Census transform that uses the maximum intensity differences between the pixel at the center of a defined window and the pixels in its neighborhood. This reduces complexity, obtains better performance, and needs a smaller window size to reach the best accuracy compared to the Census transform. Experimental results show that the proposed method achieves better efficiency in terms of speed and robustness against illumination changes.

Masoud Samadi, Mohd Fauzi Othman
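For context, here is a minimal Python sketch of the classic Census transform the paper builds on: each pixel becomes a bit string encoding which neighbours in a small window are darker than the centre. The paper's improvement based on maximum intensity differences is not reproduced; the window radius and random test image are assumptions.

```python
import numpy as np

def census_transform(img, radius=1):
    """Classic Census: one bit per neighbour, 1 if neighbour < centre."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue                      # skip the centre pixel
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

img = (np.random.rand(6, 8) * 255).astype(np.uint8)
print(census_transform(img))
```

Stereo matching then compares left- and right-image Census codes along a scanline by Hamming distance, which is what makes the transform tolerant to monotonic illumination changes.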
Ten-LoPP: Tensor Locality Preserving Projections Approach for Moving Object Detection and Tracking

In recent years, automatic moving object detection and tracking has been a challenging task for many computer vision applications, such as video surveillance, traffic monitoring and activity analysis. Many methods based on different approaches have been proposed. Despite its importance, moving object detection and tracking in complex environments is still far from being completely solved for low resolution videos, foggy videos, and infrared video sequences. A novel scheme for moving object detection based on the Tensor Locality Preserving Projections (Ten-LoPP) approach is proposed, and a moving object is then tracked based on the centroid and area of the detected object. A number of experiments are conducted on indoor and outdoor video sequences from the standard PETS, OTCBVS and Videoweb Activities datasets, as well as our own collected video sequences, including partial night vision sequences. The results obtained are satisfactory and competitive, and a comparative study is performed with well-known traditional subspace learning methods.

M. T. Gopala Krishna, M. Ravishankar, D. R. Rameshbabu
CAMSHIFT-Based Algorithm for Multiple Object Tracking

This paper presents a technique for object tracking using the CAMSHIFT algorithm, which tracks an object based on color. We aim to improve the CAMSHIFT algorithm by adding a multiple-target tracking function [1]. When one object is selected as a template, the algorithm searches for objects that have the same hue value and shape, using shape recognition; the inputs of the algorithm are thus the hue values and the shape of the object. When all objects are absent from the frame, the algorithm searches the whole frame to find the most similar-looking objects and tracks them. An important task in object tracking is to separate a target from the background, including cases where noise is present in the tracking frame. An object identification method was therefore added to the algorithm to filter the noise and count the number of objects, in order to decide how many targets to track.

Sorn Sooksatra, Toshiaki Kondo
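Single-object CAMSHIFT, the building block the paper extends, is available in OpenCV and can be sketched as below in Python. The video file name and initial window are placeholders, and the paper's shape-recognition and multi-target logic are not reproduced here.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("video.avi")             # hypothetical input file
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 60                   # placeholder initial window
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])  # hue histogram
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # back-project the hue histogram to get a color-likelihood image
    back_proj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, (x, y, w, h) = cv2.CamShift(back_proj, (x, y, w, h), criteria)
    pts = np.intp(cv2.boxPoints(rot_box))       # rotated box -> 4 corners
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) == 27:                   # Esc to quit
        break
```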
Backmatter
Metadata
Title
The 9th International Conference on Computing and Information Technology (IC2IT2013)
Editors
Phayung Meesad
Herwig Unger
Sirapat Boonkrong
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-37371-8
Print ISBN
978-3-642-37370-1
DOI
https://doi.org/10.1007/978-3-642-37371-8
