About this Book

This book constitutes the refereed proceedings of the 4th International Conference on Soft Computing, Intelligent Systems, and Information Technology, ICSIIT 2015, held in Bali, Indonesia, in March 2015. The 34 revised full papers presented together with 19 short papers, one keynote, and two invited talks were carefully reviewed and selected from 92 submissions. The papers cover a wide range of topics related to intelligence in the era of Big Data, such as fuzzy logic and control systems; genetic algorithms and heuristic approaches; artificial intelligence and machine learning; similarity-based models; classification and clustering techniques; intelligent data processing; feature extraction; image recognition; visualization techniques; intelligent networks; cloud and parallel computing; strategic planning; intelligent applications; and intelligent systems for enterprise, government, and society.



On the Relation of Probability, Fuzziness, Rough and Evidence Theory

Since the appearance of the first article on fuzzy sets, proposed by Zadeh in 1965, the relationship between probability and fuzziness in representing uncertainty has been a subject of debate. The question is whether probability theory by itself is sufficient for dealing with uncertainty. This paper revisits the question in order to clarify the relationship between probability and fuzziness through the process of perception. It is readily seen that probability and fuzziness work in different areas of uncertainty. Using fuzzy sets, an ill-defined event, called a fuzzy event, can be described, and probability theory then provides the probability of a fuzzy event, in which a fuzzy event might be regarded as a generalization of a crisp event. Similarly, rough set theory proposes a rough event, representing two approximate events, namely the lower and upper approximate events, and probability theory then provides the probability of a rough event. Finally, the paper shows and discusses the relations among belief-plausibility measures (evidence theory), lower-upper approximate probability (probability of rough events), classical probability measures, the probability of fuzzy events, and the probability of generalized fuzzy-rough events.
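The probability of a fuzzy event referred to above is Zadeh's (1968) definition, which for a finite sample space can be written as:

```latex
% Probability of a fuzzy event \tilde{A} (Zadeh, 1968), finite case:
% the membership function \mu_{\tilde{A}} weights the ordinary
% probabilities of the outcomes.
P(\tilde{A}) \;=\; \sum_{x \in X} \mu_{\tilde{A}}(x)\, P(x)
```

When the membership function takes only the values 0 and 1, this reduces to the classical probability of a crisp event, which is the sense in which a fuzzy event generalizes a crisp one.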

Rolly Intan

Fuzzy Logic and Control System


A Study of Laundry Tidiness: Laundry State Determination Using Video and 3D Sensors

In recent years, housework automation has become popular with the rise of robot technology. However, tidying laundry is still done manually; automating it by machine is difficult because clothing has an irregular, complex shape. We propose that a machine can handle laundry by combining depth information with color images. The purpose of this study is to develop a laundry state determination system. The study defines the state of laundry by dividing it into four states and develops a laundry state determination system using RGB images and 3D information. The experimental results suggest that the proposed system can accurately determine the state of the laundry using the depth information and RGB camera images provided by a Kinect.

Hirose Daiki, Miyoshi Tsutomu, Maiya Kazuki

Direction Control System on a Carrier Robot Using Fuzzy Logic Controller

In an autonomous mobile robot system, the ability to control the robot manually is also needed. For that reason, a mechatronic system and control algorithm for a carrier robot are designed and realized using a fuzzy logic controller. The carrier robot system consists of the robot mechatronics unit and the control center unit. Commands sent from the control center unit via a local network are received by the embedded system, forwarded to the microcontroller, and translated into the carrier robot's maneuvers. An error-correction algorithm using a fuzzy logic controller is applied to regulate the actuators' speed. This fuzzy logic controller is implemented on an embedded system with limited computational resources, and it gets its input from a rotary encoder and a magnetometer installed on the robot. Using direction-error protection, the fuzzy logic controller is able to detect and correct direction errors whose values exceed the predetermined thresholds (±3° and ±15°). The carrier robot has been able to run straight for 15 m with an average deviation of 22.2 cm. The fuzzy logic controller responds with a speed-compensation value to maintain the direction of the carrier robot.
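A minimal sketch of the kind of fuzzy speed compensation described above (a Sugeno-style controller; the membership bounds and output singletons are hypothetical values for illustration, not the authors'):

```python
# Illustrative sketch, not the paper's implementation: map a heading
# error (degrees) to a speed-compensation value via fuzzy rules.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def speed_compensation(error_deg):
    # Fuzzify the error into three hypothetical sets.
    mu = {
        "left":   tri(error_deg, -30.0, -15.0, 0.0),
        "center": tri(error_deg, -3.0, 0.0, 3.0),
        "right":  tri(error_deg, 0.0, 15.0, 30.0),
    }
    # One singleton output (compensation) per rule; weighted average defuzzifies.
    out = {"left": +0.5, "center": 0.0, "right": -0.5}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values())
    return num / den if den else 0.0

print(round(speed_compensation(-15.0), 2))  # fully "left" -> 0.5
```

The same structure extends to more sets and rules; on a resource-limited microcontroller the membership evaluation is typically replaced by lookup tables.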

Kevin Ananta Kurniawan, Darmawan Utomo, Saptadi Nugroho

Multidimensional Fuzzy Association Rules for Developing Decision Support System at Petra Christian University

Academic records of student candidates and students of Petra Christian University (PCU) have so far been stored without being used to generate information. PCU's top-level management needs a way to generate information from these records; the generated information is expected to support its decision-making process.

Before starting the application development, the student academic records and the needs of top-level management were analyzed. The design stage produced a number of models that are used to build the application.

The final result of the development is an application that can generate information using multidimensional fuzzy association rules.

Yulia, Siget Wibisono, Rolly Intan

Genetic Algorithm and Heuristic Approaches


Genetic Algorithm for Scheduling Courses

At university, college students must register for their classes, but many students are confused about how to build a good class schedule for themselves, mainly because of the many variables and considerations involved: how demanding the classes they are going to take are, their exam schedules, and their own availability. A genetic algorithm is one of many methods that can be used to create a schedule. The method finds the best schedule using a fitness calculation that compares the quality of one schedule against another; then, using crossover, mutation, and elitism selection, it derives better schedules. Based on the results of a survey, 70% of respondents gave a rating of 4 and 30% gave a rating of 5 out of 5 for the quality of the schedules made with this application.
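The fitness-crossover-mutation-elitism loop described above can be sketched as follows; the course/timeslot encoding, population sizes, and the clash-counting fitness are illustrative assumptions, not the paper's actual model:

```python
# Toy GA: assign 6 courses to 4 timeslots while minimizing clashes.
import random

random.seed(42)
N_COURSES, N_SLOTS = 6, 4  # hypothetical problem size

def fitness(schedule):
    # Fewer courses sharing a timeslot -> higher fitness (0 is unreachable
    # here: 6 courses in 4 slots force at least 2 clashes).
    return -(len(schedule) - len(set(schedule)))

def crossover(a, b):
    cut = random.randrange(1, N_COURSES)
    return a[:cut] + b[cut:]

def mutate(s, rate=0.1):
    return [random.randrange(N_SLOTS) if random.random() < rate else g
            for g in s]

pop = [[random.randrange(N_SLOTS) for _ in range(N_COURSES)]
       for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:2]            # elitism: carry the best two over unchanged
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(pop) - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print(fitness(best))
```

A real course-scheduling fitness would also weight exam conflicts, class difficulty, and student availability, as the abstract describes.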

Gregorius Satia Budhi, Kartika Gunadi, Denny Alexander Wibowo

Optimization of Auto Equip Function in Role-Playing Game Based on Standard Deviation of Character’s Stats Using Genetic Algorithm

A genetic algorithm is a well-known optimization approach for unknown, complex cases that cannot be solved with conventional methods.

In role-playing games (RPGs), the main features are usually the character's stats and equippable items. A character has stats, namely strength, defense, speed, agility, and life, and equippable items can boost these stats. Items are obtained randomly when an enemy dies.

A problem arises when the player has so many items that choosing the best is hard. The latest item is not always the best, because items in RPGs usually do not boost all stats equally; an item often reduces one stat while increasing another.

Based on this, a function is built in this research to auto-equip items based on the standard deviation of the character's stats after equipping. The genetic algorithm evaluates the best combination of gloves, armor, and shoes. The algorithm involves evaluating the initial population (item combinations), selection, crossover, mutation, elitism, and creating a new population, and it stops when the best fitness remains stable for three successive generations.

After the auto-equip process, the character is significantly stronger than with the default equipment, as measured by the remaining life after fighting several enemies.

Kristo Radion Purba

The Design of Net Energy Balance Optimization Model for Crude Palm Oil Production

Net energy balance (NEB) is the second most important indicator, after greenhouse gases, for developing a sustainable biodiesel industry. The length of the production chain and the various ways of reducing fossil-fuel use increase the complexity of finding an optimal NEB value for the industry. The main objective of this study was to design an NEB optimization model supported by a genetic algorithm (GA). The model was applied to a crude palm oil (CPO) plant in North Sumatra province that produces raw material for biodiesel, and it was solved using an optimization software package. The results show that the optimized NEB value was better than the previous one. The model was also able to provide a biomass usage composition that achieves the optimal NEB value and to identify the unit processes that need to be improved.

Jaizuluddin Mahmud, Marimin, Erliza Hambali, Yandra Arkeman, Agus R. Hoetman

ACO-LS Algorithm for Solving No-wait Flow Shop Scheduling Problem

In this paper, we propose a metaheuristic approach for the no-wait flow shop scheduling problem with respect to the makespan criterion. In the literature, this problem is known to be NP-hard, and several algorithms have been proposed to solve it. We propose a hybridization of the ant colony optimization (ACO) algorithm with local search (LS), which we call the ACO-LS algorithm. The local search technique improves the quality of the resulting solutions, and an insert-remove mechanism is developed to help the search escape from local optima. The proposed algorithm is tested on 31 well-known flow shop benchmark instances. Computational results on these benchmarks and statistical performance comparisons are reported, showing that the proposed ACO-LS algorithm is more effective than the hybrid differential evolution (HDE) algorithm [Qian B. et al., Computers & Industrial Engineering, 2009].

Ong Andre Wahyu Riyanto, Budi Santosa

A New Ant-Based Approach for Optimal Service Selection with E2E QoS Constraints

In this paper, we study dynamic service composition as a decision problem over which component services should be selected under end-to-end (E2E) QoS constraints. The problem is modeled as a nonlinear optimization problem under several constraints. Two approaches exist: local selection is an efficient strategy with polynomial time complexity but cannot handle global constraints, while the traditional global selection approach based on integer linear programming can handle global constraints but performs too poorly for practical applications with dynamic requirements. To overcome these disadvantages, we propose a novel Min-Max Ant System algorithm that uses utility-based heuristic information to search for a global approximate optimum. The experimental results show that our algorithm is more efficient and feasible than recently proposed related algorithms.

Dac-Nhuong Le, Gia Nhu Nguyen

Artificial Intelligence and Machine Learning


Implementation Discrete Cosine Transform and Radial Basis Function Neural Network in Facial Image Recognition

Facial image recognition has been widely used and implemented in many aspects of life, such as investigation or security, yet research in this area is still ongoing. The source images in this paper are taken from the image library provided in the description of the Collection of Facial Images, 2008. The paper explains how 35 faces in JPG format with dimensions of 180 x 180 pixels can be represented by only 3 x 3 DCT coefficients and still be recognized with 100% accuracy by a Radial Basis Function network.
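The reduction from a 180 x 180 image to a 3 x 3 block of DCT coefficients can be sketched as below; the random test image stands in for the face data, and the orthonormal DCT-II construction is a standard textbook formulation, not code from the paper:

```python
# Keep only the low-frequency (top-left) 3x3 block of a 2-D DCT.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0, :] = np.sqrt(1.0 / n)  # DC row has its own normalization
    return m

def dct2_lowfreq(img, keep=3):
    # Assumes a square image; the 2-D DCT is separable: D @ img @ D.T.
    d = dct_matrix(img.shape[0])
    coeffs = d @ img @ d.T
    return coeffs[:keep, :keep]  # 3x3 low-frequency feature

img = np.random.default_rng(0).random((180, 180))  # placeholder "face"
feat = dct2_lowfreq(img)
print(feat.shape)  # (3, 3)
```

The 3 x 3 block captures the coarsest spatial structure of the face; the resulting 9-dimensional vector is what would be fed to the RBF classifier.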

Marprin H. Muchri, Samuel Lukas, David Habsara Hareva

Implementation of Artificial Intelligence with 3 Different Characters of AI Player on “Monopoly Deal” Computer Game

Monopoly Deal is a card game derived directly from the traditional Monopoly game; it is played with a set of cards, and the aim is to collect three different sets of property cards. This research explores the potential for further developing a simulation of this game by implementing the rules and strategies of Monopoly Deal. The BFS method is used as the game algorithm. The AI players are divided into three characters, namely Aggressive, Defensive, and Normal; each character influences the handling of assets and the selection of the card or money being played. The implementation consists of the characters, an asset-checking system, and the computer-based card game itself with AI players. The testing results show that the combination of an aggressive character and logical asset handling achieves the highest winning rate, 45%.

Irene A. Lazarusli, Samuel Lukas, Patrick Widjaja

Optimizing Instruction for Learning Computer Programming – A Novel Approach

Computer programming is a highly cognitive skill requiring mastery of many domains, but in reality many learners cannot cope with the mental demands of learning programming, which leads to rote learning and memorization. One of the main reasons is that novice learners experience high cognitive load while learning programming: lacking well-defined schemas, and limited by working memory, they cannot assimilate the required knowledge. Many types of learning support, such as visualization, are provided to reduce cognitive load, in the expectation that visualization will effectively expand working memory; however, the effect of visualization on learning is not clearly tangible.

There are two common ways to measure cognitive load, namely physiological and non-physiological measures. Our prior studies found non-physiological measures, which include the NASA TLX rating scale, more appropriate for measuring cognitive load in a classroom situation. We also observed in prior studies that learning performance with the same visualization support varies among students in a homogeneous group. This variation is attributed to the contents of long-term memory, which include prior mathematical skills, prior language skills, demographics, and gender.

This paper proposes a framework to optimize instruction for learners based on their background profile and the extent of their long-term memory, employing a neural network to suggest the best visualization tool for each learner. The paper also validates the performance of the proposed tool in a study with learners evaluating its accuracy.

Muhammed Yousoof, Mohd Sapiyan

Sequential Pattern Mining Application to Support Customer Care “X” Clinic

The X clinic, one of the pioneering aesthetics clinics in Indonesia, has much experienced manpower. Drawing on this experience, the clinic's doctors and nurses can serve consultations and give suggestions or recommendations to customers for follow-up treatment, which makes it comfortable for customers to make decisions. However, not all customers have enough time to consult with a doctor. Using FreeSpan, one of the algorithms of the sequential pattern mining method, is expected to satisfy this need for the clinic's customers. Sequential pattern mining in this recommendation system can help doctors improve their recommendations and help customers make decisions. The algorithm uses the historical treatment data of existing customers, and the program outputs patterns that match the situation at the X clinic. The recommendations are selected based on existing customer categories, namely gender, customer priority at the clinic, and age range; with these categories, the system is expected to give recommendations that match each customer's criteria.

Alexander Setiawan, Adi Wibowo, Samuel Kurniawan

Similarity-Based Models


The Comparation of Distance-Based Similarity Measure to Detection of Plagiarism in Indonesian Text

Easy access to information through the Internet has caused plagiarism based on copy-paste-modify practices to grow rapidly. Many methods, algorithms, and even software tools have been developed to prevent and detect plagiarism, usable broadly and not limited to a certain subject. Research on plagiarism detection for the Indonesian language is developing steadily, although not as significantly as for English. This paper examines several distance-based similarity measures that can be used to assess similarity in Indonesian text, namely Dice's similarity coefficient, cosine similarity, and the Jaccard coefficient, implemented together with the Rabin-Karp algorithm commonly used to detect plagiarism in Indonesian. The analysis technique is fingerprint analysis: a fingerprint of each document is created according to a predetermined n-gram value, and the similarity value is then computed from the number of fingerprints shared between texts. A small text about information systems was tested, divided into four document variants: the original text; 50% of the original text combined with 50% of another text; the original text with 50% modified using synonyms and paraphrases; and the original text with some passages repositioned. The experimental results show that cosine similarity performs better in terms of accuracy than the Dice and Jaccard coefficients. The model is expected to serve as an alternative among statistical algorithms that apply n-grams, especially for detecting plagiarism in Indonesian text.
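The fingerprint-and-compare step can be sketched as follows; the sample sentences, n = 3, and the use of Python's built-in hash in place of a full rolling Rabin-Karp hash are simplifying assumptions:

```python
# Hash character n-grams into a fingerprint set, then compare sets
# with the three measures the paper evaluates.
import math

def fingerprints(text, n=3):
    text = "".join(text.lower().split())  # normalize: lowercase, no spaces
    return {hash(text[i:i + n]) for i in range(len(text) - n + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

def cosine(a, b):
    # Set-based cosine: binary term vectors.
    return len(a & b) / math.sqrt(len(a) * len(b))

fa = fingerprints("information system for plagiarism detection")
fb = fingerprints("plagiarism detection for information system")
print(jaccard(fa, fb), dice(fa, fb), cosine(fa, fb))
```

Reordered text keeps most word-internal n-grams, so all three measures stay well above zero; only the word-boundary n-grams differ.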

Tari Mardiana, Teguh Bharata Adji, Indriana Hidayah

Document Searching Engine Using Term Similarity Vector Space Model on English and Indonesian Document

In line with technological development, the number of digital documents has increased significantly, making it harder to find a particular document; the role of search engines has therefore become inevitable.

Usually, search engines conduct a search simply by looking at the similarity between keywords (input by the user) and terms in a document. In this research, we implement the Term Similarity Vector Space Model (TSVSM), a method that also considers the relationships between the terms in a document. The relationship between terms is calculated based on their frequency of co-occurrence in a paragraph, which makes it possible to find documents that do not contain the exact input keywords but contain terms related to them.

We apply TSVSM to English-language documents from the CiteseerX journal collection [1] and to Indonesian-language documents from the journal collection of the Petra Christian University Research Center (both in PDF format, 14,000 documents in total). The application was built using Microsoft Visual Basic .NET 2005 and PHP.

Testing shows that the method can establish relationships between related terms and thereby find documents that do not contain the keywords themselves but contain related terms. The time needed to search the Indonesian-language journals is relatively longer than for the English-language journals.

Andreas Handojo, Adi Wibowo, Yovita Ria

Knowledge Representation for Image Feature Extraction

In computer vision, the features of an image can be extracted as information using a deep learning approach. This information can be used for further processing, for example to establish a visual semantic: a sentence that describes the image. Usually this information is stored from a database point of view, which captures the relation between image features and image descriptions. This research proposes a knowledge representation point of view for storing the information gathered from image feature extraction, from which new benefits can be obtained. Two benefits of using knowledge representation instead of a database point of view are the ability to integrate other sources of information from related knowledge-based systems and the possibility of producing high-level specific knowledge.

Nyoman Karna, Iping Suwardi, Nur Maulidevi

Using Semantic Similarity for Identifying Relevant Page Numbers for Indexed Term of Textual Book

A back-of-book index is a navigation tool that helps the reader jump immediately to a page containing relevant information about a specific term, retrieving information about specific topics without having to read the complete book. Indexed terms are usually determined by the author based on subjective preferences about which terms should be indexed and which pages are relevant; indexing a book therefore inherits the author's subjectivity. The indexing effort, and its consistency, scale with the size of the book, so page numbers do not always refer to relevant pages. This paper proposes an approach to identify the relevancy of a page that contains an indexed term by measuring the semantic relation between the indexed term and the respective sentences on the page, using a semantic distance algorithm based on the WordNet thesaurus. We measure the reliability of our system by its degree of agreement with the book indexer using the kappa statistic. The experimental results show that the proposed approach can be considered as good as the domain expert, with an average kappa value of 0.6034.
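The kappa statistic used above to measure agreement with the book indexer can be computed as below; the ten hypothetical (system, human) relevance judgments are made up for illustration:

```python
# Cohen's kappa: observed agreement corrected for chance agreement.

def cohen_kappa(pairs):
    n = len(pairs)
    po = sum(a == b for a, b in pairs) / n  # observed agreement
    labels = {x for p in pairs for x in p}
    # pe: agreement expected by chance from each rater's marginals.
    pe = sum((sum(a == l for a, _ in pairs) / n) *
             (sum(b == l for _, b in pairs) / n) for l in labels)
    return (po - pe) / (1 - pe)

# 8 of 10 judgments agree; both raters lean toward "relevant" (1).
pairs = [(1, 1)] * 6 + [(0, 0)] * 2 + [(1, 0), (0, 1)]
print(round(cohen_kappa(pairs), 3))  # (0.8 - 0.58) / 0.42 ≈ 0.524
```

A kappa around 0.6, as the paper reports, is conventionally read as moderate-to-substantial agreement beyond chance.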

Daniel Siahaan, Sherly Christina

Classification and Clustering Techniques


The Data Analysis of Stock Market Using a Frequency Integrated Spherical Hidden Markov Self Organizing Map

In the stock market, predicting stock price fluctuations is important for investors, but it is hard for beginners because of the difficulty of estimating a company's state. To estimate a company's state, we propose a model based on a spherical Self-Organizing Map that integrates a frequency vector with a Hidden Markov Model to estimate hidden states from time-series data. In this paper, power-company stock price movements are used as the time-series data, and we also show results using the improved Self-Organizing Map.

Gen Niina, Tatsuya Chuuto, Hiroshi Dozono, Kazuhiro Muramatsu

Attribute Selection Based on Information Gain for Automatic Grouping Student System

Cooperative learning is an approach in which students learn together in small groups to solve problems, and it can raise students' abilities beyond individual learning. One of the keys to the success of cooperative learning is group formation: heterogeneity within groups can improve cognitive performance. The problem is that assigning students to appropriate groups is hard and time-consuming. A student has many attributes that define their characteristics, both academic and non-academic, such as motivation, self-interest, learning style, friends, gender, age, parents' educational background, and other unique traits. The purpose of this study is to determine the most influential attribute in the grouping process by calculating the information gain of each attribute, so that some attributes can be pruned. The experiment shows that the most influential and relevant attribute in forming a group is learning style.
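The attribute ranking described above can be sketched with a small information-gain computation; the four toy student records and their attribute values are hypothetical:

```python
# Information gain of an attribute = entropy of the class label minus
# the expected entropy after splitting on that attribute.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr, target):
    base = entropy([r[target] for r in rows])
    n = len(rows)
    remainder = 0.0
    for v in {r[attr] for r in rows}:
        subset = [r[target] for r in rows if r[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return base - remainder

rows = [
    {"style": "visual", "gender": "F", "group": "A"},
    {"style": "visual", "gender": "M", "group": "A"},
    {"style": "audio",  "gender": "F", "group": "B"},
    {"style": "audio",  "gender": "M", "group": "B"},
]
# "style" predicts the group perfectly; "gender" tells us nothing.
print(info_gain(rows, "style", "group"), info_gain(rows, "gender", "group"))  # 1.0 0.0
```

Attributes whose gain is near zero are the ones the study can safely drop before grouping.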

Oktariani Nurul Pratiwi, Budi Rahardjo, Suhono Harso Supangkat

Data Clustering through Particle Swarm Optimization Driven Self-Organizing Maps

The Self-Organizing Map (SOM) is a well-known method for unsupervised learning that projects high-dimensional data onto a low-dimensional topological map, making it an efficient tool for visualization and dimensionality reduction. Particle Swarm Optimization (PSO) is a swarm-intelligence metaheuristic optimization algorithm inspired by the social flocking behavior of birds. In this paper, we combine these two approaches into a PSO-SOM that is applied to clustering problems. The advantage of this method is reduced computational complexity and increased clustering accuracy, as demonstrated by the experimental results.

Tad Gonsalves, Yasuaki Nishimoto

Intelligent Data Processing


A Search Engine Development Utilizing Unsupervised Learning Approach

This article reports the development of a generic search engine using an unsupervised learning approach. This approach has become important because the growth rate of data has increased tremendously and challenges our capacity to write software algorithms and implementations around it; it is advocated here as a means of better understanding how the algorithm behaves in an uncontrolled environment. The engine uses a Depth-First-Search (DFS) retrieval strategy to retrieve pages through topical searching, and an inverted indexing technique is then applied to store mappings from content to its location in a database. These techniques require a careful approach to avoid a flood of irrelevant links, which can cause a poorly designed and constructed search engine to crash. The main idea of this research is to learn how to crawl, index, search, and rank output in an uncontrolled environment, in contrast to supervised learning conditions, where information overload is less of a concern.
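The inverted-indexing step mentioned above can be sketched as follows; the three sample pages and the AND-query semantics are illustrative assumptions, not the article's data:

```python
# Inverted index: map each term to the set of page IDs containing it.
from collections import defaultdict

pages = {
    1: "genetic algorithm for course scheduling",
    2: "fuzzy logic controller for a carrier robot",
    3: "genetic algorithm and fuzzy systems",
}

index = defaultdict(set)
for page_id, text in pages.items():
    for term in text.split():
        index[term].add(page_id)

def search(query):
    # AND semantics: return pages containing every query term.
    postings = [index[t] for t in query.split()]
    return sorted(set.intersection(*postings)) if postings else []

print(search("genetic algorithm"))  # [1, 3]
```

In a crawler, the `pages` dictionary would be filled by the DFS retrieval pass; the index itself stays the same.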

Mohd Noah Abdul Rahman, Afzaal H. Seyal, Mohd Saiful Omar, Siti Aminah Maidin

Handling Uncertainty in Ontology Construction Based on Bayesian Approaches: A Comparative Study

Ontologies are widely used to represent knowledge in many software applications. By default, ontology languages such as OWL and RDF are built on discrete logic and therefore cannot handle uncertain information about a domain. Various approaches have been made to represent uncertainty in ontologies, among them Bayesian approaches. Currently, there are four published approaches: BayesOWL (or OntoBayes), Multi-Entity Bayesian Networks (MEBN), Probabilistic OWL (PR-OWL), and Dempster-Shafer theory. This paper provides a comparative study of these approaches based on complexity, accuracy, ease of implementation, reasoning, and tool support. The study concludes that BayesOWL is the most recommended approach for handling uncertainty in ontology construction.

Foni Agus Setiawan, Wahyu Catur Wibowo, Novita Br Ginting

Applicability of Cyclomatic Complexity on WSDL

Complexity metrics are useful for predicting the quality of software systems because they quantify quality attributes. Web services, a new kind of software system, provide a common standard mechanism for interoperable integration of disparate systems and have gained broad acceptance among parties connected to the Internet for different purposes. In this respect, the quality of web services should be quantified for easy maintenance and quality of service, and the Web Services Description Language (WSDL) forms the basis of web services. In this paper, we evaluate the quality of WSDL documents by applying the cyclomatic complexity metric, a well-known and effective complexity metric that has not previously been used to evaluate the quality of WSDL.
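McCabe's cyclomatic complexity, which the paper applies to WSDL, is defined on a control-flow graph as V(G) = E - N + 2P, with E edges, N nodes, and P connected components. A minimal sketch (the if/else graph is a toy example, not a WSDL fragment):

```python
# V(G) = E - N + 2P for a control-flow graph.

def cyclomatic_complexity(edges, nodes, components=1):
    return len(edges) - len(nodes) + 2 * components

# if/else diamond: entry -> (then | else) -> exit
nodes = ["entry", "then", "else", "exit"]
edges = [("entry", "then"), ("entry", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 4 - 4 + 2 = 2
```

Applying the metric to WSDL presumably means deriving such a graph from the operations and message flows a WSDL document describes; the construction of that graph is the paper's contribution, not shown here.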

Sanjay Misra, Luis Fernandez-Sanz, Adewole Adewumi, Broderick Crawford, Ricardo Soto

Feature Extraction


Multiclass Fruit Classification of RGB-D Images Using Color and Texture Feature

Fruit classification under varying pose is still a complicated task due to the varied properties of the numerous types of fruit. In this paper we propose a fruit classification method with a novel descriptor combining color and texture features: the color feature is extracted from the segmented fruit image using the Color Layout Descriptor, while the texture feature is extracted using the Edge Histogram Descriptor. A Support Vector Machine (SVM) with linear and RBF kernels is used as the classifier with 10-fold cross-validation. The experimental results demonstrate that our descriptor achieves a classification accuracy of over 93.09% for fruit subcategories and 100% for fruit categories on over 4200 images in varying poses.

Ema Rachmawati, Iping Supriana, Masayu Leylia Khodra

Content-Based Image Retrieval Using Features in Spatial and Frequency Domains

With the rapid increase of image data due to the development of the information society, the need for image retrieval grows day by day. Image retrieval using the Local Binary Pattern (LBP), a robust feature obtained from the spatial domain, has been reported previously. Further information can be obtained as image features by extracting features from the frequency domain as well. In this paper, we propose a novel image retrieval algorithm that improves retrieval accuracy by using features obtained from both the spatial and frequency domains; the two-dimensional Discrete Cosine Transform (DCT) is used to calculate the frequency-domain feature. The Corel database is used to evaluate the proposed algorithm. We demonstrate that image retrieval using the combined features achieves a much higher search success rate than algorithms using DCT or LBP alone; the precision and recall rates of this study were higher than those of the preceding study, and better results were obtained using weights.
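The basic 3 x 3 LBP feature mentioned above can be sketched as follows; the neighbour ordering, bit weights, and the toy patch are common conventions chosen for illustration, not necessarily the paper's exact configuration:

```python
# LBP: code each pixel by thresholding its 8 neighbours against it.
import numpy as np

def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch."""
    c = patch[1, 1]
    # Clockwise from top-left; bit i gets weight 2**i.
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(neighbours))

patch = np.array([[9, 1, 9],
                  [1, 5, 9],
                  [1, 1, 9]])
print(lbp_code(patch))  # bits 0,2,3,4 set -> 1 + 4 + 8 + 16 = 29
```

Sliding this over the image and histogramming the 256 possible codes yields the spatial-domain feature vector that the paper combines with the DCT-based frequency feature.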

Kazuhiro Kobayashi, Qiu Chen

Feature Extraction for Java Character Recognition

Feature extraction is very important in character recognition; good character features increase recognition accuracy. In this research, feature extraction experiments on Java characters are carried out, where the extracted features can later be used for Java character recognition. Before feature extraction, segmentation is performed to isolate each Java character in an image, followed by a skeletonization process. After skeletonization, features are extracted, including simple closed curves, straight lines, and curves. Several experiments were conducted with various parameters and Java characters in order to obtain the optimal parameters. The results for simple closed curve and straight line extraction are quite good, reaching 82.85% and 74.28% respectively; however, curve detection is still poor, reaching only 51.42%.

Rudy Adipranata, Liliana, Meiliana Indrawijaya, Gregorius Satia Budhi

Fast Performance Indonesian Automated License Plate Recognition Algorithm Using Interconnected Image Segmentation

A reliable and fast algorithm is needed to implement license plate recognition in a real-life vehicle traffic environment. This paper proposes a hybrid algorithm of interconnected image segmentation and contour extraction to locate and identify the license plate in a given image. A pre-processed image is segmented into several small images based on a contour detection algorithm; the center of each segmented image is then plotted on a Euclidean plane to obtain their relative positions, and the distances and positions of the segments are evaluated in search of interconnected segments. Each set of segments is then fed into a cost function that determines its level of relevancy as a plate number, and the set with the highest score goes through character recognition to extract its meaning. The proposed method was tested on the Indonesian license plate system with a high rate of correctness and an average processing speed of 100 milliseconds.

Samuel Mahatmaputra Tedjojuwono

Image Recognition


A Study of Laundry Tidiness: Socks Pairing Using Video and 3D Sensors

In recent years, house robots have become popular with the advances in robot technology, but there is as yet no robot that tidies up laundry. We analyzed the functions required for robots to tidy up laundry and concluded that the pairing of socks is an important one. The purpose of this study is to develop a system that pairs socks automatically by robot. This paper presents the recognition part of this system, which includes deformation detection and sock-similarity determination using image processing and 3D information. The results show that the system could detect the deformation position and gripping point correctly, and could classify socks of the same pattern into the same group. The study concludes that the method of revising deformation using multiple sensors is effective.

Maiya Kazuki, Miyoshi Tsutomu, Hirose Daiki

Design and Implementation of Skeletonization

Image processing technology is well developed today, for example in object recognition in an image, and algorithms to obtain accurate recognition results continue to be developed. One example is handwriting recognition, usually used to archive records or documents from physical form, such as a notebook or a letter, into digital files. One of the initial processes in image processing is image segmentation, and one of the methods used is skeletonization.

This skeletonization uses Discrete Local Symmetry. The process begins by extracting the active contour of the image; a triangulation is then computed from the active contour, and the symmetry points are defined from that triangulation. The skeletonization process is then performed using the symmetry points obtained.

The results show that the size of an image greatly affects the outcome of skeletonization using Discrete Local Symmetry. The Discrete Local Symmetry method is suitable for ribbon-like shaped objects.

Kartika Gunadi, Liliana, Gideon Simon

A Computer-Aided Diagnosis System for Vitiligo Assessment: A Segmentation Algorithm

Vitiligo is a condition where depigmentation occurs in parts of the skin. Although vitiligo affects only 0.5% to 1% of the world’s population, vitiligo may impact patients’ quality of life greatly. The success of vitiligo treatment is influenced by the accuracy of vitiligo assessment method used for monitoring the treatment progress. A popular method for assessing vitiligo is the Vitiligo Area Scoring Index (VASI), where VASI score is calculated based on the visually observed degree of depigmentation in the skin and the area of depigmented skin. This approach has two limitations. First, it has inter-examiner variability, and second, its accuracy can be very low. While the first limitation can be addressed through training, the second limitation is more difficult to address. With visual observation, positive but small progress may be unnoticed since it usually takes some time to notice skin color changes manually. To overcome this limitation, we propose in this paper a vitiligo lesion segmentation algorithm as a part of a computer-aided diagnosis system for vitiligo assessment. The algorithm reads color skin image as input and makes use of the Fuzzy C-Means clustering algorithm and YCbCr and RGB color spaces to separate pigmented and depigmented skin regions in the image. The corresponding degree of depigmentation is then estimated. The algorithm was evaluated on low resolution Internet images of skin with vitiligo. The results are encouraging, indicating that the proposed algorithm has a potential for use in clinical and teledermatology applications.
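The clustering step can be illustrated with a minimal Fuzzy C-Means loop on 1-D intensities; the paper works with YCbCr and RGB colour features, so this scalar version (with a simple evenly spread initialization instead of the algorithm's usual random one) only shows the membership and centroid updates.

```python
def fcm_1d(values, c=2, m=2.0, iters=50):
    """Cluster scalar values into c fuzzy clusters; returns (centres, memberships)."""
    lo, hi = min(values), max(values)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]  # even-spread init
    for _ in range(iters):
        u = []
        for x in values:
            d = [abs(x - ck) + 1e-9 for ck in centers]  # tiny offset avoids /0
            # membership: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1))
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for i in range(c)])
        # centre update: mean of values weighted by membership^m
        centers = [sum(u[k][i] ** m * values[k] for k in range(len(values)))
                   / sum(u[k][i] ** m for k in range(len(values)))
                   for i in range(c)]
    return centers, u
```

On well-separated intensities (depigmented vs. pigmented skin), the two centres converge near the two cluster means.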

Arfika Nurhudatiana

Face Recognition for Additional Security at Parking Place

In a country with many car users, parking management and its security are complicated issues. This paper describes a facial recognition system using the Eigenface method to identify the drivers of four-wheeled vehicles. The driver's face is captured when the parking ticket is taken and recorded as a reference that is used to identify the driver again when the parking fee is paid. The tests performed show that the system can recognize the driver at the exit, but the results improve when the driver's face is captured by multiple cameras, giving the Eigenface recognition system enough data to recognize the face reliably. This system is expected to reduce the number of car thefts committed in parking places.
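The Eigenface method referred to above can be sketched as PCA over flattened face images, with identification by the nearest training projection in eigenspace. This minimal version assumes faces are already cropped and aligned; the function names are illustrative.

```python
import numpy as np


def train_eigenfaces(faces, k):
    """faces: (n, d) matrix of flattened training images; keep k eigenfaces."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # SVD of the centred data: rows of vt are the principal axes (eigenfaces)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    eigenfaces = vt[:k]                    # (k, d)
    weights = centred @ eigenfaces.T       # training projections, (n, k)
    return mean, eigenfaces, weights


def identify(probe, mean, eigenfaces, weights):
    """Return the index of the training face nearest to the probe in eigenspace."""
    w = (probe - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))
```

A probe image close to one training cluster is matched to a face from that cluster.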

Semuil Tjiharjadi, William Setiadarma

Optic Disc Segmentation Based on Red Channel Retinal Fundus Images

Glaucoma is one of the serious diseases that occur in the retina. Early detection of glaucoma can prevent patients from going blind. One technique to support the diagnosis of glaucoma is the detection and segmentation of the optic disc area, which is also useful in assisting automated detection of abnormalities in diabetic retinopathy. In this work, the red channel extracted from colour retinal fundus images is used. A median filter is applied to reduce noise in the red channel image, and segmentation of the optic disc is conducted based on morphological operations. The DRISHTI-GS dataset is used in this research. Results indicate that the proposed method achieves an accuracy of 94.546% in segmenting the optic disc.
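The pre-processing described above can be sketched as follows; the 3x3 median filter is standard, while the fixed intensity threshold here is a simplified stand-in for the paper's morphology-based segmentation.

```python
def median3x3(img):
    """3x3 median filter on a 2-D list of grey values (border left unchanged)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = sorted(img[rr][cc] for rr in (r - 1, r, r + 1)
                                     for cc in (c - 1, c, c + 1))
            out[r][c] = win[4]           # median of the 9 window values
    return out


def threshold(img, t):
    """Binary mask: 1 where the (red-channel) value reaches t, else 0."""
    return [[1 if v >= t else 0 for v in row] for row in img]
```

A single bright noise pixel is removed by the median filter before thresholding.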

K. Z. Widhia Oktoeberza, Hanung Adi Nugroho, Teguh Bharata Adji

Visualization Techniques


Multimedia Design for Learning Media of Majapahit

Majapahit was one of the last major empires of the region and is considered one of the greatest and most powerful empires in the history of Indonesia and Southeast Asia. However, learning history has become unpopular with the young generation, and interactive media can help make learning it enjoyable. This interactive multimedia application lets the reader interact with a combination of text, images, sound, and animations about Majapahit. There are three aspects of discussion, namely economics, politics, and arts and culture; each aspect provides information related to Majapahit. From the questionnaire results it can be concluded that the interactive multimedia form is more interesting than history books full of text. The audience is satisfied with the utility and the form of interaction, and their knowledge also improves after they use the application.

Silvia Rostianingsih, Michael Chang, Liliana

Adding a Transparent Object on Image

Nowadays image manipulation, which involves matting and compositing, is commonly used. The processed object may vary, but problems arise when the object is transparent: little research has been conducted on the issue because of the difficulty of extracting the transparency value and the object's refraction. In this research, an application is developed that performs matting of a transparent object and composites it into a new image.

The matting process is executed using the GrabCut method, followed by calculation of the object's transparency value using alpha matting. The transparency value is then used when compositing the object with a new background, while refraction is handled using the refraction calculation of ray tracing. To add a transparent object into a new image (the new background), the user first determines the eye position and the distance between the new image and the transparent object. The transparent object is then treated as a screen through which rays from the eye are traced: each ray is bent, hits the new image, and the colour of the new image is added into the transparent object's area.

Experimental results showed that the extracted transparency value was affected by lighting, by the refraction of the old background (which is unknown), and by the object's refraction index. The refraction result looks natural compared with the actual condition. Moreover, the eye position can be configured by the user to obtain the desired result.
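The compositing step described above follows the standard matting equation C = alpha * F + (1 - alpha) * B per channel; the sketch below shows only this blend, not the ray-traced refraction handling.

```python
def composite(fg, bg, alpha):
    """fg, bg: (r, g, b) tuples; alpha in [0, 1] from the matting step."""
    return tuple(round(alpha * f + (1 - alpha) * b)
                 for f, b in zip(fg, bg))


# A partially transparent red object (alpha 0.6) over a white background:
pixel = composite((200, 0, 0), (255, 255, 255), 0.6)  # (222, 102, 102)
```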

Liliana, Meliana Luwuk, Djoni Haryadi Setiabudi

3D-Building Reconstruction Approach Using Semi-global Matching Classified

The complexity of a city makes understanding the three-dimensional spatial structure of urban areas very important, for example in urban planning, telecommunications, updating maps and GIS information, Intelligent Transportation Systems, and monitoring the very rapid urban growth that occurs especially in cities of developing countries. Techniques for obtaining a Digital Surface Model (DSM) of a region continue to evolve, and high-resolution satellite imagery provides a promising alternative source of data and information, especially in countries where aerial photography and LIDAR technology are not available. A DSM alone is not sufficient to meet the need for data and information about a city: to generate data that can be understood and used further, additional processes are needed, such as separating objects, buildings from other buildings, roads, rivers, plants, and so on. This paper proposes a model for the automatic 3D reconstruction of buildings using satellite imagery data. In our approach, the basic Semi-Global Matching (SGM) method is combined with a detection and classification process, aiming at more accurate height estimation, faster computation time, and low computational load.

Iqbal Rahmadhian Pamungkas, Iping Supriana Suwardi

Intelligent Network


Spanning Tree Protocol Simulation Based on Software Defined Network Using Mininet Emulator

Software Defined Networking (SDN) is a new networking paradigm [1] that separates the control and forwarding planes in a network. This migration of control, formerly tightly bound in individual network devices, into accessible computing devices enables the underlying infrastructure to be abstracted for applications and network services, which can treat the network as a logical or virtual entity. There are three methods for implementing an SDN-based network architecture, using the Mininet emulator, NetFPGA, or an OpenFlow-based software switch [2]. In this research, the Spanning Tree Protocol (STP) is simulated on an SDN using the Mininet emulator. The SDN uses Open vSwitch (OVS) as the forwarding function, Ryu as the OpenFlow controller, and Mininet installed on a Raspberry Pi; the effect of using STP at the Ryu controller on network performance is then examined. The results of this research show that the simulation of SDN with OVS and the Ryu controller successfully runs the STP function, and that STP on the Ryu controller prevents broadcast storms.
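Independently of the emulator setup, the state STP converges to can be illustrated in a few lines: the switch with the lowest bridge ID becomes the root, and only shortest-path links toward it stay active, blocking the loop-forming links that would cause broadcast storms. The topology and IDs below are made up for illustration.

```python
from collections import deque


def spanning_tree(adj):
    """adj: dict switch_id -> set of neighbour ids. Returns the active links."""
    root = min(adj)                       # lowest bridge ID wins root election
    active, seen, q = set(), {root}, deque([root])
    while q:
        u = q.popleft()
        for v in sorted(adj[u]):          # deterministic tie-break
            if v not in seen:
                seen.add(v)
                active.add(frozenset((u, v)))   # this link forwards
                q.append(v)
    return active                          # links absent here are blocked
```

In a triangle of three switches, one of the three links is blocked, leaving a loop-free tree.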

Indrarini Dyah Irawati, Mohammad Nuruzzamanirridha

Varnish Web Cache Application Evaluation

Websites today have static and dynamic content, and performance concerns many people: visitors will leave or not return to a website whose loading time is too long. One way to speed up website access is by using the Varnish web cache, which stores a website's static content (data that changes infrequently or is rarely updated) in memory. A front end for web administrators was built for the Varnish application so that configuration can be done easily. The configuration simplified through this front end includes detecting SQL injection and Cross-Site Scripting (XSS), blocking files and folders, HTTP header manipulation, detecting whether the original file has been deleted or blocked, and error-handling configuration. The front-end application is implemented on the Linux platform. The Varnish application was tested using ApacheBench, where the number of client requests and the response time are the main test parameters.

Justinus Andjarwirawan, Ibnu Gunawan, Eko Bayu Kusumo

DACK-XOR: An Opportunistic Network Coding Scheme to Address Intra-flow Contention over Ad Hoc Networks

Network coding is a novel technology that exploits the intrinsic broadcast nature of wireless media to significantly reduce the number of transmissions (hops). Our primary focus is to minimize intra-flow contention by XOR-ing TCP-DATA and TCP-ACK packets belonging to the same TCP flow, ensuring that the packets are never delayed in the process. Network coding always comes with the overhead of intermediate nodes having to buffer packets so as to successfully perform decoding, which requires the nodes to maintain large buffers. We therefore propose a new technique in which only the last delivered packet is retained in the buffer. However, when the same node gets access to the medium repeatedly, keeping only the last sent packet in the buffer does not suffice; hence we introduce a rate-based Cross-Layer Transport Solution (CLTSP) that inserts a delay interval (called the out-of-interference delay) between two consecutive packet transmissions, substantially reducing intra-flow contention. Our combination of a network coding technique with a suitable TCP variant improves throughput, with significant improvement in handling medium contention by reducing the number of transmissions.
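The XOR step at the relay can be sketched directly: a DATA and an ACK packet of the same TCP flow are XOR-ed into a single transmission, and a receiver that already holds one of them recovers the other. Packet contents here are illustrative.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two packets, zero-padding the shorter one to equal length."""
    n = max(len(a), len(b))
    a, b = a.ljust(n, b'\0'), b.ljust(n, b'\0')
    return bytes(x ^ y for x, y in zip(a, b))


# The relay broadcasts one coded packet instead of two separate ones:
coded = xor_packets(b'TCP-DATA', b'ACK')
# A node already holding the ACK decodes the DATA (padding stripped):
assert xor_packets(coded, b'ACK').rstrip(b'\0') == b'TCP-DATA'
```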

Radha Ranganathan, Kathiravan Kannan, P. Aarthi, S. LakshmiPriya

Network Security Situation Prediction: A Review and Discussion

The rapid development of information technology exposes people's lives and work to the network. While people enjoy sharing their resources conveniently, network security issues have emerged. Instead of considering the security of single devices in the network, researchers have shown increasing interest in grasping the overall network situation as a big picture in order to create situation awareness, which consists of event detection, situation assessment, and situation prediction. As the highest level of situation awareness, network security situation prediction makes a quantitative prediction of the incoming network security posture based on historical and present security situation information. The purpose is to provide an informational reference that helps network managers formulate and implement timely preventive measures before the network is attacked. In this paper, the authors group the existing network security situation prediction mechanisms into three major categories and review the strengths and limitations of each model. The authors conclude that adaptive Grey Verhulst is most suitable for predicting the incoming network security situation.

Yu-Beng Leau, Selvakumar Manickam

Cloud and Parallel Computing


Lightweight Virtualization in Cloud Computing for Research

With the advancement of information technology and the wide adoption of the Internet, cloud computing has become one of the choices for researchers to develop their applications. Cloud computing has many advantages, particularly the ability to allocate on-demand resources without the need to build a specialized infrastructure or perform major maintenance. However, one of the problems researchers face is the availability of computing tools for their research. Docker is a lightweight virtualization technology for developers that can be used to build, ship, and run a range of distributed applications. This paper describes how Docker is deployed within a platform for bioinformatics computing.

Muhamad Fitra Kacamarga, Bens Pardamean, Hari Wijaya

A Cloud-Based Retail Management System

Retail management systems have been deployed extensively as web applications and stand-alone systems. However, in order to maximize return on investment while also improving on retail business efficiency and performance, it is imperative to explore newer technologies that can be leveraged. Cloud computing shows great potential in this regard; and so it is our aim in this paper to develop a cloud-based retail management system. We realize this by first designing the framework of the system and then implementing it.

Adewole Adewumi, Stanley Ogbuchi, Sanjay Misra

Towards a Cloud-Based Data Storage Medium for E-learning Systems in Developing Countries

The focus of this study is to propose a cloud computing data storage model for e-learning systems in developing countries. Cloud computing is an information technology platform that provides data storage, collaboration, and software execution hosting services via the Internet, and it is a technology trend with a significant impact on the teaching and learning environment. The idea behind this research is therefore to enhance the storage capacity and resource utilization of e-learning systems for universities in developing countries. The platform incorporates a cloud data storage medium to store all educational content and files of the e-learning system.

Temitope Olokunde, Sanjay Misra

Fast and Efficient Parallel Computations Using a Cluster of Workstations to Simulate Flood Flows

A strategy is proposed for fast and efficient parallel computations using a cluster of workstations to simulate flood flows. We use the ANUGA software to conduct flood simulations; ANUGA solves the two-dimensional shallow water equations using a finite volume method and can be run either sequentially or in parallel. We focus our work on parallel computations, implementing our computation scenarios on a cluster of workstations and assessing the results by execution time and efficiency. The proposed strategy gives a significant advantage when fast and efficient computations on a cluster of workstations are desired.
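Execution time and efficiency as mentioned above follow the standard parallel-performance definitions; the numbers below are illustrative, not the paper's measurements.

```python
def speedup(t_serial, t_parallel):
    """S = T1 / Tp: how many times faster the parallel run is."""
    return t_serial / t_parallel


def efficiency(t_serial, t_parallel, n_procs):
    """E = S / p: fraction of ideal linear speedup achieved per processor."""
    return speedup(t_serial, t_parallel) / n_procs


# e.g. a 400 s serial simulation finishing in 60 s on 8 workers:
s = speedup(400, 60)        # ~6.67x
e = efficiency(400, 60, 8)  # ~0.83, i.e. 83% of ideal
```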

Sudi Mungkasi, J. B. Budi Darmawan

Strategic Planning


A Simulation Model for Strategic Planning in Asset Management of Electricity Distribution Network

Asset management of an electricity distribution network is required to improve network reliability and thus reduce electrical energy distribution losses. Because strategic asset management requires long-term predictions, a simulation model is needed. Simulation of asset management is an approach to predict the long-term financial consequences of maintenance and renewal strategies in electrical energy distribution networks. In this research, the simulation method used is System Dynamics, chosen because it enables us to consider both internal and external influencing factors. To obtain the model parameters, we used PLN Pamekasan as the case study. The results showed that the condition of low voltage network assets declines on average by about 6% per year, transformer condition by about 6.6% per year, and medium voltage network assets by about 4.4% per year. Overall, the technical losses average 1,359,981.60 kWh/month, or about 16,319,779.24 kWh/year.
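The reported average declines can be projected with a simple geometric-decay stand-in; this is a rough sketch of what such yearly decline rates imply, not the System Dynamics model itself.

```python
def project_condition(initial, annual_decline, years):
    """Condition index after `years`, declining by `annual_decline` per year."""
    return initial * (1 - annual_decline) ** years


# Starting from a condition index of 100, after five years:
lv_after_5y = project_condition(100.0, 0.06, 5)    # low voltage assets, ~73.4
tx_after_5y = project_condition(100.0, 0.066, 5)   # transformers, ~71.1
mv_after_5y = project_condition(100.0, 0.044, 5)   # medium voltage assets
```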

Erma Suryani, Rully Agus Hendrawan, Eka Adipraja Philip Faster, Lily Puspa Dewi

Enhancing the Student Engagement in an Introductory Programming: A Holistic Approach in Improving the Student Grade in the Informatics Department of the University of Surabaya

Student engagement has long been known to increase student performance. However, bringing this concept into practice may not be as easy as it sounds: factors such as the teacher, the students and their background, the course content, the academic atmosphere, and the study culture influence its implementation. This paper presents the practice applied in the Informatics Department of the University of Surabaya to enhance student engagement in the introductory programming course (Algorithm and Programming). This course is commonly regarded as difficult and has caused many Informatics students at the University of Surabaya to drop their studies in the Department. The practice for enhancing student engagement is designed to fit the conditions in the Department. As a result, student performance increased by two grade levels compared with previous years.

Budi Hartanto

Business Process Maturity at Agricultural Commodities Company

Agricultural commodities companies nowadays strive to keep transforming their business processes in accordance with fast-changing demands in order to survive intense global competition. In an attempt to give stakeholders insight into the business process, this paper investigates how to model business processes with Business Process Modelling Notation and assesses process maturity using Gartner's model. This research is based on in-depth observation of the purchasing and production divisions of an agricultural commodities company. It is found that the business process model presents the knowledge of the business in its existing context. The other findings show that the maturity level of the people factor is at phase 1, the maturity level of IT has reached phase 3, and the other four factors (strategic alignment, culture and leadership, governance, and methods) have reached phase 2. These findings will help the company plan improvements for the future.

Lily Puspa Dewi, Adi Wibowo, Andre Leander

Innovation Strategy Services Delivery: An Empirical Case Study of Academic Information Systems in Higher Education Institution

Information and Communication Technology (ICT) contributes greatly to the rapid transformation of global society, and people have to adapt to the evolving demands of this trend. In this 21st century, Indonesians must go with this flow and keep at par with other countries. Hence, higher education institutions must undertake this fast transformation by serving stakeholders in real time with the support of ICT, as at University ABC in Jakarta, Indonesia. The existing Academic Information System (AIS) there is not yet fully used: some modules are still idle. This paper is an empirical case study using descriptive research, analyzing the existing data recorded in the system. The results of the study give suggestions for better service delivery at the institution.

John Tampil Purba, Rorim Panday

Intelligent Applications


Public Transport Information System Using Android

Traffic jams are getting worse, making people in Surabaya consider switching from private vehicles to public transport. The problem that arises is the lack of information about public transportation in Surabaya, which makes it quite difficult for people who want to use public transport.

With the development of technology, especially smartphones, which most people now use, the idea emerged to develop a mobile application that provides information on public transport services. The Android platform was chosen because current smartphones, from the lowest to the highest price, are dominated by the Android operating system. The application helps people choose the transport to use when traveling from one place to a particular destination with as little effort as possible, and it can assist in selecting an appropriate route, either direct or indirect.

Agustinus Noertjahyana, Gregorius Satia Budhi, Agustinus Darmawan Andilolo

Lecturers and Students Technology Readiness in implementing Services Delivery of Academic Information System in Higher Education Institution: A Case Study

ICT is now part of human needs in every activity, including education. Academic information systems in Indonesia have already adopted ICT, either partially or totally. How well an information system works depends on the readiness of the stakeholders in higher education, especially lecturers and students. This study aims to reveal the Technology Readiness (TR) of lecturers and students in the implementation of the academic system, referring to the TR model developed by Parasuraman and Colby. The research was conducted at XYZ University in Jakarta, with random samples of 260 lecturers and 251 students. Descriptive analysis and t-tests were used to draw conclusions. The results show that lecturers exhibited a significantly higher level of Optimism and Innovativeness towards using new technology than students did; for the other two dimensions, Discomfort and Insecurity, there are no significant differences between lecturers and students.
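The t-test comparison can be sketched with a Welch two-sample t statistic; the scores below are made-up illustrations, not the study's data (which covers 260 lecturers and 251 students).

```python
from math import sqrt
from statistics import mean, variance


def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    va, vb = variance(a) / len(a), variance(b) / len(b)
    return (mean(a) - mean(b)) / sqrt(va + vb)


# Hypothetical Optimism scores on a 5-point scale:
lecturers = [4.2, 4.0, 4.5, 4.1, 4.3]
students = [3.6, 3.8, 3.5, 3.7, 3.9]
t = welch_t(lecturers, students)   # large |t| suggests a significant gap
```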

Rorim Panday, John Tampil Purba

Tool Support for Cascading Style Sheets’ Complexity Metrics

Tools are a fundamental requirement for the acceptability of any metrics programme in the software industry, yet the majority of metrics proposed in the literature lack tool support. This is one of the reasons why they are not widely accepted by practitioners. To improve the acceptability of proposed metrics among software engineers who develop Web applications, the process needs to be automated. In this paper, we have developed a tool for computing metrics for Cascading Style Sheets (CSS), named CSS Analyzer (CSSA). The tool measures different metrics representing different quality attributes, including understandability, reliability, and maintainability, based on some previously proposed metrics. The tool was evaluated by comparing its results on 40 cascading style sheets with results obtained by computing the complexities manually. The results show that the tool computes in far less time than the manual process and is 51.25% accurate.
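The kind of quantities a CSS complexity counter like CSSA might compute can be sketched with simple regex-based counts; the metric names and parsing here are assumptions for illustration, not the tool's actual implementation.

```python
import re


def css_metrics(css: str):
    """Count rules, selectors, and declarations in a flat CSS string."""
    css = re.sub(r'/\*.*?\*/', '', css, flags=re.S)       # strip comments
    rules = re.findall(r'([^{}]+)\{([^{}]*)\}', css)      # (selector, body) pairs
    n_selectors = sum(len([s for s in sel.split(',') if s.strip()])
                      for sel, _ in rules)
    n_decls = sum(len([d for d in body.split(';') if d.strip()])
                  for _, body in rules)
    return {'rules': len(rules), 'selectors': n_selectors,
            'declarations': n_decls}
```

Note this flat parser does not handle nested at-rules such as `@media`; a real analyzer would use a proper CSS parser.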

Adewole Adewumi, Onyeka Emebo, Sanjay Misra, Luis Fernandez

Intelligent Systems for Enterprise, Government and Society


Generic Quantitative Assessment Model for Enterprise Resource Planning (ERP) System

An enterprise resource planning (ERP) system has been proven a powerful means of managing and integrating an enterprise, and the number of ERP implementations increases every year. However, some enterprises, especially those implementing such a large system for the first time, face difficulties in choosing a suitable ERP system: many types are available in the market, making it hard for enterprises to choose the one that is better and more suitable for them. Available approaches to ERP selection mostly cover only qualitative analysis, and the ones with quantitative support require proper knowledge of technology and statistics, making them hard to use directly in enterprises. This paper presents a general framework for assessing ERP systems quantitatively, based on selection criteria and critical success factors (CSFs) in implementation.

Olivia, Kridanto Surendro

The Implementation of Customer Relationship Management: Case Study from the Indonesia Retail Industry

PD. Cemara Sewu is a company engaged in distributing and selling photography products. Currently, its sales and inventory processes are still done manually, so the company needs a system to build good relationships with customers and make inventory management more effective and efficient. In this research, the application process began with design and continued through to system implementation; the designs made include a DFD and an ERD. The Electronic Customer Relationship Management application that was implemented makes buying and inventory management at PD. Cemara Sewu more organized and well structured, and supports relationships with customers. The system was built with the features planned and tailored to the previously identified needs, so that customers develop loyalty to the company, and the sales reports, stock cards, and forecasts from the application are accurate, matching manual calculations. Based on the evaluation, 85% of users said that the features in the application are good and in accordance with company needs.

Leo Willyanto Santoso, Yusak Kurniawan, Ibnu Gunawan

The Implementation of Customer Relationship Management and Its Impact on Customer Satisfaction, Case Study on General Trading and Contractor Company

This study concerns the implementation of Customer Relationship Management (CRM) and its impact on customer satisfaction at a general trading and contractor company serving various industrial sectors. The company's target is to increase profit by handling every customer or prospective customer personally. However, the manager found it difficult to control the performance and progress of his sales staff, so the service schedule promised to customers sometimes could not be fulfilled on time; this happened because there was no system to remind the manager about the service schedule. The implementation of CRM in this company includes assignment of sales staff to handle customers or prospective customers, reminders, online questionnaires, complaints, wish lists, and system notifications. The results of this study show that customer satisfaction increased by as much as 92% with the implementation of the following features: the system can create marketing campaigns, customers can make complaints online, and employees can create reminders for service schedules and other purposes.

Djoni Haryadi Setiabudi, Vennytha Lengkong, Silvia Rostianingsih

Towards e-Healthcare Deployment in Nigeria: The Open Issues

Information and Communication Technology (ICT) has played vital roles in many disciplines and is one of the driving forces behind globalization. It has closed the communication gap and enabled prompt decision making, supporting the United Nations Millennium Development Goals Target 18, which calls for maximizing the value of advancements in ICT through collaboration with the private sector. Healthcare as a discipline has also embraced this dynamic tool to fashion what is known as e-health, targeted at improving the health system of the people. Consumers are tired of the usual routine of waiting in long queues for appointments and struggling with inconvenient scheduling that keeps them from being fully at work or engaging in productive work. The fact remains that the healthcare system in developing countries has not kept pace with other sectors of society, especially in embracing the power of ICT. This paper reviews the open issues facing the adoption of e-healthcare in Nigeria, highlighting issues not discussed in the papers reviewed and giving plausible solutions for its effective adoption and expansion. In addition, a model is proposed for e-healthcare adoption in Nigeria.

Jumoke Soyemi, Sanjay Misra, Omoregbe Nicholas

