
2019 | Book

Recent Advances in Information and Communication Technology 2018

Proceedings of the 14th International Conference on Computing and Information Technology (IC2IT 2018)


About this book

This book contains the research contributions presented at the 14th International Conference on Computing and Information Technology (IC2IT 2018), organised by King Mongkut’s University of Technology North Bangkok and its partners and held in the northern Thai city of Chiang Mai in July 2018. As in previous years, IC2IT provides a forum for the exchange of ideas on the state of the art and on expected future developments in its field. Accordingly, this book contains chapters on topics in data mining, machine learning, natural language processing, image processing, networks and security, software engineering and information technology. With them, the editors want to foster inspiring discussions among colleagues, not only during the conference. The book is also intended to contribute to a deeper understanding of the underlying problems, as needed to solve them in complex environments, and, to this end, to encourage interdisciplinary cooperation.

Table of Contents

Frontmatter

Data Mining

Frontmatter
Combining Multiple Features for Product Categorisation by Multiple Kernel Learning

E-commerce provides convenience and flexibility for consumers; for example, they can inquire about the availability of a desired product and get an immediate response, and thus seamlessly search for any desired product. Every day, e-commerce sites are updated with thousands of new images and their associated metadata (textual information), creating a big-data problem. Retail product categorisation involves cross-modal retrieval that traces the path of a category. In this study, we leveraged both image vectors of various aspects and textual metadata as features, then constructed a set of kernels. Multiple Kernel Learning (MKL) combines these kernels in order to achieve the best prediction accuracy. We compared Support Vector Machine (SVM) prediction results between an individual feature kernel and an MKL-combined feature kernel to demonstrate the prediction improvement gained by MKL.
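As a minimal illustration of the kernel-combination idea (not the paper's actual MKL solver, which also learns the weights from data), a convex combination of base Gram matrices can be sketched in plain Python; the toy data, the two base kernels, and the weights below are all hypothetical:

```python
import math

def linear_kernel(x, y):
    # Inner product between two feature vectors
    return sum(a * b for a, b in zip(x, y))

def rbf_kernel(x, y, gamma=0.5):
    # Gaussian (RBF) kernel on the squared Euclidean distance
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def gram(kernel, X):
    # Gram matrix K[i][j] = kernel(X[i], X[j])
    return [[kernel(xi, xj) for xj in X] for xi in X]

def combine_kernels(grams, weights):
    # MKL-style convex combination: K = sum_m w_m * K_m, w_m >= 0, sum w_m = 1
    n = len(grams[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, grams))
             for j in range(n)] for i in range(n)]

# Toy features: e.g. one "image" dimension and one "text" dimension per product
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K_combined = combine_kernels([gram(linear_kernel, X), gram(rbf_kernel, X)],
                             weights=[0.6, 0.4])
```

The combined matrix remains a valid kernel, so it can be fed to any kernel machine (e.g. an SVM with a precomputed kernel).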

Chanawee Chavaltada, Kitsuchart Pasupa, David R. Hardoon
ARMFEG: Association Rule Mining by Frequency-Edge-Graph for Rare Items

Almost every economy has now entered the digital era. Most stores trade online and are highly competitive. Traditional association rule mining is not suitable for online trading because the information is dynamic and fast processing times are needed. Sometimes products in online stores are out of stock or unavailable, which must be addressed manually. Therefore, this work proposes a new algorithm, Association Rule Mining by Frequency-Edge-Graph (ARMFEG), that converts transaction data into a complete virtual graph and stores item counts in an adjacency matrix. By limiting the search space using the top weight, which is generated automatically during frequent item generation, ARMFEG is very fast during the rule generation phase and can find association rules over all items in the adjacency matrix, which solves the rare item problem.
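The adjacency-matrix bookkeeping the abstract describes can be illustrated as follows. This is a generic co-occurrence counter over transactions, not the ARMFEG algorithm itself, and the item names and transactions are invented:

```python
def build_adjacency(transactions, items):
    # Count pairwise co-occurrences of items across transactions; the
    # diagonal stores single-item frequencies (item counts).
    idx = {item: i for i, item in enumerate(items)}
    n = len(items)
    adj = [[0] * n for _ in range(n)]
    for t in transactions:
        t = [x for x in t if x in idx]
        for a in t:
            adj[idx[a]][idx[a]] += 1      # item frequency on the diagonal
        for i, a in enumerate(t):
            for b in t[i + 1:]:
                adj[idx[a]][idx[b]] += 1  # symmetric edge counts
                adj[idx[b]][idx[a]] += 1
    return adj

items = ["milk", "bread", "butter"]
transactions = [["milk", "bread"], ["milk", "bread", "butter"], ["bread"]]
adj = build_adjacency(transactions, items)
```

Rules over rare items can then be read off edge counts directly, since even low-frequency pairs are retained in the matrix.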

Pramool Suksakaophong, Phayung Meesad, Herwig Unger
Hybrid Filtering Approach for Linked Open Data Exploration Applied to a Scholar Object Linking System

In this paper, a hybrid filtering approach for Linked Open Data (LOD) exploration is proposed. This approach is applied to a scholar object linking system. Our approach includes three techniques: cosine similarity, co-relation and voting by in-links. The cosine similarity and term co-relation techniques are used to compute the similarity and relationship between each pair of resources during graph expansion. Voting by in-links is adapted to identify the importance of resources within the DBpedia graph. An evaluation study was conducted to assess the effectiveness of our approach. We found that the average precision of our filtering approach was three times higher than that of exploration without filtering, although the average recall was slightly lower. The evaluation results show that our approach can reduce information overload in LOD consumption.

Siraya Sitthisarn, Nariman Virasith
Sugarcane Yield Grade Prediction Using Random Forest with Forward Feature Selection and Hyper-parameter Tuning

This paper presents a Random Forest (RF) based method for predicting the sugarcane yield grade of a farmer’s plot. The dataset used in this work was obtained from a set of sugarcane plots around a sugar mill in Thailand. The training and test datasets contain 8,765 and 3,756 records, respectively. We propose forward feature selection in conjunction with hyper-parameter tuning for training the random forest classifier. The accuracy of our method is 71.88%. We compare the accuracy of our method with two non-machine-learning baselines. The first baseline uses the actual yield of the previous year as the prediction; in the second, the target yield of each plot is predicted manually by a human expert. The accuracies of these baselines are 51.52% and 65.50%, respectively. The accuracy results indicate that our proposed method can aid decision making in sugar mill operation planning.

Phusanisa Charoen-Ung, Pradit Mittrapiyanuruk
The Efficacy of Using Virtual Reality for Job Interviews and Its Effects on Mitigating Discrimination

Virtual reality (VR) is an emerging technology that has already found successful application in a variety of different fields, including simulation, training, education, and gaming. While VR technologies have been considered for use in recruitment practices, available research on the topic is limited. In all stages of the recruitment process, social categorization of job applicants based on ethnicity, skin color, and gender, as well as other forms of discrimination are contemporary issues. This study examined the efficacy of using virtual reality technology as part of job interview strategies and evaluated its potential to mitigate personal bias towards job applicants. The use of VR technology as a job interview concept has been shown to have a high rate of acceptance and initial barriers for first-time users are low. The method provides benefits for job applicants in terms of social equity and performance.

Rico A. Beti, Faris Al-Khatib, David M. Cook

Natural Language Processing

Frontmatter
Short Text Topic Model with Word Embeddings and Context Information

Due to the length limitation of short texts, classical topic models based on document-level word co-occurrence information fail to distill semantically coherent topics from short text collections. Word embeddings are trained on large corpora and inherently encode general word semantics, hence they can serve as supplemental knowledge to guide topic modeling. The General Pólya Urn Dirichlet Multinomial Mixture (GPU-DMM) model is the first attempt to leverage word embeddings as external knowledge to enhance topic coherence for short texts. However, because word embeddings are usually trained on a large external corpus, the encoded semantic information is not necessarily suitable for the training dataset at hand. In this work, we improve the GPU-DMM model by leveraging both context information and word embeddings to distill the semantic relatedness of word pairs, which can be further leveraged in the model inference process to improve topic coherence. Experimental results on two tasks over two real-world short text collections show that our model achieves comparable or better performance than GPU-DMM and other state-of-the-art short text topic models.

Xianchao Zhang, Ran Feng, Wenxin Liang
Temporal Analysis of Twitter Response and Performance Evaluation of Twitter Channels Using Capacitor Charging Model

As Twitter is one of the most popular social networks, analyzing the responses from users allows us to study user behavior as well as evaluate the popularity of Twitter channels. In this study, we present a novel framework for analyzing temporal Twitter responses using a capacitor charging model. The proposed model, inspired by electrical circuit analysis, can reveal the temporal characteristics of the responses to each Twitter post, which can be a better measure of channel popularity than the number of followers. Representing each post as a data point in the feature space, data clustering is used to determine the modal performance of each Twitter channel, which can reflect the channel’s popularity. The study illustrates the use of the proposed framework by comparing five news Twitter channels.
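The capacitor charging curve referred to above has the standard form V(t) = V_max(1 − e^(−t/τ)). A small sketch, assuming the cumulative response to a post follows this curve; the parameter names r_max and tau are our own:

```python
import math

def charging_curve(t, r_max, tau):
    # Capacitor-charging form: cumulative response approaches r_max
    # with time constant tau (analogous to RC in an electrical circuit).
    return r_max * (1.0 - math.exp(-t / tau))

def time_to_fraction(p, tau):
    # Invert the curve: time at which a fraction p of r_max is reached,
    # t = -tau * ln(1 - p)
    return -tau * math.log(1.0 - p)
```

Fitting (r_max, tau) to each post's response history would yield the per-post feature vector that the framework clusters.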

Sirisup Laohakiat, Photchanan Ratanajaipan, Krissada Chalermsook, Leenhapat Navaravong, Rachanee Ungrangsi, Aekavute Sujarae, Krissada Maleewong
On Evolving Text Centroids

Centroid terms are single words that semantically and topically characterise text documents and thus can act as their very compact representation in automatic text processing tasks. In this paper, a novel brain-inspired approach is presented to first simplify the determination of centroid terms and second to generalise the underlying concept at the same time. As the precision of the centroid-based text representation is improved by this means as well, new applications for centroids are derived, too. Experimental results obtained confirm the validity of this new approach.

Herwig Unger, Mario Kubek
Thai Word Safe Segmentation with Bounding Extension for Data Indexing in Search Engine

Word segmentation ambiguity in the Thai language affects the data indexing process by creating the inverted index according to the segmentation results, which leads to unreasonable search results. This article proposes a dictionary-based Thai word Safe segmentation algorithm to solve this problem, so that all distinct terms in an ambiguous part of a sentence remain queryable. Next, it presents a bounding extension to improve Safe segmentation performance. It also compares several off-the-shelf implementations of the trie data structure, which we believe is the best data structure for dictionary-based Thai word segmentation, and compares the efficiency of serialization libraries for de-serializing the trie in the analyzer’s initial state. Finally, it evaluates Safe segmentation with several implementations, called Safe Analyzer. The experimental results show that the linked-list trie and the Protostuff library give outstanding results. Safe segmentation definitively solves the ambiguity problem, but it cannot yet handle misspellings within the text accurately.

Achariya Klahan, Sukunya Pannoi, Prapeepat Uewichitrapochana, Rungrat Wiangsripanawan
Robust Kurtosis Projection Approach for Mangrove Classification

Mangroves are coastal vegetation growing at the interface between land and sea, found in tropical and subtropical tidal areas. Mangrove ecosystems play many ecological roles spanning forestry, fisheries, and environmental conservation. The Indonesian archipelago is home to a large mangrove population of enormous ecological value. This paper discusses mangrove land detection in North Jakarta from Landsat 8 satellite imagery. One of the special characteristics of mangroves that distinguishes them from other vegetation is their growing location. This characteristic makes mangrove classification using satellite imagery a non-trivial task, requiring an advanced method that can confidently detect the mangrove ecosystem from satellite images. The objective of this paper is to propose a robust algorithm using projection kurtosis and minimizing vector variance for mangrove land classification. The classification evaluation shows that the proposed algorithm performs well.

Dyah E. Herwindiati, Janson Hendryli, Sidik Mulyono

Machine Learning

Frontmatter
Diabetes Classification with Fuzzy Genetic Algorithm

In this research work, we consider diabetes classification on the PIMA Indian dataset with a Fuzzy Genetic Algorithm. We combined two algorithms, a fuzzy algorithm and a genetic algorithm, to enhance classification performance. In addition, we used the Synthetic Minority Over-sampling Technique (SMOTE) to handle the imbalanced dataset. We conducted experiments and found that 5-fold cross-validation is a suitable approach, providing very good results compared with those obtained from other techniques.

Wissanu Thungrut, Naruemon Wattanapongsakorn
Decision Fusion Using Fuzzy Dempster-Shafer Theory

One of the popular tools in decision making is decision fusion, since several sources may provide decisions for one task. Dempster’s rule of combination is one of the decision fusion methods used frequently in many research areas. However, there are many uncertainties in classifier outputs. Hence, we propose a fuzzy Dempster’s rule of combination (FDST) in which we fuzzify the discounted basic probability assignment and compute a fuzzy combination. We also have a rejection criterion for any sample with high belief in both classes rather than in only one of them. We ran the experiment with two classifiers, i.e., a support vector machine (SVM) and a radial basis function (RBF). We tested our algorithm on five datasets from the UCI machine learning repository and on SAR images of three military vehicle types. We compared our fusion results with those from the regular Dempster’s rule of combination (DST); all of our results are comparable to or better than those from DST.
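The classical (non-fuzzy) Dempster's rule of combination that FDST builds on can be sketched as follows; the two mass functions stand in for discounted classifier outputs and their numeric values are invented:

```python
def dempster_combine(m1, m2):
    # Dempster's rule of combination for mass functions over focal sets
    # (frozensets). `conflict` is the mass falling on the empty set,
    # which is normalized away.
    combined = {}
    conflict = 0.0
    for a, w1 in m1.items():
        for b, w2 in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + w1 * w2
            else:
                conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

A, B = frozenset("A"), frozenset("B")
AB = A | B                                  # the "either class" hypothesis
m_clf1 = {A: 0.6, B: 0.1, AB: 0.3}          # discounted BPA, classifier 1
m_clf2 = {A: 0.5, B: 0.2, AB: 0.3}          # discounted BPA, classifier 2
m = dempster_combine(m_clf1, m_clf2)
```

A rejection criterion of the kind the paper mentions would then inspect whether both singleton beliefs remain high after combination.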

Somnuek Surathong, Sansanee Auephanwiriyakul, Nipon Theera-Umpon
Enhanced Manhattan-Based Clustering Using Fuzzy C-Means Algorithm

Fuzzy C-Means is a clustering algorithm known to suffer from slow processing times, and one factor affecting it is the choice of an appropriate distance measure. While this drawback has been addressed with the Manhattan distance measure, doing so sacrifices accuracy for processing time. In this study, a new approach to distance measurement, incorporating trigonometric functions into the Manhattan distance calculation, is explored to address both the speed and accuracy issues of Fuzzy C-Means. When applied to clustering the Iris dataset, the new approach reduced processing time by three iterations compared with the Euclidean distance. Accuracy also improved, by 50% and 78% over the Euclidean and Manhattan distances, respectively. The results indicate that the new distance measure addresses both the slow processing time and the accuracy problems associated with the Fuzzy C-Means clustering algorithm.
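For context, here is a sketch of the standard Fuzzy C-Means membership update with a pluggable distance measure (plain Manhattan and Euclidean shown; the paper's trigonometric modification is not reproduced here, and the example point and centers are invented):

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def manhattan(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

def fcm_memberships(point, centers, dist, m=2.0):
    # Standard FCM membership update for one point:
    # u_k = 1 / sum_j (d_k / d_j)^(2/(m-1))
    d = [max(dist(point, c), 1e-12) for c in centers]  # guard divide-by-zero
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d[k] / d[j]) ** exp for j in range(len(d)))
            for k in range(len(d))]

centers = [[0.0, 0.0], [4.0, 4.0]]
u = fcm_memberships([1.0, 1.0], centers, manhattan)
```

Swapping `dist` is the only change needed to run the algorithm under a different measure, which is why the distance choice dominates both speed and accuracy.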

Joven A. Tolentino, Bobby D. Gerardo, Ruji P. Medina
The Cooperation of Candidate Solutions Vortex Search for Numerical Function Optimization

This study presents a cooperative candidate-solutions vortex search, called CVS, for solving numerical function optimization. The main motivation for CVS is that the Vortex Search (VS) algorithm has some drawbacks: although the original VS shows strong results, it can falter when updating the positions of the vortex swarm, because VS uses only a single center to generate candidate solutions. This disadvantage appears when VS faces multi-modal problems containing a number of local minima. To overcome these drawbacks, the proposed CVS generates cooperating swarms created from diverse points. The experiments were conducted on 12 benchmark functions. The capability of CVS was compared against five algorithms: DE, GWO, MFO, VS and MVS. The results show that CVS outperformed all of the compared algorithms.

Wirote Apinantanakon, Siriporn Pattanakitsiri, Pochra Uttamaphant
Integrating Multi-agent System, Geographic Information System, and Reinforcement Learning to Simulate and Optimize Traffic Signal Control

Traffic signal control (TSC) is an important problem that has attracted the interest of many researchers and urban managers. Simulating and optimizing TSC for real-time control systems has recently been investigated alongside developments in the Internet of Things (IoT). This paper proposes a new model integrating a multi-agent system, a geographic information system (GIS), and reinforcement learning to optimize TSC. The proposed simulation minimizes total waiting time. Moreover, the simulation is applied to Ba Dinh ward, Hanoi, Vietnam as a case study.

Hoang Van Dong, Bach Quoc Khanh, Nguyen Tran Lich, Nguyen Thi Ngoc Anh

Image Processing

Frontmatter
Digital Image Watermarking Based on Fuzzy Image Filter

This paper presents an image watermarking method based on a fuzzy image filter. In the embedding process, a unique Gaussian noise-like watermark is added to the blue color component of the RGB color space of the original image. The value of each embedding bit is secured by the use of a secret key-based stream cipher. In the extraction process, a blind detection approach is used, so the original image is not required. The decreasing weight fuzzy filter with moving average value (DWMAV) is applied to the watermarked component to remove the added watermark, so that the original blue color component can be estimated. The extracted watermark is finally obtained by subtracting the estimated blue color component from the watermarked one. For performance comparison, the weighted Peak Signal-to-Noise Ratio (wPSNR) is used to evaluate the quality of the watermarked image, the normalized correlation (NC) to evaluate the accuracy of the extracted watermark, and the Stirmark benchmark to evaluate the robustness of the embedded watermark. Experimental results confirm the superiority of the proposed watermarking method compared to two similar previous methods.

K. M. Sadrul Ula, Thitiporn Pramoun, Surapont Toomnark, Thumrongrat Amornraksa
Image Watermarking Against Vintage and Retro Photo Effects

Vintage and retro photo effects from various photo and camera applications are popular. They are typically obtained from various image processing techniques; however, when such effects are applied to a watermarked image, they can partially or fully destroy the watermark information inside it. In this paper, we propose an image watermarking method in the spatial domain in which the embedded watermark can survive vintage and retro photo effects. Technically, the watermark is embedded by modifying the luminance component in the YCbCr color space of a host color image. For watermark extraction, a mean filter is used to predict the original embedding component from the watermarked components, and each watermark bit is blindly recovered by differencing the two components. The watermarked image’s quality is evaluated by the weighted peak signal-to-noise ratio, while the extracted watermark’s accuracy is evaluated by the bit correction rate. The experimental results under various percentages of vintage and retro photo effects show that our proposed method is superior to the previous method.

Piyanart Chotikawanid, Thitiporn Pramoun, Pipat Supasirisun, Thumrongrat Amornraksa
Performance Analysis of Lossless Compression Algorithms on Medical Images

Medical images play an important role in medical science: through them, doctors can make more accurate diagnoses and treatment decisions for patients. However, medical images have large data sizes; therefore, data compression needs to be applied to them. This study presents a comparison of lossless compression techniques, the LOCO-I, LZW, and Lossless JPEG algorithms, tested on 5 modalities of medical images. All of the algorithms are theoretically and practically lossless, with an MSE equal to 0. LZW offers a higher compression ratio and a faster decompression process, but a slower compression process, than the other two algorithms. Lossless JPEG has the lowest compression ratio and requires more time both to compress and to decompress the images. Therefore, in this study, LZW is in general the best algorithm for compressing medical images.
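To make the "lossless means MSE = 0" criterion concrete, here is a sketch using Python's zlib (a DEFLATE codec, used here only as a stand-in; it is not one of the three algorithms compared, and the synthetic byte pattern is invented):

```python
import zlib

def compression_ratio(data: bytes) -> float:
    # Ratio of original size to compressed size (higher = better)
    return len(data) / len(zlib.compress(data, level=9))

def mse(a: bytes, b: bytes) -> float:
    # Mean squared error between two equal-length byte sequences
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Synthetic "image" with repetitive structure, loosely mimicking the
# smooth regions common in medical scans
original = bytes(x % 16 for x in range(4096))
restored = zlib.decompress(zlib.compress(original, level=9))
```

Any truly lossless codec must reproduce `original` bit-for-bit, so the MSE check is a sanity test rather than a quality metric.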

I Made Alan Priyatna, Media A. Ayu, Teddy Mantoro
Brain Wave Pattern Recognition of Two-Task Imagination by Using Single-Electrode EEG

This research aims to develop a method using electroencephalography (EEG) based brain-computer interface technology to enable people to answer a true/false question by imagining the answer in their mind, especially people with disabilities who cannot communicate in normal ways, since a single-electrode EEG can accurately recognize the internal state of mind. EEG signals were acquired from eight subjects on a frontal electrode pasted on the forehead between the left eye and the hairline (Fp1), for classification of the mental task into two classes. The mental tasks were induced by showing images with symbols or text that allowed the users to reply to a true/false question by imagining an answer. Event-related potential (ERP) features were extracted from the processed EEG signals for the two classes of mental task. The acquired brain wave data were processed with the OpenViBE platform. Classification was then performed using LibSVM and a Multi-Layer Perceptron (MLP), with parameters validated using 10-fold cross-validation, and the results were compared to select the classifier with the highest accuracy. The results show that the proposed method achieves a maximum accuracy of 91%.

Sararat Wannajam, Wachirawut Thamviset
Isarn Dharma Handwritten Character Recognition Using Neural Network and Support Vector Machine

In the last decade, handwritten character recognition has become one of the most attractive and challenging research areas in the field of image processing and pattern recognition. In this paper, we propose handwritten character recognition for the Isarn Dharma character. We collected the character images by scanning ancient palm leaf manuscripts. Feature extraction techniques including zoning, projection histograms, and the histogram of oriented gradients (HOG) were used to extract the feature vectors. An ANN and an SVM were used as classifiers in character recognition, and five-fold validation was used to evaluate the recognition results. The experimental results demonstrated that the SVM classifier outperformed the other methods for all feature extractions. The recognition accuracy achieved with HOG was outstanding, slightly better than HOG combined with zoning. This study further shows that a gradient feature like HOG significantly outperforms statistical features such as zoning and projection histograms.

Saowaluk Thaiklang, Pusadee Seresangtakul

Network and Security

Frontmatter
Subset Sum-Based Verifiable Secret Sharing Scheme for Secure Multiparty Computation

Despite the information-theoretic security of Shamir’s Secret Sharing Scheme and the ideality of Verifiable Secret Sharing in ensuring the honesty of the dealer and of the shared secret itself, the detection and removal of an adversary posing as a shareholder remains an open problem, as most existing approaches are computationally and communicationally complex. This paper proposes a verifiable secret sharing scheme using simple subset sum theory to monitor and remove a compromised shareholder in a secure multiparty computation. An analysis shows that the scheme incurs a minimal computational complexity of O(n) in the worst case and a variable-length communication cost depending on the length of the subset and the value of n.
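Since the paper's exact construction is not given in the abstract, the following is only a hypothetical sketch of an O(n) subset-sum-style scan that pinpoints a tampered share against verifiable running totals published at dealing time; the names `find_tampered` and `prefix_sums` and all values are invented, not the paper's scheme:

```python
def find_tampered(shares, expected_total, prefix_sums):
    # Illustrative O(n) scan: `prefix_sums` are running totals published
    # (in verifiable form) when the shares were dealt; the first prefix
    # that disagrees with the recomputed running sum pinpoints the
    # compromised shareholder.
    running = 0
    for i, s in enumerate(shares):
        running += s
        if running != prefix_sums[i]:
            return i
    return None if running == expected_total else len(shares) - 1

dealt = [13, 7, 21, 5, 9]
prefix = [13, 20, 41, 46, 55]       # running sums over `dealt`
received = [13, 7, 99, 5, 9]        # shareholder 2 altered its share
```

A real scheme would commit to these totals cryptographically; the point here is only that one linear pass over n shares suffices to localize the fault.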

Romulo L. Olalia Jr., Ariel M. Sison, Ruji P. Medina
The Semantics Loss Tracker of Firewall Rules

Firewall rules are frequently overlapped and duplicated. These problems are usually resolved by merging rules; however, merged rules sometimes lead to semantics loss. This paper proposes a tracker system, named SELTracker, for analyzing and alerting on the semantics loss of firewall rules while they are being merged. The SELTracker data structure is built from the Path Selection Tree (PST), which not only keeps all anomalous rules but also maintains normal rules. While firewall rules are being merged, SELTracker analyzes the merging rules against the PST. The test results show that the proposed system can effectively detect semantics loss; moreover, SELTracker can also detect all other anomalies.

Suchart Khummanee
Cryptanalysis of a Multi-factor Biometric-Based Remote Authentication Scheme

Since the birth of Internet technology, many remote user authentication protocols have been designed and developed so that secure communication between a user and a server is possible over an insecure channel. One of the first smart card-based biometric authentication schemes is the Li-Hwang protocol. Many researchers have found vulnerabilities in it and attempted to improve its security. Even so, this research finds several weaknesses in the latest protocol, proposed by Park et al. The vulnerabilities found include an attack on the integrity of a protocol message, the possibility of a replay attack, and the lack of proof of message authenticity.

Sirapat Boonkrong
Haptic Alternatives for Mobile Device Authentication by Older Technology Users

Turing tests are used to secure human interaction on the Internet. Tests such as CAPTCHA are based on the visual or auditory recognition of symbols, which elderly people find difficult to distinguish. A study examining the consistency of a tactile feedback-based Turing test identified an alternative to mainstream tests. This approach examines vibration-based sensitivity, detectable through skin surfaces, when used to touch the screen of a mobile device. The study concentrated on a range of rough, smooth, sticky and coarse textures as possible differentiators for swipe-based tactile authentication on mobile devices, examining the vibration-based touch screen capabilities of 30 elderly people over the age of 65. The results showed that tactile differentiation can be a viable alternative for device and security authentication in Turing tests such as those used for CAPTCHA and reCAPTCHA verification.

Kulwinder Kaur, David M. Cook
The New Modified Methodology to Solve ECDLP Based on Brute Force Attack

Elliptic curve cryptography (ECC) is a public key cryptosystem suited to devices with limited storage and low power, because ECC achieves the same security level as other public key cryptosystems with much shorter key lengths. ECC’s security rests on the Elliptic Curve Discrete Logarithm Problem (ECDLP), which is very difficult to solve. Many algorithms have been introduced to solve the problem; nevertheless, the efficiency of each depends on the characteristics of k in Q = kP, where Q and P are known points on the curve, and on the type of curve. In particular, brute force attack is one technique for solving the ECDLP, and it performs very well when k is small. However, to find k, the points 2P, 3P, 4P, ···, (k − 1)P and kP must all be computed, requiring k − 1 inversion processes; moreover, in a traditional brute force attack, the y-coordinates must be computed in every loop. In this paper, a new brute-force-based method, called Resolving the Elliptic Curve Discrete Logarithm Problem by Decreasing Inversion Processes and Finding only x-coordinates (RIX-ECDLP), is proposed. The key is to remove some inversion processes and the y-coordinates from the computation; in fact, every two point additions can be done with only one inversion process. The experimental results show that RIX-ECDLP can reduce computation time by about 10–20%, depending on the size of k and the prime number.
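The traditional brute force walk P, 2P, 3P, … described above can be sketched on a toy curve. This is the baseline attack with one inversion per addition, not the proposed RIX-ECDLP optimization; the small curve parameters are illustrative only:

```python
def inv_mod(a, p):
    # Modular inverse via Fermat's little theorem (p prime)
    return pow(a, p - 2, p)

def ec_add(P, Q, a, p):
    # Affine point addition on y^2 = x^3 + a*x + b over GF(p);
    # None represents the point at infinity. Costs one inversion.
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv_mod((x2 - x1) % p, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def brute_force_ecdlp(P, Q, a, p):
    # Walk P, 2P, 3P, ... until kP == Q: k - 1 additions, hence
    # k - 1 inversions -- the cost RIX-ECDLP aims to reduce.
    R, k = P, 1
    while R != Q:
        R = ec_add(R, P, a, p)
        k += 1
    return k

# Toy curve y^2 = x^3 + 2x + 2 over GF(17); P = (5, 1) is on the curve
a, p = 2, 17
P = (5, 1)
```

On this curve, recovering k from Q = kP takes k − 1 additions and as many inversions, which is exactly the bookkeeping the paper's x-only variant trims.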

Kritsanapong Somsuk, Chalida Sanemueang

Information Technology

Frontmatter
Generations and Level of Information on Mobile Devices Usage: An Entropy-Based Study

Mobile devices are now widely adopted in daily life, and several studies have examined their adoption in different contexts. However, previous research on the relationship between age cohort and the level of information in mobile devices has been inconsistent. Therefore, this study investigates the usage behavior of mobile devices by users of different generations using information theory, with mobile applications represented as a set of possible outcomes. Ninety-five datasets were collected using an online questionnaire and used to calculate entropy values, which were compared across 4 generations: baby boomers and generations X, Y, and Z. The results indicate that baby boomers have the lowest level of information (1.58 bits), followed by generation Z (2.48 bits), Y (2.69 bits), and X (3.11 bits). The present study showed that a self-evaluated questionnaire can be used to measure the level of information in mobile devices.
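The entropy measure underlying the study is standard Shannon entropy over the distribution of app usage; a sketch with invented usage profiles (the real study derives the distributions from questionnaire data):

```python
import math

def shannon_entropy(usage_counts):
    # H = -sum p_i * log2(p_i) over the distribution of app usage;
    # result is in bits, matching the paper's reported values
    total = sum(usage_counts)
    probs = [c / total for c in usage_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical profiles: spreading usage over more apps yields a
# higher level of information (more bits)
concentrated = [90, 5, 5]                        # mostly one app
spread = [12, 11, 13, 12, 13, 13, 13, 13]        # eight apps, evenly used
```

A user who spends all time in a single app has 0 bits; a perfectly even split over 2^H apps has exactly H bits.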

Charnsak Srisawatsakul, Waransanang Boontarig
Requirement Patterns Analysis and Design of Online Social Media Marketing for Promoting Eco-Tourism in Thailand

This research aimed to identify the requirements of an online social media marketing system for promoting eco-tourism in Thailand, and to design the system. The research methodology combined qualitative interviews with 60 people and a quantitative sample survey of 844 people in order to confirm the findings. The analysis showed that most of the findings were consistent. The content needed includes featured attractions, travel itineraries, pictures, the atmosphere of sights, and costs/expenses. The functionality needed includes a search facility for eco-tourism destinations, a search to identify travel routes, and the capability to share files, photos, and video. The content structure of the website’s home page should be organized by type of activity, by region, and by month. Finally, the system analysis and design were based on object-oriented principles in order to develop the system in the future.

Orasa Tetiwat, Vatcharaporn Esichaikul, Ranee Esichaikul
The Empirical Discovery of Near-Optimal Offline Cache Replacement Policies for Nonuniform Objects

Offline cache replacement policies are a solid foundation for developing online schemes. This paper presents the determination of near-optimal offline policies from suboptimal ones through trace-driven simulation of six existing offline policies (C0, C0*, Mattson's OPT, LFD-A, SMFD, and SMFD*) in conjunction with three well-known online ones (LRU, GDSF, and LFUDA). The results indicate that SMFD and SMFD* were near-optimal relative to prior policies across byte-hit, delay-saving, and hit performance overall. A compelling finding is that the near-optimal policies achieve an optimal policy’s performance upper bounds.
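For reference, the classical offline optimum for uniform objects (Belady's MIN, obtained from Mattson's OPT when all objects have equal size and cost) evicts the cached object whose next use lies farthest in the future. A sketch under that uniform-size simplification, not the paper's nonuniform-object policies; the trace is invented:

```python
def opt_hits(trace, capacity):
    # Offline OPT for uniform objects: on a miss with a full cache,
    # evict the object whose next use is farthest in the future
    # (never-reused objects are the first to go).
    cache, hits = set(), 0
    for i, obj in enumerate(trace):
        if obj in cache:
            hits += 1
            continue
        if len(cache) >= capacity:
            def next_use(o):
                # Index of o's next occurrence after i (inf if none)
                for j in range(i + 1, len(trace)):
                    if trace[j] == o:
                        return j
                return float("inf")
            cache.remove(max(cache, key=next_use))
        cache.add(obj)
    return hits

trace = ["a", "b", "c", "a", "b", "d", "a", "b"]
```

Because it sees the whole trace in advance, this policy is an upper bound that online schemes like LRU, GDSF, and LFUDA can only approach.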

Thepparit Banditwattanawong
Organic Farming Policy Effects in Northern of Thailand: Spatial Lag Analysis

Spatial exploratory analysis and a spatial lag model are applied to analyze the effects of organic farming policy in northern Thailand. The empirical results show that organic farming in one district positively affects organic farming in neighbouring districts. Moreover, the spatial analysis indicates that the relationship between farmers is an important factor affecting the growth of organic farming. Therefore, to increase organic farming, policy-makers should emphasize implementing appropriate organic farming policy in each area and creating connections between farmers and organic communities.

Ke Nunthasen, Waraporn Nunthasen
Approaching Mobile Constraint-Based Recommendation to Car Parking System

Although car parking systems have emerged over the last decade, helping car owners find suitable parking lots remains a continuing need, as can be seen from the ongoing efforts of many parking areas to develop infrastructure and adopt technology. In the era of the Internet of Things (IoT), mobile applications allow individuals to access such systems easily, and the systems themselves adopt techniques to assist users as much as possible. One popular technique, not limited to e-commerce systems, is recommendation. This paper proposes a constraint-based recommendation technique that efficiently supports car owners in finding parking lots via a mobile application. An experiment was set up to test the performance of the recommendation and found it to be acceptable.

Khin Mar Win, Benjawan Srisura

Software Engineering

Frontmatter
A Comparative Study on Scrum Simulations

Learning through simulation is an efficient method for illustrating the essentials of Scrum software development to those who are interested in further understanding modern software engineering techniques. Several simulation models have been proposed in attempts to match needs and simulation environments. This paper compares the efficiency of two variations of Scrum simulation: the original “Scrum simulation with LEGO bricks” and the economic alternative “Plasticine Scrum”. Similarities, differences, advantages and disadvantages of both models are discussed in the findings.

Sakgasit Ramingwong, Lachana Ramingwong, Kenneth Cosh, Narissara Eiamkanitchat
Transformation of State Machines for a Microservice-Based Event-Driven Architecture: A Proof-of-Concept

The implementation of a state machine for a system or a subsystem in the form of an event-driven architecture (EDA) in a microservice environment requires model transformations. This paper presents a model-to-model transformation for this case based on UML state machines, UML class diagrams for the microservice architecture, a mathematical specification of the transformations, and an implementation of the transformation with QVTo. With the choreographic, event-driven approach, a run-time state or workflow engine as a single point of failure can be avoided while preserving the ability to generate a state machine as an additional system component that offers the advantages of a state or workflow engine, such as monitoring and visualization.

Roland Petrasch
Knowledge Discovery from Thai Research Articles by Solr-Based Faceted Search

Search engines play an important role in information retrieval, being the preferred tool for users to locate and manage their desired information. The volume of online data has increased dramatically, and this impressive growth creates the need for efficient systems to deal with storage and retrieval. Keyword search is the most popular search paradigm: it lets the user search an entire repository with a few keywords. For a research article collection, however, keyword search alone may not be enough for researchers to explore the academic documents related to their interests. Knowledge discovery tools have recently received much attention as a way to compensate for this weakness of keyword search over academic collections. This paper presents the design and implementation of a practical knowledge discovery tool in the form of faceted search. The study focuses on Thai research articles for use by Thai scholars. The proposed faceted search system was built on the Apache Solr search platform. The methodology of data preparation, knowledge extraction and implementation is discussed in the paper.
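In Solr, faceted search is typically issued by adding facet parameters to an ordinary query against the `select` handler. The sketch below merely constructs such a request URL; the collection name (`articles`) and field name (`author_s`) are hypothetical, while `facet`, `facet.field`, and `facet.mincount` are standard Solr parameters.

```python
from urllib.parse import urlencode

params = {
    "q": "machine learning",    # keyword query
    "facet": "true",            # enable faceting
    "facet.field": "author_s",  # count matching documents per author
    "facet.mincount": 1,        # hide empty facet buckets
    "rows": 10,
}
url = "http://localhost:8983/solr/articles/select?" + urlencode(params)
print(url)
```

The response then contains, alongside the usual result list, per-author document counts that the user interface can render as clickable facets for drilling down into the collection.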

Siripinyo Chantamunee, Chun Che Fung, Kok Wai Wong, Chaiyun Dumkeaw
Backmatter
Metadata
Title
Recent Advances in Information and Communication Technology 2018
Editors
Prof. Herwig Unger
Prof. Sunantha Sodsee
Prof. Phayung Meesad
Copyright Year
2019
Electronic ISBN
978-3-319-93692-5
Print ISBN
978-3-319-93691-8
DOI
https://doi.org/10.1007/978-3-319-93692-5
