
2018 | Book

Information Technology - New Generations

14th International Conference on Information Technology


About this book

This volume presents a collection of peer-reviewed scientific articles from the 14th International Conference on Information Technology – New Generations, hosted by the University of Nevada, Las Vegas and held on April 10–12 at the Tuscany Suites Hotel in Las Vegas. The chapters address critical areas of information technology, including web technology, communications, computing architectures, software engineering, security, and data mining.

Table of Contents

Frontmatter

Networking and Wireless Communications

Frontmatter
1. Performance Enhancement of OMP Algorithm for Compressed Sensing Based Sparse Channel Estimation in OFDM Systems

The long duration of the channel impulse response, together with the limited number of actual propagation paths, makes the discrete equivalent channel of orthogonal frequency division multiplexing (OFDM) vehicular wireless communication systems sparse. Implementing compressed sensing (CS) algorithms therefore enables channel estimation with fewer pilot subcarriers than conventional channel estimation. In this paper, new methods to enhance the performance of the orthogonal matching pursuit (OMP) algorithm for CS channel estimation are proposed. In particular, in a new algorithm dubbed linear minimum mean square error OMP (LMMSE-OMP), OMP is run twice: first using the noisy received pilot data as the input, and then using modified received pilot data processed by the outcome of the first estimator. Simulation results show that LMMSE-OMP improves channel estimation performance using the same number of pilot subcarriers. The added computational complexity is studied, and several methods are suggested to keep it minimal while still achieving the performance gain of LMMSE-OMP, including using the compressive sampling matching pursuit (CoSaMP) CS algorithm for the second round and changing the way the residue is calculated within the algorithm.

Vahid Vahidi, Ebrahim Saberinia
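
As context for the abstract above, the sketch below shows a basic orthogonal matching pursuit recovery step for a sparse channel observed through pilot subcarriers. It is only an illustrative sketch with made-up dimensions and a random pilot matrix, not the authors' LMMSE-OMP implementation (which runs OMP twice with refined inputs).

```python
import numpy as np

def omp(A, y, sparsity):
    """Basic orthogonal matching pursuit: recover a sparse channel vector
    from pilot observations y = A @ h + noise (illustrative sketch only)."""
    m, n = A.shape
    residual = y.copy()
    support = []
    h_hat = np.zeros(n, dtype=complex)
    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        correlations = np.abs(A.conj().T @ residual)
        correlations[support] = 0.0            # do not reselect chosen columns
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the chosen support, then update the residual.
        A_s = A[:, support]
        coeffs, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        residual = y - A_s @ coeffs
    h_hat[support] = coeffs
    return h_hat

# Toy example: a 6-tap sparse channel observed through 32 random pilot measurements.
rng = np.random.default_rng(0)
n_taps, n_pilots, k = 128, 32, 6
h = np.zeros(n_taps, dtype=complex)
h[rng.choice(n_taps, k, replace=False)] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
A = (rng.standard_normal((n_pilots, n_taps)) + 1j * rng.standard_normal((n_pilots, n_taps))) / np.sqrt(n_pilots)
y = A @ h + 0.01 * (rng.standard_normal(n_pilots) + 1j * rng.standard_normal(n_pilots))
print(np.linalg.norm(omp(A, y, k) - h) / np.linalg.norm(h))   # relative estimation error
```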
2. CARduino: A Semi-automated Vehicle Prototype to Stimulate Cognitive Development in a Learning-Teaching Methodology

This paper presents the development of a semi-automated vehicle prototype built with Arduino and sensors, controlled by software developed for Android, that can simulate the execution of manual and semi-automatic paths according to the user's needs. Both the physical and the logical development of the proposed vehicle are presented, together with a set of experiments that demonstrate the feasibility of applying it to different situations, emphasizing cognitive development in a learning-teaching methodology for children, youth, and adults. Finally, we carried out a market cost analysis, based on available e-commerce prices, to guide the physical development of the proposed prototype.

Everton Rafael da Silva, Breno Lisi Romano
3. ICI Mitigation for High-Speed OFDM Communications in High-Mobility Vehicular Channels

The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system transmitting high-bandwidth data from a vehicle to a base station can suffer from Inter-Carrier Interference (ICI) created by high Doppler shifts. In current communication systems, high Doppler shifts arise from high vehicle speeds, as with fixed-wing unmanned aerial vehicles (UAVs) and high-speed trains (HSTs). In next-generation wireless systems with high carrier frequencies, such as 5G cellular systems operating at center frequencies between 27.5 and 71 GHz, even a vehicle moving at moderate speed can cause Doppler shifts of several kilohertz. To cancel the ICI, the time-variant channel matrix must be estimated in the frequency domain. In this paper, a new channel estimation scheme suitable for high-Doppler scenarios is presented. To estimate the channel in the frequency domain, a training sequence is transmitted in the time domain, and both channel amplitudes and Doppler shifts are estimated in the time domain. The complete frequency-domain channel matrix is then constructed from the estimated parameters and used for ICI mitigation. In contrast to conventional methods that, to reduce complexity, estimate only the diagonal elements of the frequency-domain channel matrix or another partial section of the matrix, this new method estimates the complete matrix. Simulation results show a significant performance gain for the complete channel estimation compared to conventional least-squares and minimum mean square error estimators of the diagonal elements in high-Doppler scenarios.

Vahid Vahidi, Ebrahim Saberinia
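
To make the "complete frequency-domain channel matrix" idea from the abstract above concrete, here is a simplified numpy sketch: given hypothetical per-path gains, delays, and Doppler shifts (invented values, not the paper's estimator), it builds the time-variant channel matrix over one OFDM symbol and transforms it to the frequency domain, where the off-diagonal entries represent the ICI to be mitigated.

```python
import numpy as np

N = 64                    # subcarriers
fs = 10e6                 # sample rate in Hz (illustrative value)
# Hypothetical estimated parameters per path: (delay in samples, complex gain, Doppler in Hz)
paths = [(0, 0.8 + 0.1j, 300.0), (3, 0.4 - 0.2j, -1200.0), (7, 0.2 + 0.3j, 2500.0)]

# Build the time-variant (circular) channel matrix over one OFDM symbol after CP removal.
H_time = np.zeros((N, N), dtype=complex)
n = np.arange(N)
for delay, gain, doppler in paths:
    time_variation = gain * np.exp(2j * np.pi * doppler * n / fs)   # gain evolves during the symbol
    for i in range(N):
        H_time[i, (i - delay) % N] += time_variation[i]

# Unitary DFT matrix; the complete frequency-domain channel matrix is F @ H_time @ F^H.
F = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)
H_freq = F @ H_time @ F.conj().T

# Off-diagonal energy of H_freq is the ICI that knowing the full matrix allows us to cancel.
ici_power = np.linalg.norm(H_freq - np.diag(np.diag(H_freq))) ** 2
print(f"ICI (off-diagonal) energy: {ici_power:.4f}")
```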
4. Mobile Payment Protocol 3D (MPP 3D) by Using Cloud Messaging

The popularity of the mobile platform makes it a natural candidate for electronic payment. However, there are challenges in this field, such as privacy protection, security, the limitations of mobile networks, and the limited capabilities of mobile devices. Traditional e-commerce payment protocols were designed to keep track of traditional flows of data; they are vulnerable to attacks and were not designed for the mobile platform. Moreover, 3D Secure, the extra security layer of modern payment methods (mainly intended to prevent card-not-present fraud), is not well suited to the mobile platform because of issues such as the difficulty of viewing an authentication pop-up window on a mobile device. In this paper, we propose a new private mobile payment protocol based on a client-centric model that uses symmetric key operations. Our protocol reduces the computational cost of the Diffie-Hellman key agreement protocol by using the algebra of logarithms instead of the algebra of exponents, achieves proper privacy protection for the payer by involving mobile network operators and generating temporary identities, avoids repudiation attacks by using digital signatures, avoids replay attacks by using random time-stamp-generated numbers, and provides a better and safer customer experience by using cloud messaging instead of text messaging and pop-up windows in its extra layer of security (3-domain authentication).

Mohammad Vahidalizadehdizaj, Avery Leider
5. Event-Based Anomalies in Big Data

Data streams generated by social networks, geophysical sources, or network flows are high-speed, continuous, multi-dimensional, and massive. Analytics require insight into the behavior of the data stream. Governments and large businesses want to catch exceptions in order to reveal anomalies and take immediate action. To catch exceptions, analysts need to identify patterns, trends, and exceptions in a single view of the data stream and catch anomalies before the system collapses. In this paper, we present a system that detects variations in the area of interest of a data stream. The research covers classification of the data stream, detection of event types, commonly used detection methods, and interpretation of the detected events.

Yenumula B. Reddy
6. ACO-Discreet: An Efficient Node Deployment Approach in Wireless Sensor Networks

Wireless sensor networks (WSNs) rely on effective deployment of sensing nodes. Efficient sensor deployment with ensured connectivity is a major challenge in WSNs. Several deployment approaches have been proposed in the literature to address the connectivity and efficiency of sensor networks. However, most of these works either lack efficiency or ignore connectivity issues. In this paper, we propose an efficient, connectivity-based algorithm by modifying Ant Colony Optimization (ACO) (Liu and He, J Netw Comput Appl 39:310–318, 2014). Traditional ACO algorithms ensure coverage at high cost and with repetitive sensing, which results in resource wastage. Our proposed algorithm reduces the sensing cost with efficient deployment and enhanced connectivity. Simulation results indicate the ability of the proposed framework to significantly reduce the coverage cost as well as achieve a longer lifetime for WSNs.

Tehreem Qasim, Qurrat ul Ain Minhas, Alam Mujahid, Naeem Ali Bhatti, Mubashar Mushtaq, Khalid Saleem, Hasan Mahmood, M. Shujah Islam Sameem
7. Golden Linear Group Key Agreement Protocol

Security in group communication has been a significant field of research for decades. Because of emerging technologies such as cloud computing, mobile computing, ad-hoc networks, and wireless sensor networks, researchers are considering high group dynamics in communication security. A group key agreement protocol provides security for group communication: it establishes a shared cryptographic key between groups of users over public networks, enabling more than two users to agree on a shared secret key for their communications. In this paper we introduce a new key agreement protocol (GKA) and a new golden linear group key agreement protocol (GLGKA). These protocols are well suited to emerging dynamic networks. In GLGKA, each member of the group can participate in the key generation and management process. Our goal is to provide a more efficient key generation approach for group members. This protocol can be used in cloud-based data storage sharing, social network services, smartphone applications, interactive chatting, video conferencing, and more.

Mohammad Vahidalizadehdizaj, Avery Leider
8. Implementing an In-Home Sensor Agent in Conjunction with an Elderly Monitoring Network

In this paper, we present the design and implementation of the in-home sensor agent MaMoRu-Kun as an Internet of Things (IoT) smart device, developed within the “Research and development of the regional/solitary elderly life support system using multi-fusion sensors” project. At Akita Prefectural University, bed and pillow sensors and a corresponding monitoring system have been developed to watch over elderly individuals at bedtime, in particular those who live alone. As a sensor agent, MaMoRu-Kun connects to the in-home wireless network of the target individual's accommodation and collects trigger information from various switches, motion detection sensors, and a remote controller. This smart device can also send its entire data set, along with sensor status, to a collection and monitoring server connected via a long-term evolution (LTE) router. We implemented this agent using an Arduino and a Bluetooth-connected Android terminal.

Katsumi Wasaki, Masaaki Niimura, Nobuhiro Shimoi
9. How Risk Tolerance Constrains Perceived Risk on Smartphone Users’ Risk Behavior

This study focused on smartphone users' intention to install anti-virus software on their devices. In addition to the three tested factors (risk tolerance, perceived risk, and risk awareness), the models also include several demographic and behavioral variables as control factors, including length of smartphone use, average online time per day, average online time using a smartphone per day, gender, age, education level, and monthly expenditure. The results showed that lower risk tolerance, higher perceived risk, and higher risk awareness lead to higher intention to install. The constraining effect of risk tolerance on the relationship between perceived risk and intention to install was also found to be significant: when smartphone users have risk tolerance above the threshold, their intention to install is not affected by perceived risk.

Shwu-Min Horng, Chia-Ling Chao
10. When Asteroids Attack the Moon: Design and Implementation of an STK-Based Satellite Communication Simulation for the NASA-Led Simulation Exploration Experience

The Simulation Exploration Experience (SEE) is an annual, inter-university, distributed simulation challenge led by NASA. A primary objective is to give college students a platform to work in highly dispersed teams to design, develop, test, and execute a simulated lunar mission using the High Level Architecture (HLA). During SEE 2016, 19 federates developed by student teams from three continents successfully joined the HLA federation and collaborated to accomplish a lunar mission. Midwestern State University participated in SEE for the first time and developed a communication satellite federate that broadcasts alerts to physical entities on the lunar surface about an incoming asteroid. This paper describes the design of the communication federate and the federation object model, along with lessons learned and recommendations for future federate development.

Bingyang Wei, Amy Knowles, Chris Silva, Christine Mounce, Anthony Enem
11. Wireless Body Sensor Network for Monitoring and Evaluating Physical Activity

Since physical inactivity is one of the four main risk factors for the incidence of non-communicable diseases, the World Health Organization has encouraged actions to promote regular physical activity. The Brazilian Ministry of Health established a physical activity program in which people perform physical activities under the supervision of health professionals. In order to monitor individuals in real time during their physical activity, we developed a ubiquitous computing environment. This environment is composed of three modules that automatically collect physiological data and provide indicators to support public policies for promoting physical activity. This paper presents this environment, focusing on the Wireless Body Sensor Network module and its simulation, which was performed using the OMNeT++ 5 tool. The simulation results showed packet loss due to the simultaneous delivery of packets to the coordinator node, which caused a network bottleneck. To deal with this problem, we designed an application-layer communication protocol that allows the host nodes to send packets in turns, thereby avoiding the packet loss.

Leonardo Schick, Wanderley Lopes de Souza, Antonio Francisco do Prado
12. Techniques for Secure Data Transfer

The Advanced Encryption Standard, better known as the AES algorithm, is a symmetric cryptographic technique (it uses the same key to encrypt and decrypt) used in most of today's classified and unclassified data transfers. The AES algorithm gives data transfers layers of security through its mathematical complexity. Alongside this encryption technique, the act of concealing data within other objects is becoming an essential component of secure data transfer. This method of hiding secret messages within a file is known as steganography. Even though these two elements of secure data transfer differ, they share the same objective: protecting the integrity of the data. This paper explains steganography and the AES algorithm and how they can be used together to enhance data security. The experiments demonstrated in this paper use the AES-256 algorithm.

Jeffery Gardner Jr.
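
A minimal sketch of the "encrypt, then hide" combination described in the abstract above, assuming the third-party `cryptography` package for AES-256-GCM and numpy for a stand-in cover image. The cover here is random data and the whole example is only an illustration of the idea, not the paper's experimental setup.

```python
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def embed_lsb(cover, payload):
    """Hide payload bytes (with a 4-byte length prefix) in the least significant
    bits of a flat uint8 cover array (classic LSB steganography)."""
    data = len(payload).to_bytes(4, "big") + payload
    bits = np.unpackbits(np.frombuffer(data, dtype=np.uint8))
    stego = cover.copy()
    stego[: bits.size] = (stego[: bits.size] & 0xFE) | bits
    return stego

def extract_lsb(stego):
    """Recover the payload by reading the length prefix, then the payload bits."""
    length = int.from_bytes(np.packbits(stego[:32] & 1).tobytes(), "big")
    bits = stego[32 : 32 + 8 * length] & 1
    return np.packbits(bits).tobytes()

key = AESGCM.generate_key(bit_length=256)                 # AES-256 key
nonce = os.urandom(12)
secret = b"meet at dawn"
ciphertext = AESGCM(key).encrypt(nonce, secret, None)     # encrypt first ...
cover = np.random.randint(0, 256, size=64 * 64, dtype=np.uint8)  # stand-in for image pixels
stego = embed_lsb(cover, ciphertext)                       # ... then hide in the cover
recovered = AESGCM(key).decrypt(nonce, extract_lsb(stego), None)
assert recovered == secret
```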
13. Rate Adjustment Mechanism for Controlling Incast Congestion in Data Center Networks

Data Center Transmission Control Protocol (DCTCP) has gained popularity in both academia and industry due to its high throughput and low latency, and it is widely deployed in data centers today. In recent research on the performance of DCTCP, the authors found that the sender's congestion window often shrinks to one segment, which results in timeouts. To address this problem, we modified the calculation of the sender's congestion window size to improve TCP throughput in data center networks. The results of a series of simulations in a typical data center network topology using QualNet, a widely used network simulator, demonstrate that the proposed solution can significantly reduce timeouts and improves throughput by more than 10% compared to DCTCP under various network conditions.

Prasanthi Sreekumari
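
For context on the window-collapse behavior the abstract describes, the toy sketch below implements the standard DCTCP sender rules (EWMA of the ECN-marked fraction, proportional window reduction). It ignores additive increase and is not the paper's modified window calculation; the traffic pattern is invented to mimic a heavily congested incast.

```python
# Standard DCTCP sender-side rules; a toy illustration of why the congestion
# window can collapse toward one segment when most packets are ECN-marked.
def update_alpha(alpha, marked, total, g=1.0 / 16):
    """EWMA of the fraction of ECN-marked packets observed in the last window."""
    F = marked / total if total else 0.0
    return (1 - g) * alpha + g * F

def update_cwnd(cwnd, alpha, min_cwnd=1):
    """DCTCP shrinks the window in proportion to the marked fraction: cwnd *= (1 - alpha / 2)."""
    return max(min_cwnd, cwnd * (1 - alpha / 2))

cwnd, alpha = 20.0, 0.0
for rtt in range(30):
    # Assume a heavily congested incast scenario: nearly every packet gets marked.
    alpha = update_alpha(alpha, marked=int(cwnd), total=int(cwnd))
    cwnd = update_cwnd(cwnd, alpha)
print(round(cwnd, 2), round(alpha, 2))   # cwnd clamps near one segment as alpha approaches 1
```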
14. Performance Analysis of Transport Protocols for Multimedia Traffic Over Mobile Wi-Max Network Under Nakagami Fading

Due to the increased demand for multimedia streaming over the Internet, research is focused on deriving new protocols with added features that can be best adjusted to multimedia traffic over the Internet. The operation of multimedia networks requires fast, high-throughput communication systems, because prompt delivery of information is sometimes critical. The behavior of different transport protocols can affect the quality of service. Hardware technologies used at the physical layer and the TCP/IP suite have made great progress, but other layers have not progressed at the same pace. This paper discusses the challenges posed by multimedia traffic to other layers and their effect on transport layer services. Numerous papers have examined transport protocols for multimedia networks in particular, preferred scenarios; however, none has discussed the effect of channel fading on these protocols. This paper evaluates six existing protocols and their performance under a Nakagami-m channel with respect to their suitability for multimedia applications. It concludes that extra effort on the transport protocol is essential to achieve better performance in multimedia networks and fulfill the various requirements of evolving multimedia applications, specifically under fading channels.

Fazlullah Khan, Faisal Rahman, Shahzad Khan, Syed Asif Kamal

Cybersecurity

Frontmatter
15. CyberSecurity Education and Training in a Corporate Environment in South Africa Using Gamified Treasure Hunts

Breaches in computer security can have economic, political, and social costs. In a busy corporate IT department, unfortunately, there is often not enough time to train programmers and other technical people on the best practices of computer security and encryption. This paper looks at the use of gamified treasure hunts to encourage participants to follow digital clues in order to find a physical treasure box of chocolates. By following the digital clues, participants in the treasure hunts learn about important security tools and topics such as cryptography, GnuPG, SSL, HTTPS, Jasypt, Bouncy Castle, Wireshark, and common attack techniques.

Laurie Butgereit
16. Router Security Penetration Testing in a Virtual Environment

A router is the first line of defense in a typical network, and improper router configuration may lead to various security vulnerabilities. Virtualization provides a safe, self-contained environment for network simulation and security testing. This paper uses a virtual penetration testing environment to simulate and analyze the two phases of a typical Advanced Persistent Threat (APT): (1) incursion by way of reconnaissance (passive information gathering), and (2) discovery through initial compromise to exploit vulnerabilities found in routers linking a corporate network with the untrusted zone of the inherently insecure World Wide Web.

Christian Scully, Ping Wang
17. Advanced Machine Language Approach to Detect DDoS Attack Using DBSCAN Clustering Technology with Entropy

Service availability is the primary security issue in cloud computing environments, and existing solutions that address availability-related issues in these environments are insufficient. In order to ensure high availability of the offered services, data center resources must be protected from DDoS attack threats. DDoS is the most serious security threat challenging the availability of data center resources to their intended clients. Existing solutions that monitor incoming traffic and detect DDoS attacks become ineffective when the attacker's traffic intensity is high. It is therefore necessary to devise schemes that detect DDoS attacks even when the traffic intensity is high; such schemes must deactivate DDoS attackers while serving legitimate users with the available resources. This research paper addresses the need to prevent DDoS attacks by defining and demonstrating a hybrid detection model, introducing an advanced and efficient approach to recognize flooding attacks and discriminate them from flash crowds (legitimate access). Moreover, this paper introduces and discusses the application of multivariate correlation among the selected and ranked features to significantly reduce the false alarm rate, one of the major issues with currently available solutions.

Anteneh Girma, Mosses Garuba, Rajini Goel
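
As a rough illustration of the kind of pipeline the abstract above describes (not the authors' model), the sketch below computes per-window entropy features from source IPs and destination ports and clusters the windows with scikit-learn's DBSCAN; windows labeled as noise (-1) would be flagged for inspection. The traffic windows are invented.

```python
import numpy as np
from collections import Counter
from sklearn.cluster import DBSCAN

def entropy(values):
    """Shannon entropy of a categorical sequence (e.g., source IPs in a time window)."""
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def window_features(window):
    """Feature vector per traffic window: packet count, source-IP entropy, dest-port entropy."""
    src_ips = [pkt["src"] for pkt in window]
    dst_ports = [pkt["dport"] for pkt in window]
    return [len(window), entropy(src_ips), entropy(dst_ports)]

# Hypothetical traffic windows: normal ones have few sources; the last mimics a flood
# from many spoofed sources (high volume and high source-IP entropy).
normal = [[{"src": f"10.0.0.{i % 5}", "dport": 80} for i in range(100)] for _ in range(8)]
flood = [[{"src": f"172.16.{i // 256}.{i % 256}", "dport": 80} for i in range(5000)]]
X = np.array([window_features(w) for w in normal + flood])

labels = DBSCAN(eps=3.0, min_samples=3).fit_predict(X)
print(labels)   # windows labeled -1 are outliers, i.e., candidate DDoS intervals
```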
18. A Novel Regular Format for X.509 Digital Certificates

Digital certificates are one of the key components for ensuring secure network communications. The complexity of the certificate standard, ITU-T X.509, has led to a number of breaches of TLS protocol security due to certificate misinterpretation by TLS libraries. We argue that the root cause of such issues is the complexity of the certificate structure, which can be gauged within the framework of formal language theory: the language describing digital certificates is context sensitive. This complexity has led to handcrafted X.509 parsers, resulting in implementations that are not guaranteed to perform correct language recognition. We highlight the issues in X.509 and propose a new format for digital certificates, designed to be parsed effectively and efficiently while retaining the same semantic expressiveness. The certificate format can be deployed gradually, is fully specified as a regular language, and is given as a formal grammar from which a provably correct parser can be automatically derived. We validate the effectiveness of our proposal, and the linear running time the approach provides, by generating an instance of the parser with a production-grade lexer/parser generation framework.

Alessandro Barenghi, Nicholas Mainardi, Gerardo Pelosi
19. A Software-Defined Networking (SDN) Approach to Mitigating DDoS Attacks

Distributed Denial of Service (DDoS) attacks are a common threat to network security. Traditional mitigation approaches have significant limitations in addressing DDoS attacks. This paper reviews major traditional approaches to DDoS, identifies and discusses their limitations, and proposes a Software-Defined Networking (SDN) model as a more flexible, efficient, effective, and automated mitigation solution. This study focuses on Internet Service Provider (ISP) networks and uses the SDN security implementation at Verizon networks as a case study.

Hubert D’Cruze, Ping Wang, Raed Omar Sbeit, Andrew Ray
20. Integrated Methodology for Information Security Risk Assessment

Information security risk assessment is an important component of information security management. A sound method of risk assessment is critical to accurate evaluation of identified risks and costs associated with information assets. This paper reviews major qualitative and quantitative approaches to assessing information security risks and discusses their strengths and limitations. This paper argues for an optimal method that integrates the strengths of both quantitative calculation and qualitative evaluation for information security risk assessment.

Ping Wang, Melva Ratchford
21. Techniques for Detecting and Preventing Denial of Service Attacks (a Systematic Review Approach)

This paper analyzes denial of service (DoS) attacks and countermeasures based on a systematic review of papers published between 2000 and 2016. The review is based on three searches: the first was conducted using suitable keywords, the second used the references of the selected papers, and the third considered the most cited English-language articles. We discuss 802.11 along with one of the well-known DoS attacks on access points at the physical level. Experts suggest using 802.11w, a “cryptographic client puzzle,” and “delaying the effect of request” to provide better protection at this layer. The paper discusses four main network defense systems against network-based attacks (source-end, core-end, victim-end, and distributed techniques), with a focus on two innovative methods, the D-WARD and gossip models. This study also discusses chi-square tests and intrusion detection systems (IDSs), two effective models for detecting DoS and DDoS attacks.

Hossein Zare, Mojgan Azadi, Peter Olsen
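
To make the chi-square idea mentioned in the abstract above concrete, here is a small hedged sketch (hypothetical counts, scipy's `chisquare`): the observed distribution of requests per source block is compared against a baseline profile, and a large deviation is flagged as a possible DoS.

```python
from scipy.stats import chisquare

# Baseline share of traffic expected from each source block (hypothetical profile).
expected_share = [0.40, 0.30, 0.20, 0.10]
total = 10_000

# Observed request counts in the current interval; the last block is suddenly dominant.
observed = [1200, 900, 700, 7200]
expected = [share * total for share in expected_share]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
if p_value < 0.01:
    print(f"chi-square={stat:.1f}, p={p_value:.2e}: traffic deviates from profile, possible DoS")
```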
22. Open Source Intelligence: Performing Data Mining and Link Analysis to Track Terrorist Activities

The increasing rate of terrorism in Africa is a growing concern globally, and the realization of such dreadful circumstances demonstrates the need to disclose who is behind these terrible acts. Terrorists and extremist organizations are known to use social media and other Internet-enabled technologies to spread their ideologies. Analyzing this data with Open Source Intelligence (OSINT) tools could provide valuable information regarding terrorist activity. This study reviews the applications and methods that could be used to expose extremist Internet behavior.

Maurice Dawson, Max Lieble, Adewale Adeboje
23. A Description of External Penetration Testing in Networks

This paper synthesizes multiple sources in order to describe external penetration testing. In a world of ever more sophisticated network attacks, this type of knowledge is useful for both security professionals and laypeople. The paper describes some of the history, uses, and processes of external penetration testing and communicates the philosophy behind it. It concludes by summarizing some potential weaknesses of penetration testing while also communicating its importance in establishing the security of any system.

Timothy Brown
24. SCORS: Smart Cloud Optimal Resource Selection

Adoption of cloud storage has recently increased rapidly in many organizations because it provides many benefits and advantages for users, including ease of use, unlimited storage capacity, scalability, and cost effectiveness. This has raised concerns regarding cloud issues including cost, performance, and security. Many schemes have been proposed to address the issues of cloud storage, each considering these issues from a different perspective without providing an optimal solution that fully satisfies user requirements. In real-world applications, a client's objectives for moving to cloud storage range from a single objective to several conflicting objectives, so there is a need to find a trade-off setup for a multiple cloud storage scheme. This paper proposes the Smart Cloud Optimal Resource Selection (SCORS) scheme, which applies the non-dominated sorting genetic algorithm (NSGA-II) to resolve the user's conflicting requirements. Our solution facilitates setting up multiple cloud storages in an optimized way based on the specified objectives. Since each user has different objectives, all combinations of cloud storage client needs are collected and transformed into multi-objective optimization problems with constraints and bounds. Experimental results show that the SCORS scheme determines the optimal solution from the set of feasible solutions in an efficient way.

Fahad Alsolami, Yaqoob Sayid, Nawal Zaied, Isaac Sayid, Iman Ismail
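
The core step of NSGA-II that a scheme like SCORS relies on is non-dominated sorting of candidate configurations into Pareto fronts. Below is a small self-contained sketch with invented cloud-storage objectives (all treated as costs to minimize); it is not the paper's implementation.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly better in at least one
    (all objectives are costs to be minimized, e.g. price, latency, risk)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Return successive Pareto fronts (lists of indices), as in NSGA-II's sorting phase."""
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

# Hypothetical cloud-storage setups scored on (monthly cost, read latency in ms, risk score).
candidates = [(12.0, 80, 0.3), (9.0, 120, 0.4), (15.0, 60, 0.2), (14.0, 95, 0.5), (9.5, 110, 0.35)]
print(non_dominated_fronts(candidates))   # front 0 holds the trade-off solutions offered to the user
```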
25. Towards the Memory Forensics of MS Word Documents

Memory forensics plays a vital role in digital forensics: it provides important information about a user's activities on a digital device. Various techniques can be used to analyze RAM and locate evidence in support of legal proceedings against digital perpetrators in a court of law. This paper investigates digital evidence related to MS Word documents. Our approach utilizes the XML representation used internally by MS Office. Different documents are investigated; a memory dump is created while each document is being viewed or edited and after the document is closed. The documents are decompressed and the resulting folders and XML files are analyzed. Various unique parts of these extracted files are successfully located in the subsequent RAM dumps. Results show that several portions of the MS Word document format and textual data can be located in RAM, and these portions can prove that a document is or was viewed or edited by the perpetrator.

Ziad A. Al-Sharif, Hasan Bagci, Toqa’ Abu Zaitoun, Aseel Asad
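
A hedged sketch of the general workflow the abstract above describes: a .docx file is a ZIP container of XML parts, so its internal XML fragments can be extracted and then searched for in a raw memory image. The file names are placeholders, and in practice the in-memory representation may differ from the on-disk fragments; this is only an outline of the idea.

```python
import zipfile

def docx_xml_parts(path):
    """Return the XML parts stored inside an MS Word .docx container (it is a ZIP archive)."""
    with zipfile.ZipFile(path) as z:
        return {name: z.read(name) for name in z.namelist() if name.endswith(".xml")}

def find_fragments_in_dump(dump_path, parts, probe_len=64):
    """Check whether characteristic slices of each XML part appear in a raw RAM dump."""
    with open(dump_path, "rb") as f:
        dump = f.read()
    hits = {}
    for name, data in parts.items():
        probe = data[:probe_len]                 # a leading fragment used as a search probe
        hits[name] = dump.find(probe) != -1
    return hits

# Placeholder paths; substitute a real document and memory image.
parts = docx_xml_parts("evidence.docx")
print(find_fragments_in_dump("memory.dmp", parts))
```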
26. Two Are Better than One: Software Optimizations for AES-GCM over Short Messages

This paper describes software optimizations for AES-GCM over short messages, applicable to modern processors that have dedicated instructions. By processing two (short) messages in parallel, we achieve better performance than by processing a single (short) message twice, back to back. Additional performance is gained if the calling application collects several messages, sorts them by length, and then feeds them (in pairs) to the two-message AES-GCM function. In experiments carried out on the latest Intel processor (microarchitecture codename Skylake), over a realistic distribution of message lengths, our optimization achieves up to a 1.95x speedup compared to OpenSSL.

Shay Gueron, Regev Shemy
27. BYOD Security Risks and Mitigations

Adoption of BYOD (Bring Your Own Device) has been growing fast in modern organizations, improving convenience and productivity. However, these benefits may be short-lived if companies are not aware of the potential legal implications and the increases in information security vulnerabilities and risks. This paper discusses the legal and privacy issues that organizations may encounter when adopting BYOD, including legal considerations regarding US privacy laws, commingled data, device ownership, spoliation of evidence, the variety of BYODs, cloud services, and mobile solutions. It also proposes recommendations to mitigate these types of risks. In addition, this paper presents the BYOD policy and management practices at Verizon Wireless as an organizational case study for analysis and recommendations on how to mitigate the security risks associated with BYOD adoption.

Melva Ratchford, Ping Wang, Raed Omar Sbeit
28. Evaluation of Cybersecurity Threats on Smart Metering System

Smart metering has emerged as the next generation of energy distribution, consumption, and monitoring systems via the convergence of power engineering with information and communication technology (ICT), otherwise known as smart grid systems. While this innovation is advancing future power generation, distribution, consumption monitoring, and information delivery, the success of the platform is positively correlated with the successful integration of the technologies upon which the system is built. Nonetheless, the rising trend of cybersecurity attacks on cyber infrastructure and its dependent systems, coupled with the system's inherent vulnerabilities, is a source of concern not only to vendors but also to consumers. These security concerns need to be addressed to increase consumer confidence and ensure the greatest adoption and success of smart metering. In this paper, we present a functional communication architecture of the smart metering system. We then present and discuss a taxonomy of common smart metering vulnerability exposures upon which sophisticated threats can capitalize. Finally, we introduce countermeasure techniques whose integration is considered pivotal for achieving protection against existing and future sophisticated attacks on smart metering systems.

Samuel Tweneboah-Koduah, Anthony K. Tsetse, Julius Azasoo, Barbara Endicott-Popovsky
29. A Layered Model for Understanding and Enforcing Data Privacy

In this paper, we propose a layered model for the understanding and enforcing of information privacy. The proposed model consists of three levels. At the lowest level, called the Read/Write Layer, privacy is defined as the resistance and resilience to Read or Write violations in the information or information source. At the middle level, the sharing layer, a logical privacy connection can be set up between a source and sink based on an embedded privacy agreement (EPA). At the highest layer, the trust layer, privacy is determined based on the history of sharing between directly connected network entities. We describe how the privacy metrics differ at each layer and how they can be combined to have a three-layer information privacy model. This model can be used to assess privacy in a single-hop network and to design a privacy system for sharing data.

Aftab Ahmad, Ravi Mukkamala
30. BYOD: A Security Policy Evaluation Model

The rapid increase of personal mobile devices (mainly smartphones and tablets) accessing corporate data has created a phenomenon commonly known as Bring Your Own Device (BYOD). Companies that allow the use of BYODs need to be aware of the risks of exposing their business to inadvertent data leakage or malicious intent posed by inside or outside threats. The adoption of BYOD policies mitigates these types of risks; however, many companies have weak policies, and the problem of corporate data exposure persists. This paper addresses this problem by proposing a BYOD policy evaluation method to help companies strengthen their BYOD policies. This initial research proposes a novel BYOD security policy evaluation model that aims to identify weaknesses in BYOD policies using mathematical comparisons. The results are measurable and provide specific recommendations for strengthening a BYOD policy. Further research is needed to demonstrate the viability and effectiveness of this model.

Melva M. Ratchford
31. A Novel Information Privacy Metric

With the ever-increasing need for sharing data across a wide spectrum of audiences, including commercial enterprises, governments, and research organizations, there is an equally growing concern about the privacy of such data. In this paper, we address a specific sharing subdomain where the data is made available only to a selected few and not for public access. Even under this constrained sharing, there are possibilities for privacy violations. The paper addresses the quantification of such privacy violations, taking into account the degree of trust between the sharing parties. We employ an Embedded Privacy Agreement-based system to evaluate the privacy violation.

Aftab Ahmad, Ravi Mukkamala
32. An Integrated Framework for Evaluating the Security Solutions to IP-Based IoT Applications

As Internet of Things (IoT) applications have recently taken center stage in technology development, security issues and concerns arise naturally and significantly because IoT devices connect to the anonymous and untrusted Internet. Although security protocols and technology for Internet applications have been studied for decades, the ubiquity and heterogeneity of IoT applications present unique challenges in handling security issues. In addition to developing new protocols or upgrading existing ones, some research has experimented with security approaches for IoT applications. Since the results of current research are based mainly on either the related protocols or the applicable approaches, the discussion is often limited to a particular environment or a specific situation. In this paper, based on a thorough study of existing research and published experimental results, an integrated framework is proposed for evaluating security solutions for IP-based IoT applications, with consideration of hardware constraints, operational constraints, and network scenarios. The results of the study show the potential to draw a balanced view in evaluating security solutions for IP-based IoT applications and to lay a stepping stone for further standardization of related IoT protocols and approaches to security issues.

Gayathri Natesan, Jigang Liu, Yanjun Zuo
33. Towards the Security of Big Data: Building a Scalable Hash Scheme for Big Graph

The big graph model is frequently used to represent large-scale datasets such as geographical and healthcare data. Deploying these datasets in a third-party public cloud is a common solution for data sharing and processing, and maintaining data security in a public cloud is crucial. Here we target the data integrity issue, which is mainly addressed through hash operations. Existing hash schemes for graph-structured data are either not suitable for all types of graphs or not computationally efficient for big graphs. In this paper, we propose a secure, scalable hash scheme that is applicable to big graphs and trees and whose computation is highly efficient. We use graph structure information to make our scheme unforgeable, and we tune the scheme so that the graph verification and update processes are very efficient. We prove that our hash scheme is cryptographically secure, and our experimental results show that it has scalable computation performance.

Yu Lu, Fei Hu, Xin Li
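
The paper's scheme itself is not reproduced here, but the general idea of structure-aware hashing for integrity can be illustrated with a simple Merkle-style recursive hash over a tree, where each node's digest depends on its label and on the digests of its children, so any tampering with a descendant changes the root digest. The example data is invented.

```python
import hashlib

def node_hash(label, child_digests):
    """Digest of a node = H(label || ordered child digests); changing any descendant,
    or the structure itself, changes every ancestor's digest."""
    h = hashlib.sha256()
    h.update(label.encode())
    for d in child_digests:
        h.update(d)
    return h.digest()

def tree_hash(tree):
    """tree = (label, [subtrees]); returns the root digest."""
    label, children = tree
    return node_hash(label, [tree_hash(c) for c in children])

records = ("hospital", [
    ("ward-A", [("patient-1", []), ("patient-2", [])]),
    ("ward-B", [("patient-3", [])]),
])
root = tree_hash(records)

# Tampering with a leaf changes the root digest, which is how integrity is verified.
tampered = ("hospital", [
    ("ward-A", [("patient-1", []), ("patient-2-modified", [])]),
    ("ward-B", [("patient-3", [])]),
])
print(root != tree_hash(tampered))   # True
```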
34. A Cluster Membership Protocol Based on Termination Detection Algorithm in Distributed Systems

In distributed systems, a cluster of computers must continue to cooperate in order to finish jobs. In such a system, a cluster membership protocol is an especially practical and important element, providing the processes in a cluster of computers with consistent common knowledge of the cluster's membership. Whenever a membership change occurs, processes should agree on which of them should complete an unfinished job or begin a new one. The problem of knowing a stable membership view is essentially the same as agreeing on a common predicate in a distributed system, as in the consensus problem. Based on the termination detection protocol, a traditional technique in asynchronous distributed systems, we present a new cluster membership protocol for distributed wired networks.

SungHoon Park, SuChang Yoo, BoKyoung Kim

Information Systems and Internet Technology

Frontmatter
35. An Adaptive Sensor Management System

This paper proposes an adaptive sensor management system consisting of five main components: the Sensor Web Interface, Request Dispatcher, Publish/Subscribe Management, Communication Converter, and Message Broker. The Request Dispatcher automatically allocates Sensor Web tasks to each module as services are requested. The Communication Converter lets the Message Broker handle Sensor Web operations, notify sensors, and modify sensor information through the Sensor Web Interface. Two sensor control methods are defined for publication and subscription. Experimental results from the designed prototype system show that it can effectively perform sensor management.

Chyi-Ren Dow, Yaun-Zone Wu, Bonnie Lu, Shiow-Fen Hwang, Fongray Frank Young
36. A Cloud Service for Graphical User Interfaces Generation and Electronic Health Record Storage

The development of archetype-based Health Information Systems (HIS) allows the creation of interoperable mechanisms for the Electronic Health Record (EHR) as well as improvements in application maintenance and upgrades. However, we identified the lack of an approach or tool to build dynamic archetype-based data schemas in heterogeneous databases. This article presents a cloud service able to build Graphical User Interfaces (GUIs) and data schemas for use in the healthcare sector. Using data attributes, terminologies, and constraints extracted from EHR archetypes, it dynamically generates GUIs and data schemas in heterogeneous databases. To persist EHR data, we used the concept of polyglot persistence: structured data is stored in a relational database, while non-structured data is stored in a NoSQL database. Finally, we validated the proposed service in a hospital located in northeastern Brazil and demonstrate how health professionals can build GUIs for the healthcare sector without depending on a software development team.

André Magno Costa de Araújo, Valéria Cesário Times, Marcus Urbano da Silva
37. A Petri Net Design and Verification Platform Based on The Scalable and Parallel Architecture: HiPS

This paper proposes an on-the-fly linear temporal logic model checker that uses state-space generation based on Petri net models. The hierarchical Petri net simulator (HiPS) tool, developed by our research group at Shinshu University, is a design and verification environment for Place/Transition nets and is capable of generating state-space and trace-process graphs. In combination with external tools, HiPS can perform exhaustive model checking of a state space; however, exhaustive model checking requires generating the complete state space. On-the-fly model checking is an approach to the state explosion problem that generates only a portion of the overall state space by parallelizing the search and generation processes. In this study, we propose a model checker for Petri net models that runs concurrently with state-space generation over an interprocess communication channel. By utilizing the concept of fluency, we implement automata-based model checking for Petri nets. This implementation achieves high efficiency for on-the-fly verification by decoupling the verification and state-space generation processes.

Yojiro Harie, Katsumi Wasaki
38. Optimally Compressed Digital Content Delivery Using Short Message Service

Researchers are devising new ways to deliver digital content robustly in situations where telecommunication signal strength is very low, especially during natural disasters. In this paper, we present research work targeting two dimensions: (a) we selected the IANA standard for digital content classification (20 types in 5 categories) and applied and compared different lossless compression schemes (LZW, Huffman coding, PPM, arithmetic coding, BWT, and LZMA) on these 20 data types; (b) we built a generic prototype application that encodes (for sending) and decodes (on receiving) the compressed digital content over SMS. Sending digital content via SMS over satellite communication is achieved by converting the digital content into text, applying lossless compression to the text, and transmitting the compressed text by SMS. The proposed method requires neither Internet service nor any additional hardware in the existing network architecture to transmit digital content. Results show that, overall, the PPM method offers the best compression ratio (0.63) among all schemes; PPM thus reduces SMS transmission by up to 43%, while LZW performs worst with 17.6%.

Muhammad Fahad Khan, Mubashar Mushtaq, Khalid Saleem
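
A small sketch of the kind of measurement the abstract above reports, using only codecs available in the Python standard library (zlib as an LZ-family stand-in, bz2 for BWT-based compression, and lzma; PPM and the paper's exact pipeline are not included): compress a text payload, compute the compression ratio after base64 text encoding, and estimate how many 140-byte SMS segments are needed.

```python
import base64
import bz2
import lzma
import math
import zlib

SMS_BYTES = 140   # payload of one 8-bit-encoded SMS segment

def sms_segments(n_bytes):
    return math.ceil(n_bytes / SMS_BYTES)

text = ("Flood warning: river levels rising near sector 7, evacuate low areas. " * 20).encode()

codecs = {"zlib (LZ77)": zlib.compress, "bz2 (BWT)": bz2.compress, "lzma (LZMA)": lzma.compress}
print(f"original: {len(text)} bytes, {sms_segments(len(text))} segments")
for name, compress in codecs.items():
    blob = base64.b64encode(compress(text))    # text-safe form for transmission over SMS
    ratio = len(blob) / len(text)
    print(f"{name}: ratio {ratio:.2f}, {sms_segments(len(blob))} segments")
```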
39. The Method of Data Analysis from Social Networks using Apache Hadoop

This article analyzes data from social networks, taking the social microblogging system Twitter as the data source. The MapReduce distributed computing model was used to implement the algorithm for finding user communities, with Apache Hadoop chosen as the platform for distributed computing. Program code was developed for retrieving tweets and for distributed processing, and an analysis of the interests of Twitter users was conducted.

Askar Boranbayev, Gabit Shuitenov, Seilkhan Boranbayev
40. Using Web Crawlers for Feature Extraction of Social Nets for Analysis

This paper presents a crawler-based feature extraction technique for social network analysis. The technique crawls a predefined actor and his associated activities in the social network space. From these activities, a set of features is extracted that can be used for a broad spectrum of social network analyses. The utility can act as middleware, providing a level of abstraction to researchers involved in social network analysis. The tool provides a formatted set of ready features with open APIs that can be easily integrated into any application.

Fozia Noor, Asadullah Shah, Waris Gill, Shoab Ahmad Khan
41. Detecting Change from Social Networks using Temporal Analysis of Email Data

Social network analysis is one of the most recent areas of research, used to analyze the behavior of a society or a person and even to detect malicious activities. Temporal information is very important when evaluating a social network, and temporal analysis is used in research to gain better insight; theories such as similarity proximity, transitive closure, and reciprocity are well-known studies in this regard. Social networks represent social relationships, and it is quite natural for these relations to change with the passage of time; a longitudinal method is required to observe such changes. This research explores suitable parameters or features that can reflect the relationships between individuals in a network, so that any major change in the values of these parameters can capture a change in the network. In this paper we present a framework for extracting parameters that can be used for temporal analysis of social networks. The proposed feature vector is based on the changes highlighted in a network at two consecutive time stamps, using the differences in betweenness centrality, clustering coefficient, and valued edges. This idea can further be used to detect specific changes happening in a network.

Kajal Nusratullah, Asadullah Shah, Muhammad Usman Akram, Shoab Ahmad Khan
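
A minimal sketch of the feature-vector idea from the abstract above using the networkx library: compute betweenness centrality, clustering coefficient, and degree at each snapshot and use their per-node differences as the change signal. The toy email graphs and the choice of degree as a stand-in for valued edges are assumptions of this sketch, not the paper's dataset or exact features.

```python
import networkx as nx

def node_features(G):
    """Per-node features at one time stamp: betweenness centrality, clustering coefficient, degree."""
    bc = nx.betweenness_centrality(G)
    cc = nx.clustering(G)
    return {n: (bc[n], cc[n], G.degree(n)) for n in G.nodes}

def change_vector(G_t0, G_t1):
    """Per-node differences between two consecutive snapshots; large values flag structural change."""
    f0, f1 = node_features(G_t0), node_features(G_t1)
    common = set(f0) & set(f1)
    return {n: tuple(b - a for a, b in zip(f0[n], f1[n])) for n in common}

# Toy email graphs: at t1, node "dave" suddenly bridges two previously separate groups.
G_t0 = nx.Graph([("alice", "bob"), ("bob", "carol"), ("alice", "carol"), ("dave", "erin")])
G_t1 = nx.Graph([("alice", "bob"), ("bob", "carol"), ("alice", "carol"),
                 ("dave", "erin"), ("dave", "alice"), ("dave", "frank"), ("frank", "erin")])
for node, delta in change_vector(G_t0, G_t1).items():
    print(node, delta)
```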
42. The Organisational Constraints of Blending E-Learning Tools in Education: Lecturers’ Perceptions

This study investigated and identified the organizational factors that contribute to the poor adoption of technology such as the Sakai Learning Management System in education. Qualitative data were collected through semi-structured interview questions guided by Giddens' structuration model. The participating lecturers were from one of the nineteen universities in Zimbabwe, a developing country where the uptake of Information and Communication Technologies is currently sluggish. The situation not only keeps students from enjoying the affordances their counterparts in developed nations enjoy; it has also led to the emergence of a second-order digital divide, a problem of concern to researchers, ICT policy makers, and the management of learning institutions, which are robbed of the anticipated returns on costly technological investments. The paper contributes to the limited literature on developing-country lecturers' perceptions of e-learning tools in teaching. The findings show that organizational factors play a major role in shaping lecturers' positive or negative perceptions of e-learning system tools in education. In addition to the documented individual and technological factors, policy, budget, training, decision making, implementation, and consultation techniques were found to inhibit the successful integration of e-learning tools into traditional teaching methods.

Sibusisiwe Dube, Elsje Scott
43. A Rule-Based Relational XML Access Control Model in the Presence of Authorization Conflicts

There is a considerable amount of sensitive XML data stored in relational databases. It is a challenge to enforce node-level fine-grained authorization policies for XML data stored in relational databases, which typically support table- and column-level access control. Moreover, it is common to have conflicting authorization policies over the hierarchical nested structure of XML data. A few XML access control models for relational XML databases have been proposed in the literature; however, to the best of our knowledge, none of them discusses handling authorization conflicts with conditions in the domain of relational XML databases. We therefore believe there is a need to define and incorporate effective fine-grained XML authorization models with conflict handling mechanisms in the presence of conditions into relational XML databases. We address this issue in this study.

Ali Alwehaibi, Mustafa Atay

Entertainment Technology and Education

Frontmatter
44. A Practical Approach to Analyze Logical Thinking Development with Computer Aid

Logical thinking is essential to one's development. It lays a foundation for acquiring the knowledge and skills used to solve many problems, not only in school but also in executing tasks and making decisions. Nowadays, interactive technologies have the potential to be applied in education and are presented as facilitators of both learning and teaching activities. This paper presents the application of an approach to analyze the development of logical thinking with computer aid. To do so, we developed and applied a test environment to two student groups to assess improvement in learning and, consequently, in logical thinking. The results pointed to a visible development of the students' logical thinking, showing that they did not have difficulty executing the proposed tasks. This allowed us to build an illustrated view of the thought training that helps them achieve their goals and make decisions. The paper also highlights the importance of using computational tools to support teachers in classrooms and to stimulate the development of logical thinking.

Breno Lisi Romano, Adilson Marques da Cunha
45. A Transversal Sustainability Analysis of the Teaching-Learning Process in Computer Students and Groups

Since the first UN conference on the environment, in 1972, many agreements have sought to establish goals for balancing economic and social growth with environmental preservation. In Information Technology (IT), this gave rise to the Green IT concept, which can contribute to a more sustainable environment and ensure economic benefits. In this context, we conducted a transversal field survey of a sample of 150 students of a technical IT course from five campuses of the Sao Paulo Federal Institute of Education, Science and Technology, in order to identify their competences (knowledge, abilities, and actions) regarding sustainability in its broad sense and as applied to IT. The results show an opportunity to work on the sustainability concept and use it to turn the students into a collective transformational agent. We also identified the need to further develop their abilities related to Green IT and its importance to the IT field.

Fernanda Carla de Oliveira Prado, Luciel Henrique de Oliveira
46. Cloud Computing: A Paradigm Shift for Central University of Technology

Education is key for today's generation: it helps the mind think critically and shapes it to produce innovations every day. Academics and universities of technology are currently exploring new technologies to advance the methods used in teaching and learning. One of the technologies that has recently emerged is cloud computing, a distributed form of computing that delivers software and hardware as a service via the Internet instead of having the hardware or software sit on your desktop or somewhere inside your company's network. Cloud computing relies on shared computing resources rather than local servers or personal devices to handle applications. As a new paradigm, it can offer institutions quality education by providing the latest infrastructure in terms of hardware and software. This paper focuses on the introduction of cloud computing at the Central University of Technology to improve teaching and learning methods.

Dina Moloja, Tlale Moretlo
47. Peer Music Education for Social Sounds in a CLIL Classroom

Motivation is an important factor in a CLIL (Content and Language Integrated Learning) classroom and is the key to success in the learning process. ICT (Information and Communication Technology) is an important tool for improving students' motivation, and it offers opportunities to develop both academic knowledge and language skills. The aim of this paper is to analyze the impact of ICT in a CLIL course and the related effects of using the foreign language in a musical setting. We present the findings of a case study showing that OPEN SoundS, a musical environment designed and developed as a virtual studio where students and teachers from all over the world can create collaborative musical projects, is a very usable tool that is highly appreciated by teachers and students. The study highlights, on the one hand, a substantial improvement in the students' learning process thanks to the impact that OPEN SoundS had on their motivation, and, on the other hand, the importance for teachers of using new tools.

Della Ventura Michele
48. Teaching Distributed Systems Using Hadoop

Databases and distributed systems have fundamental relevance in computer science; they are usually presented in courses where a high level of abstraction characterizes the teaching and learning processes. Consequently, the teaching method needs to evolve to fulfill present requirements. Grounded in these concepts, the main goal of this paper is to introduce a teaching methodology based on benchmark tests. Our methodology was conducted using the Hadoop framework, and it is innovative and proved effective. It allows students to be exposed to complex data, system architecture, network infrastructure, and trending technologies and algorithms. During the courses, students analyzed the performance of several computational architectures through benchmark tests run locally and on the cloud, evaluating the processing time of each architecture. Our methodology proved to be a supportive learning method that allows students to have contact with trending tools.

Ronaldo C. M. Correia, Gabriel Spadon, Danilo M. Eler, Celso Olivete Jr., Rogério E. Garcia
49. MUSE: A Music Conducting Recognition System

In this paper, we introduce Music in a Universal Sound Environment (MUSE), a system for gesture recognition in the domain of musical conducting. Our system captures conductors' musical gestures to drive a MIDI-based music generation system, allowing a human user to conduct a fully synthetic orchestra. The system also aims to improve a conductor's technique in a fun and interactive environment. We describe how the system facilitates learning through an intuitive graphical interface and how we used techniques from machine learning and Conga, a finite state machine, to process input from a low-cost Leap Motion sensor, which estimates the beat patterns a conductor is suggesting by interpreting hand motions. To explore other beat detection algorithms, we also include a machine learning module that uses Hidden Markov Models (HMMs) to detect a conductor's beat patterns. An additional experiment was conducted on a future expansion of the machine learning module with Recurrent Neural Networks (RNNs), and the results prove better than a set of HMMs. MUSE allows users to control the tempo of a virtual orchestra in real time through basic conducting patterns used by conductors. Finally, we discuss a number of ways in which our system can be used for educational and professional purposes.

Chase D. Carthen, Richard Kelley, Cris Ruggieri, Sergiu M. Dascalu, Justice Colby, Frederick C. Harris Jr.

Agile Software Testing and Development

Frontmatter
50. Using Big Data, Internet of Things, and Agile for Crises Management

This paper describes the use of the Scrum agile method in a collaborative software project named Big Data, Internet of Things, and Agile for Accidents and Crises (BD-ITAC). The project applies the Scrum agile method and its best practices, the Hadoop ecosystem, and cloud computing to the management of emergencies, involving monitoring, warning, and prevention. It reports the experience of students from three different courses in the graduate program in Electronics and Computer Engineering at the Brazilian Aeronautics Institute of Technology (Instituto Tecnologico de Aeronautica – ITA) during the first semester of 2016. The major contribution of this work is the application of interdisciplinary Problem-Based Learning, in which students worked asynchronously and geographically dispersed to deliver valuable increments. The work was performed during four project sprints over just sixteen academic weeks, and the main project output was working, developed, and tested software. Throughout the project, a big data environment was used as a transparent way to fulfill the needs of alert and crisis management.

James de Castro Martins, Adriano Fonseca Mancilha Pinto, Edizon Eduardo Basseto Junior, Gildarcio Sousa Goncalves, Henrique Duarte Borges Louro, Jose Marcos Gomes, Lineu Alves Lima Filho, Luiz Henrique Ribeiro Coura da Silva, Romulo Alceu Rodrigues, Wilson Cristoni Neto, Adilson Marques da Cunha, Luiz Alberto Vieira Dias
51. An Agile and Collaborative Model-Driven Development Framework for Web Applications

Given the need to investigate and present new solutions that combine agile modeling practices, MDD, and collaborative development between clients and developers to successfully create web applications, the goal of this paper is to present an Agile and Collaborative Model-Driven Development framework for web applications (AC-MDD Framework). This framework aims to increase productivity by generating source code from models and reducing the waste of resources in the modeling and documentation stages of a web application. To fulfill this goal, we have used new visual constructs from a new UML profile called Agile Modelling Language for Web Applications (WebAgileML) and the Web-ACMDD Method to operate the AC-MDD Framework. The methodology was successfully applied to an academic project, demonstrating the feasibility of the proposed framework, method, and profile.

Breno Lisi Romano, Adilson Marques da Cunha
52. Generating Control Flow Graphs from NATURAL

This work aims to generate white-box test cases from a mainframe NATURAL code fragment using the Control Flow Graph technique. Basically, it enables the analysis of a code fragment, generating its control flow represented from a graph perspective and, as a consequence, providing the automatic generation of white-box test cases. Furthermore, this work contributes to lowering the difficulty of executing white-box tests within a mainframe environment and brings a significant reduction in the execution time inherent in software testing, adding testing expertise to NATURAL development teams. Usually there is not enough time available to test all possible paths in an algorithm, even in a simple application. Therefore, this work provides automatic generation of test cases to support team decisions regarding which test cases are the most relevant to perform.
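As a rough illustration of how a control flow graph yields white-box test cases, the sketch below enumerates entry-to-exit paths over a toy graph using the networkx library; the node names are placeholders rather than structures extracted from real NATURAL code.

# Illustrative only: each entry-to-exit path is a candidate white-box test case.
import networkx as nx

cfg = nx.DiGraph()
cfg.add_edges_from([
    ("entry", "decide"),          # IF condition
    ("decide", "then_block"),     # true branch
    ("decide", "else_block"),     # false branch
    ("then_block", "join"),
    ("else_block", "join"),
    ("join", "exit"),
])

for i, path in enumerate(nx.all_simple_paths(cfg, "entry", "exit"), start=1):
    print(f"test case {i}: {' -> '.join(path)}")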

Strauss Cunha Carvalho, Renê Esteves Maria, Leonardo Schmitt, Luiz Alberto Vieira Dias
53. Requirements Prioritization in Agile: Use of Planning Poker for Maximizing Return on Investment

Agile methodologies are gaining popularity at a lightning pace and have provided the software industry with a way to deliver products incrementally and rapidly. With changing requirements welcomed and delivery made incremental, requirements prioritization becomes vital for the success of the product and thereby the organization. However, prioritizing requirements can become a nightmare for product owners, and there is no easy way to create a product backlog, a 1-to-n list of prioritized requirements. For an organization to succeed, it is crucial to work first on requirements that are not only of high value to the customer but also require minimum cost and effort, in order to maximize Return On Investment (ROI). Agile values and principles address software craftsmanship and ways to write good-quality code, thereby minimizing the introduction of new technical debt; however, no solution is described for handling existing technical debt in legacy projects. To maintain a sustainable pace, technical debt needs to be managed efficiently so that teams are not bogged down. This paper discusses estimating priority using planning poker with modified Fibonacci series cards and provides a multi-phase solution to create a product backlog that maximizes ROI. It also provides a method for handling and prioritizing technical debt and discusses the impact of non-functional requirements on technical debt prioritization. The proposed solution is then substantiated using an industrial case study.

Vaibhav Sachdeva
54. EasyTest: An Approach for Automatic Test Cases Generation from UML Activity Diagrams

Test case generation is one of the great challenges for the software testing community because of the development effort and cost required to create, validate, and test a large number of test cases. Automating this process increases testing productivity and reduces labor hours. One technique that has been adopted to automate test case generation is Model-Based Testing (MBT). This paper proposes the EasyTest approach to generate test cases from UML Activity Diagrams, aiming to integrate the modeling, coding, and testing stages of a software process and to reduce development costs and effort. The proposed approach supports early detection of defects, even in the modeling stage, to prevent unidentified defects from being carried into the coding stage. The work also presents the use of the generated test cases before and after the coding stage. To verify the proposed approach, this work also presents the EasyTest Tool, which provides interoperability with the JUnit framework.

Fernando Augusto Diniz Teixeira, Glaucia Braga e Silva
55. An Integrated Academic System Prototype Using Accidents and Crises Management as PBL

This paper describes the agile development of an integrated system for accidents and crises management. This academic project prototype was developed at the Brazilian Aeronautics Institute of Technology in the second semester of 2015. The project involved 80 undergraduate and graduate students at the same time, from four different electronic and computer engineering courses. The Scrum framework was combined with Problem-Based Learning (PBL) to develop a prototype within just 17 academic weeks. The prototype was developed as a Proof of Concept (PoC) and applied to a natural disaster management scenario involving four segments: Civil Defense, Health Care, Fire Department, and Police Department. At the end of the project, it was possible to deliver an integrated academic system project prototype, associating a Control Room with web applications connected through Cockpit Display Systems (CDSs). Students were able to work geographically dispersed, using free cloud-based tools and the Safety Critical Application Development Environment (SCADE) from ANSYS Esterel Technologies, combining multiple types of hardware, such as Raspberry Pi and Arduino, and different sets of open-source tools.

Lais S. Siles, Mayara V. M. Santos, Romulo A. Rodrigues, Lineu A. L. Filho, João P. T. Siles, Renê Esteves Maria, Johnny C. Marques, Luiz A. V. Dias, Adilson M. da Cunha
56. Agile Testing Quadrants on Problem-Based Learning Involving Agile Development, Big Data, and Cloud Computing

This paper describes the use of Agile Testing Quadrants with the Scrum agile method in a collaborative software project named Big Data, Internet of Things, and Agile for Accidents and Crises (BD-ITAC). It applies Scrum and its best practices, the Hadoop ecosystem, and cloud computing to the management of emergencies, involving monitoring, warning, and prevention. It reports the experience of students, during the first semester of 2016, from three different courses of the graduate program in Electronics and Computer Engineering at the Brazilian Aeronautics Institute of Technology (Instituto Tecnologico de Aeronautica - ITA). The major contribution of this work is the academic application of Agile Testing Quadrants for Problem-Based Learning. The students worked asynchronously and geographically dispersed to deliver valuable increments. The work was performed over four project sprints in just sixteen academic weeks, and its main output was working, developed, and tested software.

James de Castro Martins, Adriano Fonseca Mancilha Pinto, Gildarcio Sousa Goncalves, Rafael Augusto Lopes Shigemura, Wilson Cristoni Neto, Adilson Marques da Cunha, Luiz Alberto Vieira Dias
57. Integrating NoSQL, Relational Database, and the Hadoop Ecosystem in an Interdisciplinary Project involving Big Data and Credit Card Transactions

The project entitled Big Data, Internet of Things, and Mobile Devices, in Portuguese Banco de Dados, Internet das Coisas e Dispositivos Moveis (BDIC-DM), was implemented at the Brazilian Aeronautics Institute of Technology (ITA) in the first semester of 2015. It involved 60 graduate students within just 17 academic weeks. As a starting point for some features of a real-time Online Transactional Processing (OLTP) system, the Relational Database Management System (RDBMS) MySQL was used along with the NoSQL database Cassandra to store transaction data generated by a web portal and mobile applications. For batch data analysis, the Apache Hadoop ecosystem was used for Online Analytical Processing (OLAP). An infrastructure based on the Apache Sqoop tool allowed exporting data from the relational database MySQL to the Hadoop Distributed File System (HDFS), while Python scripts were used to export transaction data from the NoSQL database to HDFS. The main objective of the BDIC-DM project was to implement an e-commerce prototype system to manage credit card transactions involving large volumes of data by using different technologies; the tools used covered the generation, storage, and consumption of Big Data. This paper describes the process of integrating NoSQL and relational databases with a Hadoop cluster during an academic project using the Scrum agile method. In the end, processing time decreased significantly through the use of appropriate tools and available data. As future work, the investigation of other tools and datasets is suggested.
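The following sketch illustrates one way such a Python export step could look, reading rows from Cassandra with the cassandra-driver package, writing a CSV file, and pushing it into HDFS with the hdfs command-line client; the keyspace, table, columns, and paths are hypothetical, not those of the BDIC-DM project.

# Illustrative only: export NoSQL transaction data to HDFS for batch OLAP jobs.
import csv
import subprocess
from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("bdic_dm")                     # hypothetical keyspace
rows = session.execute("SELECT id, card_number, amount, ts FROM transactions")

local_file = "/tmp/transactions.csv"
with open(local_file, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "card_number", "amount", "ts"])
    for row in rows:
        writer.writerow([row.id, row.card_number, row.amount, row.ts])

# Push the extracted file into HDFS so Hadoop jobs can consume it.
subprocess.run(["hdfs", "dfs", "-put", "-f", local_file, "/data/transactions/"], check=True)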

Romulo Alceu Rodrigues, Lineu Alves Lima Filho, Gildarcio Sousa Gonçalves, Lineu F. S. Mialaret, Adilson Marques da Cunha, Luiz Alberto Vieira Dias
58. Enhancing Range Analysis in Software Design Models by Detecting Floating-Point Absorption and Cancellation

Floating-point subtleties are often overlooked by software developers, but they can have a considerable impact on critical systems that perform extensive manipulation of real numbers. This work presents a method for detecting floating-point absorption and cancellation when performing variable range analysis in design models. Employing this method as early as the design phase permits cheaper detection and treatment of such floating-point anomalies. Our method works by analyzing sums and subtractions in a model and verifying whether, given two variable ranges, there are sub-ranges on which absorption or cancellation can occur. We also provide the number of canceled bits for each sub-range, permitting a better assessment of the impact of each detected cancellation. We implemented our method in a range analysis tool that operates over SCADE models: results are presented in an HTML report and cancellations are depicted in graphs. This method can be used for early design analysis or as a basis for more complex, end-to-end numerical precision verification.
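To make the two anomalies concrete, the sketch below checks absorption and estimates the number of canceled bits for individual double-precision values; the full method reasons over variable ranges in SCADE models, which this illustration does not attempt.

# Illustrative only: point-wise checks for absorption and cancellation.
import math

def is_absorbed(a, b):
    """b is absorbed by a if adding it does not change a at all."""
    return b != 0.0 and a + b == a

def cancelled_bits(a, b):
    """Rough count of significand bits lost when computing a - b for nearby values."""
    if a == b:
        return 53  # full cancellation in IEEE-754 double precision
    return max(0, int(math.log2(abs(a) / abs(a - b))))

print(is_absorbed(1.0e16, 1.0))          # True: 1.0 vanishes next to 1e16
print(cancelled_bits(1.0000001, 1.0))    # ~23 leading bits cancel out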

Marcus Kimura Lopes, Ricardo Bedin França, Luiz Alberto Vieira Dias, Adilson Marques da Cunha
59. Requirements Prioritization Using Hierarchical Dependencies

Software development environments are susceptible to changes in requirements due to abrupt updates in market needs. This imposes a burden on the software development team to adhere to these new changes by prioritizing them based on their significance to the project. However, prioritizing the list of requirements is not a trivial task, as it involves several attributes such as requirement importance, complexity, cost, and completion time. Stakeholder requirements are usually interrelated, since they may contribute to the same use cases and the same quality attributes identified during the specification phase. Thus, the relationships among requirements at different specification levels should be considered in the prioritization process. In this paper, we propose a new approach for requirements prioritization that uses the relationships between stakeholder requirements and their derived specifications in the form of use cases and non-functional requirements.

Luay Alawneh

Data Mining

Frontmatter
60. An Optimized Data Mining Method to Support Solar Flare Forecast

Historical solar X-ray time series are employed to track solar activity and solar flares. The high level of X-rays released during solar flares can interfere with the operation of telecommunication equipment. It is therefore important to develop computational methods that forecast solar flares by analyzing X-ray emissions. In this work, historical solar X-ray time series sequences are employed to predict future solar flares using traditional classification algorithms. However, for large data sequences, classification algorithms face the "curse of dimensionality", where their performance and accuracy degrade as the sequence size increases. To deal with this problem, we propose a method that employs feature selection to determine which time instants of a sequence should be considered by the mining process, reducing processing time and increasing the accuracy of the mining process. Moreover, the proposed method also determines the antecedent time instants that most affect a future solar flare.
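A minimal sketch of the feature-selection idea, assuming lagged flux values as features and a univariate filter as the selector; the data is synthetic and the selector and classifier choices are illustrative, not the ones evaluated in the paper.

# Illustrative only: keep the most informative time instants before classification.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_samples, n_lags = 500, 60
X = rng.lognormal(mean=-9, sigma=1, size=(n_samples, n_lags))   # past flux values
y = (X[:, -5:].max(axis=1) > np.quantile(X, 0.9)).astype(int)   # toy "flare" label

selector = SelectKBest(score_func=f_classif, k=10).fit(X, y)
informative_lags = np.flatnonzero(selector.get_support())
print("most informative antecedent instants:", informative_lags)

clf = DecisionTreeClassifier().fit(selector.transform(X), y)
print("training accuracy on reduced features:", clf.score(selector.transform(X), y))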

Sérgio Luisir Díscola Junior, José Roberto Cecatto, Márcio Merino Fernandes, Marcela Xavier Ribeiro
61. A New Approach to Classify Sugarcane Fields Based on Association Rules

In order to corroborate the acquired knowledge of human experts with the use of computational systems in the context of agrocomputing, this work presents a novel classification method for mining agrometeorological remote sensing data and its implementation to identify sugarcane fields by analyzing Normalized Difference Vegetation Index (NDVI) series. The proposed method, called RAMiner (Rule-based Associative classifier Miner), creates a learning model from sets of mined association rules and employs the rules to construct an associative classifier. RAMiner was proposed to deal with low spatial resolution image datasets provided by two sensors/satellites (AVHRR/NOAA and MODIS/Terra). The proposal employs a two-way classification step for new data, considering the conviction value and the conviction-based probability (a weighted accuracy formulated in this work). The results were compared with those delivered by well-known classifiers, such as C4.5, ZeroR, OneR, Naive Bayes, Random Forest, and Support Vector Machine (SVM). RAMiner presented the highest accuracy (83.4%), attesting that it is well suited to mine remote sensing data.
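The conviction measure mentioned above can be computed from a rule's confidence and its consequent's support, as the small sketch below shows with made-up rule statistics; it is an independent illustration of the measure, not RAMiner itself.

# Illustrative only: rank candidate "NDVI pattern -> sugarcane" rules by conviction.
def conviction(support_consequent, confidence):
    """conviction(X -> Y) = (1 - supp(Y)) / (1 - conf(X -> Y))."""
    if confidence >= 1.0:
        return float("inf")  # rule never violated in the training data
    return (1.0 - support_consequent) / (1.0 - confidence)

rules = [
    {"pattern": "high NDVI plateau", "supp_y": 0.40, "conf": 0.90},
    {"pattern": "early-season dip",  "supp_y": 0.40, "conf": 0.70},
]
for r in rules:
    r["conviction"] = conviction(r["supp_y"], r["conf"])

best = max(rules, key=lambda r: r["conviction"])
print("strongest rule:", best["pattern"], "conviction =", round(best["conviction"], 2))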

Rafael S. João, Steve T. A. Mpinda, Ana P. B. Vieira, Renato S. João, Luciana A. S. Romani, Marcela X. Ribeiro
62. Visualizing the Document Pre-processing Effects in Text Mining Process

Text mining is an important step to categorize textual data by using data mining techniques. As most of the textual data obtained is unstructured, it needs to be processed before applying mining algorithms; this process is known as the pre-processing step of the overall text mining process, and it has an important impact on mining. This paper provides a detailed analysis of document pre-processing by employing multidimensional projection techniques to generate graphical representations of vector space models, which are computed from eight combinations of three steps: stemming, term weighting, and term elimination based on a low-frequency cut. Experiments show that the visual approach is useful to perceive the effects of pre-processing on document similarities and group formation (i.e., cohesion and separation). Additionally, quality measures were computed from the graphical representations and compared with the classification rates of k-Nearest Neighbor and Naive Bayes classifiers, and the results highlight the importance of the pre-processing step in text mining.
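A small sketch of how different pre-processing combinations can be materialized as different vector space models, here with scikit-learn's vectorizers; the corpus, the low-frequency threshold, and the omission of stemming are simplifications for illustration.

# Illustrative only: compare vector space models under different pre-processing choices.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = [
    "data mining extracts patterns from text collections",
    "text mining needs pre processing before mining algorithms",
    "visualization reveals group formation in document collections",
]

configurations = {
    "raw term frequency":         CountVectorizer(),
    "tf-idf weighting":           TfidfVectorizer(),
    "tf-idf + low-frequency cut": TfidfVectorizer(min_df=2),  # drop terms in fewer than 2 docs
}

for name, vectorizer in configurations.items():
    X = vectorizer.fit_transform(corpus)
    print(f"{name}: {X.shape[1]} terms kept")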

Danilo Medeiros Eler, Ives Renê Venturini Pola, Rogério Eduardo Garcia, Jaqueline Batista Martins Teixeira
63. Complex-Network Tools to Understand the Behavior of Criminality in Urban Areas

Complex networks are nowadays employed in several applications. Modeling urban street networks is one of them, in particular for analyzing the criminal aspects of a city. Several research groups have focused on such applications, but until now there has been a lack of a well-defined methodology for employing complex networks in a complete crime analysis process, i.e., from data preparation to a deep analysis of criminal communities. Furthermore, the "toolset" available for those works is not complete enough, also lacking techniques to maintain up-to-date, complete crime datasets and proper assessment measures. In this sense, we propose a threefold methodology for employing complex networks in the detection of highly criminal areas within a city. Our methodology comprises three tasks: (i) mapping of urban crimes; (ii) criminal community identification; and (iii) crime analysis. Moreover, it provides a proper set of assessment measures for analyzing the intrinsic criminality of communities, especially when considering different crime types. We demonstrate our methodology by applying it to a real crime dataset from the city of San Francisco, CA, USA. The results confirm its effectiveness in identifying and analyzing high-criminality areas within a city. Hence, our contributions provide a basis for further developments on complex networks applied to crime analysis.

Gabriel Spadon, Lucas C. Scabora, Marcus V. S. Araujo, Paulo H. Oliveir, Bruno B. Machado, Elaine P. M. Sousa, Caetano Traina, Jose F. Rodrigues
64. Big Data: A Systematic Review

Big Data has been gathering importance in the last few years, especially through greater data generation and, consequently, more accessible storage of such data. These data originate from social media or sensors, for example, and are stored to be transformed into useful information. The use of Big Data is becoming more common in several fields of business, mainly because it is a source of competitive differentiation through the analysis of the stored data. This study has the objective of executing a systematic review to present a broad vision of Big Data. A total of 466 publications from 2005 until March 2016 were analyzed.

Antonio Fernando Cruz Santos, Ítalo Pereira Teles, Otávio Manoel Pereira Siqueira, Adicinéia Aparecida de Oliveira
65. Evidences from the Literature on Database Migration to the Cloud

Context: The cloud computing paradigm has received increasing attention because of its claimed financial and functional benefits. A number of competing providers can help organizations access computing services without owning the corresponding infrastructure. However, the migration of legacy systems from the database perspective is not a trivial task. Goal: Characterize reports from the literature addressing the migration of legacy systems to the cloud, with emphasis on database issues. Method: The characterization followed a four-phase approach having as its starting point the selection of papers published in conferences and journals. Results: The overall data collected from the papers show that there are six main reported strategies to migrate databases to the cloud and twelve reported issues related to this migration. Conclusion: We expect that the strategies, approaches, and tools reported in the primary papers can contribute to lessons learned regarding how companies should face the migration of their legacy system databases to the cloud.

Antonio Carlos Marcelino de Paula, Glauco de Figueiredo Carneiro, Antonio Cesar Brandao Gomes da Silva
66. Improving Data Quality Through Deep Learning and Statistical Models

Traditional data quality control methods are based on users' experience or previously established business rules, which limits performance in addition to being a very time-consuming process with lower than desirable accuracy. Utilizing deep learning, we can leverage computing resources and advanced techniques to overcome these challenges and provide greater value to users. In this paper, we first review relevant works and discuss machine learning techniques, tools, and statistical quality models. Second, we offer a data quality framework based on deep learning and statistical model algorithms for identifying data quality issues. Third, we use data on salary levels from an open dataset published by the state of Arkansas to demonstrate how to identify outlier data and how to improve data quality via deep learning. Finally, we discuss future work.
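As a simple illustration of the statistical side of such a framework, the sketch below flags salary outliers with a z-score rule on synthetic values; the deep learning component described above is not reproduced here.

# Illustrative only: a statistical baseline for flagging suspicious records.
import numpy as np

salaries = np.array([42_000, 45_500, 47_000, 51_200, 49_800, 950_000, 44_100])

z_scores = (salaries - salaries.mean()) / salaries.std()
suspect = np.abs(z_scores) > 2.0          # simple z-score rule

for value, flag in zip(salaries, suspect):
    if flag:
        print(f"potential data quality issue: salary {value}")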

Wei Dai, Kenji Yoshigoe, William Parsley
67. A Framework for Auditing XBRL Documents Based on the GRI Sustainability Guidelines

The adoption of XBRL by the Global Reporting Initiative (GRI) for the disclosure of sustainability reports contributes to increasing their quality; however, the heterogeneity of enterprise information systems and the adoption of efficient audit processes are potential obstacles to the use of such reports in the corporate setting. Although the adoption of XBRL represents an improvement for the analysis and use of sustainability data, it lacks an integration framework that allows users to benefit from the standardization proposed by the GRI. In an attempt to solve system and data integration issues in this heterogeneous setting, and also to improve the efficiency of sustainability report audits, a service framework is proposed. The framework consists of a process model for the generation and analysis of sustainability reports, an architecture to structure the information environment of organizations, operators for online analytical processing (OLAP) of sustainability data, and an analysis of sustainability data.

Daniela Costa Souza, Paulo Caetano da Silva

Software Engineering

Frontmatter
68. Mining Historical Information to Study Bug Fixes

Software is present in almost all economic activity and is boosting economic growth from many perspectives. At the same time, like any other man-made artifact, software suffers from various bugs which lead to incorrect results, deadlocks, or even crashes of the entire system. Several approaches have been proposed to aid debugging. An interesting recent research direction is automatic program repair, which achieves promising results towards reducing the costs associated with defect repair in software maintenance. The identification of common bug fix patterns is important to generate program patches automatically. In this paper, we conduct an empirical study with more than 4 million bug-fixing commits distributed among 101,471 Java projects hosted on GitHub. We used a domain-specific programming language called Boa to analyze ultra-large-scale data efficiently. With Boa's support, we automatically detect the prevalence of the 5 most common bug fix patterns (identified in the work of Pan et al.) in those bug-fixing commits.

Eduardo C. Campos, Marcelo A. Maia
69. An Empirical Study of Control Flow Graphs for Unit Testing

This paper conducts an empirical study of a Control Flow Graph (CFG) visualizer from which various test coverages can be exercised directly. First, we demonstrate how control structures are extracted from Java bytecode and how compound conditions are decomposed into simple multi-level conditions. Then, we visualize the decomposed compound conditions in the CFG. The layout of a CFG is calculated by extending a force-directed drawing algorithm, with each control node of the CFG representing a simple condition. The empirical study shows that (1) the tool successfully decomposes a compound condition into simple multi-level conditions and (2) the extended force-based layout algorithm produces the best layout for visualizing CFGs.

Weifeng Xu, Omar El Ariss, Yunkai Liu
70. A New Approach to Evaluate the Complexity Function of Algorithms Based on Simulations of Hierarchical Colored Petri Net Models

This paper proposes a new approach to estimate the complexity function of algorithms based on automatic simulations of formal models. To achieve this purpose, it is necessary to count on specification techniques that, in addition to modeling the control flow of algorithms, allow performing complete simulations of the generated models for diverse scenarios. This work therefore uses Hierarchical Colored Petri Nets to model the algorithms to be analyzed. Further, it uses the resources of CPN Tools to create a set of control functions that make it possible to simulate the generated models automatically, in such a way that these simulations correspond to the real execution of the algorithms, even for non-uniform data. The complexity function of the algorithms is retrieved from these simulations. To validate this new approach, the search algorithm Minimax operating with non-uniform data is used as a case study. The technical motivation for this choice is that the dynamic control flow inherent to Minimax (caused by the presence of numerous deviation instructions) is very appropriate for testing the correct behavior of the control functions that direct the automatic simulations of the models. Further, the application of the algorithm to non-uniform data forces the creation of additional control functions that enable the models to also handle this kind of data. The results obtained confirm the correctness of the proposed approach.

Clarimundo M. Moraes Júnior, Rita Maria S. Julia, Stéphane Julia, Luciane de F. Silva
71. Detection Strategies for Modularity Anomalies: An Evaluation with Software Product Lines

A Software Product Line (SPL) is a configurable set of systems that share common and varying features. SPL requires a satisfactory code modularity for effective use. Therefore, modularity anomalies make software reuse difficult. By detecting and solving an anomaly, we may increase the software quality and ease reuse. Different detection strategies support the identification of modularity anomalies. However, we lack an investigation of their effectiveness in the SPL context. In this paper, after an evaluation of existing strategies, we compared four strategies from the literature for two modularity anomalies that affect SPLs: God Class and God Method. In addition, we proposed two novel detection strategies and compared them with the existing ones, using three SPLs. As a result, existing strategies showed high recall but low precision. In addition, when compared to detection strategies from the literature, our strategies presented comparable or higher recall and precision rates for some SPLs.
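For readers unfamiliar with detection strategies, the sketch below shows the general shape of a metric-threshold rule for God Class; the metrics and cut-off values are illustrative examples in the spirit of the literature, not the exact strategies compared in the paper.

# Illustrative only: a metric-threshold detection strategy for God Class candidates.
def looks_like_god_class(metrics):
    """metrics: dict with WMC (complexity), ATFD (accesses to foreign data),
    TCC (tight class cohesion); thresholds are example values."""
    return (metrics["WMC"] >= 47
            and metrics["ATFD"] > 5
            and metrics["TCC"] < 0.33)

candidate = {"name": "FeatureManager", "WMC": 63, "ATFD": 9, "TCC": 0.12}
if looks_like_god_class(candidate):
    print(candidate["name"], "flagged as a God Class candidate")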

Eduardo Fernandes, Priscila Souza, Kecia Ferreira, Mariza Bigonha, Eduardo Figueiredo
72. Randomized Event Sequence Generation Strategies for Automated Testing of Android Apps

Mobile apps are often tested with automatically generated sequences of Graphical User Interface (GUI) events. Dynamic GUI testing algorithms construct event sequences by selecting and executing events from GUI states at runtime. The event selection strategy used in a dynamic GUI testing algorithm may directly influence the quality of the test suites it produces. Existing algorithms use a uniform probability distribution to randomly select events from each GUI state and they are often not directly applicable to mobile apps. In this paper, we develop a randomized algorithm to dynamically construct test suites with event sequences for Android apps. We develop two frequency-based event selection strategies as alternatives to uniform random event selection. Our event selection algorithms construct event sequences by dynamically altering event selection probabilities based on the prior selection frequency of events in each GUI state. We compare the frequency-based strategies to uniform random selection across nine Android apps. The results of our experiments show that the frequency-based event selection strategies tend to produce test suites that achieve better code coverage and fault detection than test suites constructed with uniform random event selection.
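A minimal sketch of frequency-based selection, assuming selection counts are tracked per GUI state and events are drawn with probability inversely proportional to how often they were already chosen; event names are placeholders.

# Illustrative only: pick the next GUI event, biasing against frequently chosen ones.
import random

def pick_event(events, selection_count):
    """events: list of event ids available in the current GUI state.
    selection_count: dict mapping event id -> times selected so far."""
    weights = [1.0 / (1 + selection_count.get(e, 0)) for e in events]
    choice = random.choices(events, weights=weights, k=1)[0]
    selection_count[choice] = selection_count.get(choice, 0) + 1
    return choice

counts = {}
state_events = ["tap_login", "tap_settings", "swipe_list", "press_back"]
sequence = [pick_event(state_events, counts) for _ in range(10)]
print(sequence)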

David Adamo, Renée Bryce, Tariq M. King
73. Requirement Verification in SOA Models Based on Interorganizational WorkFlow Nets and Linear Logic

This paper presents a method for requirement verification in Service-Oriented Architecture (SOA) models based on Interorganizational WorkFlow nets. In SOA design, a requirement model (public model) only specifies tasks which are of interest to all parties involved in the corresponding interorganizational architectural model (a set of interacting private models). Architectural models involve many more tasks: they contain the detailed tasks of all the private processes (individual workflow processes) that interact through asynchronous communication mechanisms in order to produce the services specified in the requirement model. In the proposed approach, services correspond to scenarios of Interorganizational WorkFlow nets. For each scenario of the public and private models, a Linear Logic proof tree is produced and transformed into a precedence graph that specifies task sequence requirements. Precedence graphs of the public and private models are then compared in order to verify whether all existing scenarios of the requirement model also exist in the architectural model. The comparison of the models (public to private) is based on the notion of branching bisimilarity, which proves behavioral equivalence between distinct finite automata.

Kênia Santos de Oliveira, Stéphane Julia, Vinícius Ferreira de Oliveira
74. Cambuci: A Service-Oriented Reference Architecture for Software Asset Repositories

Reuse of assets results in faster execution of a software project. Considering the importance of repositories to support asset reuse, we highlight the benefits of using a Reference Architecture (RA) to facilitate the development of repositories. Reference architectures of repositories found in the literature are specific to a particular type of asset or represent only some functionality, and they do not fully meet the expected results of the Reuse Asset Management Process of ISO/IEC 12207. This paper presents a service-oriented reference architecture for software asset repositories, named Cambuci. For this, the systematic process ProSA-RA for supporting the definition of reference architectures is used. In order to investigate the quality of Cambuci's description, we conducted two evaluations: the first based on a checklist and the second based on an instantiation of this RA. From the results obtained, we observed improvement opportunities in the description of Cambuci and in the support it offers for the development of repositories.

Márcio Osshiro, Elisa Y. Nakagawa, Débora M. B. Paiva, Geraldo Landre, Edilson Palma, Maria Istela Cagnin
75. Extending Automotive Legacy Systems with Existing End-to-End Timing Constraints

Developing automotive software is becoming increasingly challenging due to the continuous increase in its size and complexity. The development challenge is amplified when industrial requirements dictate extensions to the legacy (previously developed) automotive software while requiring the existing timing requirements to be met. To cope with these challenges, sufficient techniques and tooling to support the modeling and timing analysis of such systems at earlier development phases are needed. Within this context, we focus on the extension of software component chains in the software architectures of automotive legacy systems. Selecting the sampling frequency, i.e., the period, for newly added software components is crucial to meet the timing requirements of the chains. The challenges in selecting periods are identified. It is further shown how to automatically assign periods to software components such that the end-to-end timing requirements are met while the runtime overhead is minimized. An industrial case study is presented that demonstrates the applicability of the proposed solution to industrial problems.

Matthias Becker, Saad Mubeen, Moris Behnam, Thomas Nolte
76. Modeling of Vehicular Distributed Embedded Systems: Transition from Single-Core to Multi-core

Model- and component-based software development has emerged as an attractive option for the development of vehicle software on single-core platforms. There are many challenges that are encountered when the existing component models, that are originally designed for the software development of vehicular distributed single-core embedded systems, are extended for the software development on multi-core platforms. This paper targets the challenge of extending the structural hierarchies in the existing component models to enable the software development on multi-core platforms. The proposed extensions ensure backward compatibility of the component models to support the software development of legacy single-core systems. Moreover, the proposed extensions also anticipate forward compatibility of the component models to the future many-core platforms.

Saad Mubeen, Alessio Bucaioni
77. On the Impact of Product Quality Attributes on Open Source Project Evolution

Context: Several Open Source Software (OSS) projects have adopted frequent releases as a strategy to deliver both new features and bug fixes on time. This cycle begins with express requests from the project's community, registered as issues in bug repositories by active users and developers. Each OSS project has its own priorities, established by its respective community. A still open question is the set of criteria and priorities that influence the decisions of which issues should be analyzed, implemented/solved, and delivered in the next releases. In this paper, we present an exploratory study whose goal is to investigate the influence of target product quality attributes on the software evolution practices of OSS projects. The goal is to search for evidence of relationships between these target attributes, the priorities assigned to registered issues, and the ways they are delivered in product releases. To this end, we asked six participants of an exploratory study to identify these attributes through data analysis of the repositories of three well-known OSS projects: LibreOffice, Eclipse, and Mozilla Firefox. Evidence indicated by the participants suggests that OSS community developers use criteria/priorities driven by specific software product quality attributes to plan and integrate software releases.

António César B. Gomes da Silva, Glauco de Figueiredo Carneiro, Miguel Pessoa Monteiro, Fernando Brito e Abreu, Kattiana Constantino, Eduardo Figueiredo
78. AD-Reputation: A Reputation-Based Approach to Support Effort Estimation

Estimating the effort of software maintenance activities is a complex task. When inaccurately accomplished, effort estimation can reduce quality and hinder software delivery. In a scenario in which maintenance and evolution activities are geographically distributed, collaboration is a key issue to estimate and meet deadlines. In this vein, dealing with the reputation of developers, as well as establishing and promoting trust among them, are factors that affect collaboration activities. This paper presents an approach aimed at supporting effort estimation in collaborative maintenance and evolution activities. It encompasses a model for reputation calculation, visualization elements, and integration with change request repositories. Through an experimental study, quantitative and qualitative data were collected. A statistical analysis was applied and showed that AD-Reputation is feasible for estimating the effort spent on collaborative maintenance activities.

Cláudio A. S. Lélis, Marcos A. Miguel, Marco Antônio P. Araújo, José Maria N. David, Regina Braga
79. Pmbench: A Micro-Benchmark for Profiling Paging Performance on a System with Low-Latency SSDs

Modern non-volatile memory storage devices operate significantly faster than traditional rotating disk media. Disk paging, though never intended for use as an active memory displacement scheme, may be viable as a cost-efficient cache between main memory and sufficiently fast secondary storage. However, existing benchmarks are not designed to accurately measure the microsecond-level latencies at which next-generation storage devices are expected to perform. Furthermore, full exploitation of disk paging to fast storage media will require considerations in the design of operating system paging algorithms. This paper presents pmbench – a multiplatform synthetic micro-benchmark that profiles system paging characteristics by accurately measuring the latency of paging-related memory access operations. Also presented are sample pmbench results on Linux and Windows using a consumer NAND-based SSD and a prototype low-latency SSD as swap devices. These results implicate operating system-induced software overhead as a major bottleneck for system paging, which intensifies as SSD latencies decrease.

Jisoo Yang, Julian Seymour
80. Design Observer: A Framework to Monitor Design Evolution

This paper presents a framework, named DesignObserver, to automatically monitor and track design changes during software evolution. The framework helps preserve code-design consistency during incremental maintenance activities: the design model is automatically updated based on implemented code changes. Preserving design quality is another important feature of the framework. Design changes are analyzed to determine whether they violate pre-defined quality and pattern constraints, and any design change that breaks a design pattern or violates a quality metric is identified and highlighted. The framework also measures the cost of some potential design changes to evaluate their impact on other classes. Finally, designers and their contributions are identified and reported by the framework to support the assignment of maintenance tasks. A set of tools that we have previously developed is used to build DesignObserver.

Maen Hammad
81. Generating Sequence Diagram and Call Graph Using Source Code Instrumentation

Understanding the dynamic behavior of source code is key to software comprehension and maintenance. This paper presents a reverse engineering approach to build a UML sequence diagram and a call graph by monitoring program execution. The generated models show the dynamic behavior of a set of target methods with timing and object creation information. Timing and dynamic behavior details are extracted by instrumenting the target code with a set of calls to a monitoring function at specific instrumentation points in the source code. The proposed approach is applied to a case study to show the effectiveness and the benefits of the generated models.
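A rough Python analogue of the instrumentation idea, in which a wrapper reports entry, exit, and timing of target methods to a monitoring log from which a sequence diagram could later be reconstructed; the traced functions are placeholders for real target methods.

# Illustrative only: record call/return events and timestamps for target methods.
import functools
import time

trace_log = []

def monitor(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        trace_log.append(("call", func.__qualname__, time.perf_counter()))
        result = func(*args, **kwargs)
        trace_log.append(("return", func.__qualname__, time.perf_counter()))
        return result
    return wrapper

@monitor
def load_order(order_id):
    return {"id": order_id}

@monitor
def process(order_id):
    return load_order(order_id)

process(42)
for event, name, ts in trace_log:   # the ordered events are enough to draw a sequence diagram
    print(f"{ts:.6f} {event:>6} {name}")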

Mustafa Hammad, Muna Al-Hawawreh

High Performance Computing Architectures

Frontmatter
82. Data Retrieval and Parsing of Form 4 from the Edgar System using Multiple CPUs

In this paper we present a parallel system that retrieves and parses Form 4 documents from the Securities and Exchange Commission's Electronic Data Gathering, Analysis and Retrieval database (EDGAR). This information is very important for investors looking at insider trading information to make investment decisions. However, the information's usefulness is inversely related to the time it takes to retrieve and analyze it. A sequential system is slow due to the latency associated with the retrieval and parsing of the Form 4s, which on average exceed 1000 per day. By making the retrieval and parsing of Form 4s parallel, we were able to attain a maximum speedup of 20x, parsing a daily index with 1000 forms in under 30 minutes instead of the 9 hours it takes using a single processor.
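A hedged sketch of the parallel retrieve-and-parse step using a process pool; the URLs, the parsing logic, and the worker count are simplified placeholders and do not reflect the paper's actual EDGAR index handling.

# Illustrative only: spread document retrieval and parsing across multiple CPUs.
from multiprocessing import Pool
import requests

def fetch_and_parse(url):
    response = requests.get(url, timeout=30)
    # toy "parse": count reported transactions in the Form 4 XML
    return url, response.text.count("<nonDerivativeTransaction>")

if __name__ == "__main__":
    form4_urls = ["https://example.org/form4/doc%03d.xml" % i for i in range(30)]  # placeholders
    with Pool(processes=8) as pool:            # one worker per available core
        for url, n_transactions in pool.imap_unordered(fetch_and_parse, form4_urls):
            print(url, n_transactions)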

Raja H. Singh, Nolan Burfield, Frederick Harris Jr.
83. A New Approach for STEM Teacher Scholarship Implementation

This research introduces an innovative approach to STEM teacher scholarship implementation. The purpose of this research is to introduce a new STEM teacher implementation model that will help recruit high-quality Noyce Scholars and retain and train them as effective STEM teachers. In this paper, we share the actual implementation experience of this strategy over the past three years in an existing STEM teacher education project funded by the National Science Foundation (NSF). In addition, we show that this strategy is effective with two types of evidence. First, we use survey data collected from over twenty-five STEM teacher candidates. Second, we report the actual interview data and student feedback and share the lessons learned from the NSF Noyce project.

Fangyang Shen, Janine Roccosalvo, Jun Zhang, Yang Yi, Yanqing Ji
84. Improving the Performance of the CamShift Algorithm Using Dynamic Parallelism on GPU

The CamShift algorithm is widely used for tracking dynamically sized and positioned objects that appear in a sequence of video pictures captured by a camera. In spite of a large body of literature on CamShift for the CPU platform, research on the massively parallel Graphics Processing Unit (GPU) platform is quite limited, even though the GPU is an emerging technology for high-performance computing. In this work, we improve on existing work by utilizing a new strategy, Dynamic Parallelism (DP), which helps to minimize the communication cost between a GPU device and the CPU. As far as we know, our project is the first proposal to utilize DP on a GPU device to further improve the CamShift algorithm. In experiments, we verify that our design is up to three times faster than existing work thanks to DP, while achieving the same tracking accuracy. These improvements allow the CamShift algorithm to be used in more performance-demanding environments, for example, in real-time video processing with high-speed cameras or in processing videos with high resolution.

Yun Tian, Carol Taylor, Yanqing Ji
85. Sumudu Transform for Automatic Mathematical Proof and Identity Generation

The Laplace Transform is among a few integral transforms playing important roles in pure and applied mathematics. The Sumudu Transform is relatively new but has many good properties for solving problems in computational science. In this work, the authors introduce the Sumudu Transform in a computational approach that leads to various interesting and useful applications. Traditionally, the invention of mathematical formulae and the proof of mathematical identities belong to intelligent human beings; in this work, we show that the Sumudu Transform can be used to prove existing formulae and generate new mathematical identities automatically. To show how it works, a good number of sample formulae and identities are provided for demonstration, including some famous ones such as Euler's Formula, de Moivre's Identity, and the Pythagorean Identity. Not only is the work presented here straightforward to introduce to students, but it can also be useful in assisting new research related to computational applications.
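A small symbolic sketch of the computational flavor of this approach, using SymPy to evaluate the defining integral S[f](u) = ∫_0^∞ f(ut) e^(-t) dt and recover known transforms; this is an independent illustration, not the authors' tool.

# Illustrative only: compute Sumudu transforms symbolically from the defining integral.
import sympy as sp

t, u = sp.symbols("t u", positive=True)

def sumudu(f):
    """Sumudu transform of f(t), evaluated from its defining integral."""
    return sp.simplify(sp.integrate(f.subs(t, u * t) * sp.exp(-t), (t, 0, sp.oo)))

print(sumudu(t))           # expected: u
print(sumudu(sp.sin(t)))   # expected: u/(u**2 + 1)
print(sumudu(sp.cos(t)))   # expected: 1/(u**2 + 1)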

Jun Zhang, Fangyang Shen
86. A Multiobjective Optimization Method for the SOC Test Time, TAM, and Power Optimization Using a Strength Pareto Evolutionary Algorithm

System-on-Chip (SOC) test minimization is an important problem that has been receiving considerable attention. The problem is tightly coupled with the number of TAM bits, power, and wrapper design. This paper presents a multiobjective optimization approach for the SOC test scheduling problem. The method uses a Strength Pareto Evolutionary Algorithm that minimizes the overall test application time in addition to power, wrapper design, and TAM assignment. We present various experimental results that demonstrate the effectiveness of our method.

Wissam Marrouche, Rana Farah, Haidar M. Harmanani

Computer Vision, HCI and Image Processing/Analysis

Frontmatter
87. Handwritten Feature Descriptor Methods Applied to Fruit Classification

Several works have presented distinct ways to compute feature descriptors for different applications and domains. A main issue in Computer Vision systems is how to choose the best descriptor for a specific domain. Usually, Computer Vision experts try several combinations of descriptors until reaching a good result in classification, clustering, or retrieval; for instance, the best descriptor is the one capable of discriminating the dataset images and reaching high correct classification rates. In this paper, we use feature descriptors commonly applied to handwritten images to improve the classification of images from fruit datasets. We present distinct combinations of the Zoning and Character-Edge Distance methods to generate feature descriptors from fruits. The combination of these two descriptors with the Discrete Fourier Transform led us to a new approach for acquiring features from fruit images. In the experiments, the new approaches are compared with the main descriptors presented in the literature, and our best feature descriptor approach reaches a correct classification rate of 97.5%. Additionally, we also show how to perform a detailed inspection of feature spaces through an image visualization technique based on a similarity tree known as Neighbor Joining (NJ).
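To make the zoning idea concrete, the sketch below splits a grayscale image into a grid of zones and uses the per-zone mean intensity as the descriptor; the image is random data standing in for a fruit photo, and the Character-Edge Distance and DFT parts are not shown.

# Illustrative only: zoning descriptor as per-zone mean intensity.
import numpy as np

def zoning_descriptor(image, grid=(4, 4)):
    """Split a 2-D grayscale image into grid zones and return per-zone mean intensity."""
    rows, cols = grid
    h, w = image.shape
    features = []
    for i in range(rows):
        for j in range(cols):
            zone = image[i * h // rows:(i + 1) * h // rows,
                         j * w // cols:(j + 1) * w // cols]
            features.append(zone.mean())
    return np.array(features)

fruit = np.random.default_rng(1).integers(0, 256, size=(128, 128))
print(zoning_descriptor(fruit).shape)   # (16,) -> one feature per zone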

Priscila Alves Macanhã, Danilo Medeiros Eler, Rogério Eduardo Garcia, Wilson Estécio Marcílio Junior
88. A No-Reference Quality Assessment Method of Color Image Based on Visual Characteristic

With the advent of the information age, images play an increasingly important role in people's daily work and lives, and people's requirements for image quality are increasing. In order to make image quality assessment results more in line with a person's subjective perception, an improved no-reference quality assessment method for color images is presented. First, the image is divided into several blocks and the contrast of each block is calculated under the Lab color model; then, each block's effect on the overall visual perception, including the masking effect, is taken into consideration to refine the overall image quality score and achieve consistency between objective and subjective assessments. Experiments were carried out using images from several image databases together with their given subjective scores. The experimental results show that the presented method has good consistency with subjective perceptions of image quality, and its assessment results are more accurate compared with other methods.

Ling Dong, Xianqiao Chen
89. Approaches for Generating Empathy: A Systematic Mapping

Empathy plays an important role in social interactions, such as an effective teaching-learning process in a teacher-student relationship, or a company-client or employee-customer relationship that retains potential clients and provides them with greater satisfaction. Increasingly, people are using technology to support their interactions, especially when the interlocutors are geographically distant from one another, and this has a negative impact on the empathic capacity of individuals. In Computer Science, there are different approaches, techniques, and mechanisms to promote empathy in social or human-computer interactions. Therefore, this article presents a systematic mapping to identify and systematize the approaches, techniques, and mechanisms used in computing to promote empathy. As a result, we have identified existing approaches to promote empathy (e.g., collaborative learning environments, virtual and robotic agents, and collaborative/affective games), the main areas involved (e.g., human-computer interaction, artificial intelligence, robotics, and collaborative systems), the top researchers and their affiliations who are potential contributors to future research, and, finally, the growth status of this line of research.

Breno Santana Santos, Methanias Colaço Júnior, Maria Augusta S. N. Nunes
90. Multi-camera Occlusion and Sudden-Appearance-Change Detection Using Hidden Markovian Chains

In this paper, a new object tracking algorithm using multiple cameras for surveillance applications is proposed. The proposed algorithm detects sudden appearance changes and occlusions using a hidden Markovian statistical model, where the random events of sudden appearance changes and occlusions are the hidden variables. The tracking algorithm uses both a discriminative model and a generative model of the tracked object. The prediction errors of the generative model are used as the observed random variables in the hidden Markovian model. We assume that the prediction errors are exponentially distributed when no sudden appearance change or occlusion occurs, and uniformly distributed when such random events occur. Almost all state-of-the-art discriminative-model-based object tracking algorithms need to update the discriminative models online and thus suffer from a so-called drifting problem. We show in this paper that the obtained estimates of sudden appearance changes and occlusions can be used to alleviate such drifting problems. Finally, we show experimental results indicating that our algorithm detects sudden appearance changes and occlusions reliably and can be used to alleviate drifting problems.

Xudong Ma
91. Store Separation Analysis Using Image Processing Techniques

Store separation flight tests are considered a high-risk activity. These tests are performed to determine the position and attitude of a store after it is deliberately separated or ejected while still within the aircraft's area of interference. A process that adds value to experimental tests is photogrammetry, which extracts data from cameras using computational resources that analyze the frames of the videos. Few institutions in the world have the ability to perform this kind of activity; in Brazil, the Flight Tests and Research Institute (IPEV) is one of them. At IPEV, the determination of the store separation trajectory is performed after the flight using a commercial tool, and the process is inefficient and costly because the activity demands many weeks of work to analyze the results. Thus, developing a solution to determine the store separation trajectory will increase the efficiency and safety of the flight test campaign; the benefits include better use of resources, minimization of workload, and reduced costs and time. In this work, we demonstrate the steps for developing an application that uses a synthetic scenario. The use of a synthetic scenario allows the simulation of different separation scenarios, making the application more robust. The experiments performed to validate the application are also demonstrated.

Luiz Eduardo Guarino de Vasconcelos, Nelson Paiva O. Leite, André Yoshimi Kusumoto, Cristina Moniz Araújo Lopes
92. Ballistic Impact Analysis Using Image Processing Techniques

Materials for ballistic protection are developed with a life-saving purpose and must be able to withstand impacts without damage to the body they are protecting. The development of new materials involves the analysis of the deformations occurring after impact. Traditional processes, which use plasticine, strain gages, extensometers, and linear variable differential transducers to carry out the measurements, are laborious and inefficient. Few institutions in the world have the ability to develop new ballistic materials; in Brazil, the Aeronautics and Space Institute (IAE) develops new materials using the traditional plasticine-based process. Thus, the development of a solution to determine the deformations and their characterization (diameter, height) can increase the efficiency of this type of test. In this work, the steps for the development of an application that uses image processing techniques to determine the diameter of the deformations are shown, along with the experiments performed and the results obtained.
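As a rough illustration of the measurement step, the sketch below estimates a deformation diameter from a segmented binary image using OpenCV contours and a minimum enclosing circle; the mask is synthetic and the millimetre-per-pixel factor is a placeholder for a real calibration value.

# Illustrative only: estimate the diameter of a segmented deformation.
import cv2
import numpy as np

mask = np.zeros((400, 400), dtype=np.uint8)
cv2.circle(mask, (200, 200), 57, 255, -1)   # synthetic deformation region

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
(_, _), radius_px = cv2.minEnclosingCircle(largest)

mm_per_pixel = 0.25                          # hypothetical calibration value
print("estimated diameter: %.1f mm" % (2 * radius_px * mm_per_pixel))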

Luiz Eduardo Guarino de Vasconcelos, Nelson Paiva O. Leite, Cristina Moniz A. Lopes
93. iHelp HEMOCS Application for Helping Disabled People Communicate by Head Movement Control

The combination of head-mounted displays (HMDs) with mobile devices provides a new, low-cost form of human-computer interaction, since such devices are hands-free systems. In this paper, we introduce a proof of concept of our method for recognizing head movement as the controller of a mobile application and propose a new way for elderly or disabled people to communicate with others. The implementation of the iHelp application on iOS devices shows that the proposed method is appropriate for real-time human-computer interaction using head movement control alone, enabling users to communicate their needs to others through a mobile application.

Herman Tolle, Kohei Arai

Signal Processing, UAVs

Frontmatter
94. Evaluation of Audio Denoising Algorithms for Application of Unmanned Aerial Vehicles in Wildlife Monitoring

Unmanned Aerial Vehicles (UAVs) have become a popular alternative for wildlife monitoring and border surveillance applications. Eliminating the UAV's background noise for effective classification of the target audio signal is still a major challenge, due to the background noise of the vehicles and environments and the distances to signal sources. The main goal of this work is to explore acoustic denoising algorithms for effective removal of the UAV's background noise. Existing denoising algorithms, such as Adaptive Least Mean Square (LMS), Wavelet Denoising, Time-Frequency Block Thresholding, and Wiener Filtering, were implemented and their performance evaluated. The LMS and DWT algorithms were implemented on a DSP board and their performance compared using software simulations. Experimental results showed that the LMS algorithm's performance is robust compared to the other denoising algorithms. The SNR gain required for effective classification of the denoised audio signal is also demonstrated.
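A minimal NumPy sketch of an adaptive LMS noise canceller of the kind evaluated above, assuming a reference channel that captures rotor noise; the signals, filter length, and step size are synthetic and arbitrary.

# Illustrative only: LMS adaptive noise cancellation with a noise reference channel.
import numpy as np

rng = np.random.default_rng(0)
n = 8000
target = np.sin(2 * np.pi * 440 * np.arange(n) / 8000)       # clean "animal call"
noise = rng.normal(size=n)                                    # rotor noise reference
primary = target + 0.8 * np.convolve(noise, [0.6, 0.3, 0.1], mode="same")  # mic signal

def lms_cancel(primary, reference, taps=16, mu=0.01):
    w = np.zeros(taps)
    out = np.zeros_like(primary)
    for i in range(taps, len(primary)):
        x = reference[i - taps:i][::-1]       # most recent samples first
        error = primary[i] - w @ x            # error = cleaned sample
        w += 2 * mu * error * x               # standard LMS weight update
        out[i] = error
    return out

cleaned = lms_cancel(primary, noise)
print("residual noise power:", np.mean((cleaned[1000:] - target[1000:]) ** 2))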

Yun Long Lan, Ahmed Sony Kamal, Carlo Lopez-Tello, Ali Pour Yazdanpanah, Emma E. Regentova, Venkatesan Muthukumar
95. Development of a Portable, Low-Cost System for Ground Control Station for Drones

This paper aims at the development of a portable, low-cost software and hardware environment to control the flight of a drone (an aircraft of up to 5 kg). The project uses the Android operating system on a tablet, which gives the user the benefits of mobility, ergonomics, and ease of field work that are characteristic of this device. The project focuses on portability; to achieve that, the chosen components are generally lighter and easier to transport than those traditionally used in a ground control station (GCS). In addition, the components and tools used in the project are low-cost, which increases the feasibility of its implementation. The RPA (Remotely Piloted Aircraft) flight control is performed through communication between the Paparazzi UAV system and the PPRZonDroid application, which must be previously installed on a Raspberry Pi board and on a tablet, respectively. The proposed ground control station has a wide range of uses, since it can be employed in various scenarios, both military and civilian.

Douglas Aparecido Soares, Alexandre Carlos Brandão Ramos, Roberto Affonso da Costa Junior
96. Collision Avoidance Based on Reynolds Rules: A Case Study Using Quadrotors

This work presents a collision avoidance algorithm developed to drive a swarm of Unmanned Aerial Vehicles (UAVs) using Reynolds flocking rules. We used small quadrotor aircraft with a 250 mm diameter, equipped with GPS and distance sensors, controlled by a Pixhawk autopilot board and an embedded Linux computer (Raspberry Pi). The control algorithm was implemented in C++ as a package for the ROS platform and runs on the Raspberry Pi, while the Pixhawk is responsible for the low-level control of the aircraft. The distance sensors are used to detect nearby robots, and based on this information the control algorithm determines the direction in which each robot should move to avoid collisions with its neighbors. We observe that from the individual perception and local interaction of each robot a group behavior emerges, and the swarm moves together coherently. A simulation environment based on the Gazebo simulator was prepared to test and evaluate the algorithm in a way as close to reality as possible.
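A small sketch of the separation component of Reynolds flocking, computing a 2-D avoidance velocity from neighbor positions; the positions, safe distance, and gain are made up, and a real implementation would combine this with cohesion and alignment and with the ROS/Pixhawk control loop.

# Illustrative only: Reynolds-style separation rule as a 2-D velocity command.
import numpy as np

def separation_velocity(own_pos, neighbor_positions, safe_distance=2.0, gain=1.0):
    command = np.zeros(2)
    for other in neighbor_positions:
        offset = own_pos - other
        distance = np.linalg.norm(offset)
        if 0 < distance < safe_distance:
            # push away, stronger the closer the neighbor is
            command += gain * offset / distance * (safe_distance - distance)
    return command

own = np.array([0.0, 0.0])
neighbors = [np.array([1.0, 0.5]), np.array([5.0, 5.0])]   # only the first is too close
print(separation_velocity(own, neighbors))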

Rafael G. Braga, Roberto C. da Silva, Alexandre C. B. Ramos, Felix Mora-Camino
97. PID Speed Control of a DC Motor Using Particle Swarm Optimization

This research applies Particle Swarm Optimization (PSO) to the Proportional-Integral-Derivative (PID) speed control of a permanent magnet DC motor. The integration of PID control and the PSO optimization technique starts by designing the initial PID control gains so that the DC motor angular speed exhibits a given overshoot (OS) and settling time (ts). Based on these gains, we designed an initial set of particles that were subsequently modified using the PSO algorithm with the goal of reducing OS and ts. Simulation results led to the desired swarming, since the PID gains converged, causing the OS and ts to fall below the desired thresholds. Thus, our proposed PID-PSO algorithm was very successful in achieving the optimization goal of reducing OS and ts by means of PSO-optimized PID control.
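A compact sketch of the PID-PSO idea, tuning (Kp, Ki, Kd) with a basic PSO loop against a simulated first-order motor and an ITAE-style cost; the plant parameters, cost function, and PSO settings are placeholders, not the paper's DC motor model or tuning objective.

# Illustrative only: PSO searching PID gains on a toy first-order motor model.
import numpy as np

def step_response_cost(gains, k_m=1.0, tau=0.5, dt=0.001, t_end=3.0):
    kp, ki, kd = gains
    speed, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for i in range(int(t_end / dt)):
        err = 1.0 - speed                        # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        speed += dt * (-speed + k_m * u) / tau   # first-order motor dynamics
        cost += (i * dt) * abs(err) * dt         # ITAE penalizes overshoot and slow settling
    return cost

rng = np.random.default_rng(0)
n_particles, n_iters = 20, 40
pos = rng.uniform(0.0, 10.0, size=(n_particles, 3))      # (Kp, Ki, Kd) per particle
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([step_response_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 20.0)
    costs = np.array([step_response_cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("PSO-tuned gains (Kp, Ki, Kd):", np.round(gbest, 3))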

Benedicta B. Obeng, Marc Karam
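
The following Python sketch illustrates the general PID-PSO idea described above on a highly simplified first-order approximation of a DC motor; the plant parameters, cost weights, and PSO settings are assumptions, not the authors’ setup.

```python
import numpy as np

def step_response(gains, K=1.0, tau=0.5, dt=0.001, T=3.0, ref=1.0):
    """Simulate a PID-controlled first-order motor approximation
    d(omega)/dt = (-omega + K*u) / tau and return (overshoot, settling_time)."""
    kp, ki, kd = gains
    omega, integ, prev_err = 0.0, 0.0, ref
    y = []
    for _ in range(int(T / dt)):
        err = ref - omega
        integ += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integ + kd * deriv
        omega += dt * (-omega + K * u) / tau
        if not np.isfinite(omega) or abs(omega) > 1e3:
            return 10.0, T                      # penalize unstable gain sets
        y.append(omega)
    y = np.array(y)
    overshoot = max(0.0, (y.max() - ref) / ref)
    outside = np.where(np.abs(y - ref) > 0.02 * ref)[0]   # 2% settling band
    t_s = (outside[-1] + 1) * dt if len(outside) else 0.0
    return overshoot, t_s

def cost(gains):
    os_, ts_ = step_response(gains)
    return 10.0 * os_ + ts_            # weighted sum of overshoot and settling time

def pso(n_particles=20, iters=50, bounds=(0.0, 20.0)):
    """Basic global-best PSO over the three PID gains (Kp, Ki, Kd)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(*bounds, size=(n_particles, 3))
    v = np.zeros_like(x)
    pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, *bounds)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

if __name__ == "__main__":
    print("PSO-tuned PID gains:", pso())
```
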
98. Use of Intelligent Water Drops (IWD) for Intelligent Autonomous Force Deployment

This paper presents a decentralized method for autonomously directing the movement of troops or battlefield robots from areas where their capabilities are being underutilized to areas where they are needed. This technique, which relies on limited message passing, does not require a centralized controller and is thus well suited to the battlefield environment where natural or deliberately created conditions may limit communications or render a centralized controller inaccessible. The Colonel Blotto Game (simulation scenario) is extended to provide a testing framework for Intelligent Water Drops (IWD)-derivative methods. The performance of the conventional approach to the Colonel Blotto Game is characterized in application to this extended scenario. Then, an IWD approach is presented and its performance is compared to the conventional method. The IWD approach is shown to outperform the conventional approach, from a gameplay perspective, while having significantly greater processing costs. Finally, the performance of an extended approach, which plays out possibilities for the remainder of the game multiple times before making a decision, is compared with an approach based on making the best decision in the short term without extended network information. The gameplay utility of this extended solver is not demonstrated, despite it having significantly higher computational costs.

Jeremy Straub, Eunjin Kim
99. Match-the-Sound CAPTCHA

The ubiquity of the Internet has led to the development of security measures to protect its services against abusive and malicious attacks. Most commercial websites extensively use CAPTCHAs as a security measure against illegal bot attacks. Their purpose is to distinguish humans from bot programs in order to defend web services against them. In this paper, we propose a CAPTCHA scheme that is based on the cognitive abilities of human users. We name this scheme “Match-the-Sound CAPTCHA” or “MS-CAPTCHA”. After listening to a sound, users choose the best-matching object from a set of images in order to prove themselves human. A study with 50 participants was performed to assess the performance of our proposed scheme. We also conducted a feedback survey among the participants to explore the usability features of the proposed MS-CAPTCHA. Our study shows that users get annoyed with distorted text, audio, and background clutter, whereas they enjoy image-based and simple audio-based MS-CAPTCHAs more.

Noshina Tariq, Farrukh Aslam Khan

Health, Bioinformatics, Pattern Detection and Optimization

Frontmatter
100. Information Technology – Next Generation: The Impact of 5G on the Evolution of Health and Care Services

As more and more details of the 5G technology specifications are unveiled and standards emerge, it becomes clear that 5G will have an enabling effect on many different verticals, including automotive, mobility, and health. This paper gives an overview of the technical, regulatory, business, and bandwidth requirements of health care applications, including e-connectivity in the pharmaceutical domain, medical device maintenance management, hospital at home, supply chain management, precision and personalized medicine, robotics, and others, based on the latest research activity in the field.

Christoph Thuemmler, Alois Paulin, Thomas Jell, Ai Keow Lim
101. Testing Cloud Services Using the TestCast Tool

This work presents the testing requirements for cloud services, including unit and integration testing, by identifying services that could communicate with each other according to their APIs. We also present the Elvior TestCast T3 (TTCN-3) testing tool, which provides an efficient and easy-to-use solution for automating functional tests. This allows incremental development where users can test specific systems and features separately as well as the entire system as a whole. We finally present the empirical results as lessons learned from our experience in applying this solution to testing real-world cloud health services.

Stelios Sotiriadis, Andrus Lehmets, Euripides G. M. Petrakis, Nik Bessis
102. Evaluation of High-Fidelity Mannequins in Convulsion Simulation and Pediatric CPR

This article’s objective was to evaluate the resources and fidelity of the SimBaby mannequin for Cardiopulmonary Resuscitation (CPR) and pediatric convulsion training. Simulation-based training frequently resorts to high-fidelity mannequins for practicing CPR skills. However, if the simulated patient is not realistic enough, the learning process is compromised, and evaluations of such mannequins’ realism are scarce in the specialized literature. Methodology: a group of engineers, physicians, and nurses was chosen for the proposed evaluation. First, some members of the team were designated as first-aiders for the infant’s treatment; the recognition of convulsion signs and of the need for CPR was assessed using algorithms created for this project. In the second stage of the evaluation, the first-aiders executed incorrect CPR maneuvers to evaluate the accuracy of the mannequin’s feedback. Results: the first-aiders recognized the pulse and cyanosis, but did not recognize the convulsion and the inadequate perfusion; the simulator does not distinguish correct from incorrect maneuvers. Conclusion: SimBaby exhibits high technology, but lacks realism when simulating convulsions and in its feedback to CPR.

Paôla de O. Souza, Alexandre C. B. Ramos, Leticia H. Januário, Ana A. L. Dias, Cristina R. Flôr, Heber P. Pena, Helen C. T. C. Ribeiro, Júlio C. Veloso, Milla W. Fiedler
103. wCReF – A Web Server for the CReF Protein Structure Predictor

The prediction of protein tertiary structure is a problem of Structural Bioinformatics still unsolved by science. The challenge is to understand the relationship between the amino acid sequence of a protein and its three-dimensional structure, which is related to the function of these macromolecules. Among the methods for protein structure prediction is CReF (Central Residue Fragment-based Method), proposed by Dorn & Norberto Souza. Here we present wCReF, the web interface for the CReF method, developed with a focus on usability. With this tool, users can enter the amino acid sequence of their target protein and obtain its approximate 3D structure without the need to install the multitude of tools otherwise necessary. In order to create an interface that takes usability into consideration, we conducted a study analyzing the usability of similar servers (I-TASSER, QUARK and Robetta), guided by experts in both the Human-Computer Interaction and Bioinformatics domains, using Nielsen’s Heuristic Evaluation method. The evaluation results served as guidance for designing the key features that wCReF must have and for developing the first version of the interface. As a final product we present the wCReF protein structure prediction server. Furthermore, this study can contribute to improving the usability of existing bioinformatics applications, the prediction servers analyzed, and the development of new scientific tools.

Vanessa Stangherlin Machado, Michele dos Santos da Silva Tanus, Walter Ritzel Paixão-Cortes, Osmar Norberto de Souza, Márcia de Borba Campos, Milene Selbach Silveira
104. Automating Search Strings for Secondary Studies

Background: secondary studies (SSs), in the form of systematic literature reviews and systematic mappings, have become a widely used evidence-based methodology to create a classification scheme and structure research fields, thus giving an overview of what has been done in a given research field. Problem: often, the conduction of high-quality SSs is hampered by the difficulties that stem from creating a proper “search string”. Creating sound search strings entails an array of skills and domain knowledge. Search strings are ill-defined for a number of reasons; two common ones are (i) insufficient domain knowledge and (ii) time and resource constraints. When ill-defined search strings are used to carry out SSs, a potentially high number of pertinent studies is likely to be left out of the analysis. Method: to overcome this limitation, we propose an approach that applies a search-based algorithm called Hill Climbing to automate this key step in the conduction of SSs: search string generation and calibration. Results: we conducted an experiment to evaluate our approach in terms of sensibility and precision. The results suggest that the precision and sensibility of our approach are 25.2% and 96.2%, respectively. Conclusion: the results are promising, given that our approach was able to generate and calibrate suitable search strings to support researchers during the conduction of SSs.

Francisco Carlos Souza, Alinne Santos, Stevão Andrade, Rafael Durelli, Vinicius Durelli, Rafael Oliveira
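
A hedged sketch of how a Hill Climbing search could generate and calibrate a search string against a quasi-gold set of known relevant studies; the term pool, corpus representation, and fitness function (F-measure of precision and sensibility) are assumptions, not the authors’ tool.

```python
import random

def fitness(term_subset, corpus, gold_ids):
    """Score a candidate search string (an OR of terms) by how well it
    retrieves a quasi-gold set of relevant studies.

    corpus   : dict study_id -> lowercase title+abstract text
    gold_ids : set of study_ids known to be relevant
    Returns the F-measure of precision and sensibility (recall).
    """
    retrieved = {sid for sid, text in corpus.items()
                 if any(t in text for t in term_subset)}
    if not retrieved or not gold_ids:
        return 0.0
    tp = len(retrieved & gold_ids)
    if tp == 0:
        return 0.0
    precision = tp / len(retrieved)
    recall = tp / len(gold_ids)
    return 2 * precision * recall / (precision + recall)

def hill_climb(term_pool, corpus, gold_ids, iters=200, seed=42):
    """Start from a random subset of terms and greedily toggle one term at a
    time (add or drop it), keeping the change only when fitness improves."""
    rng = random.Random(seed)
    current = {t for t in term_pool if rng.random() < 0.5} or {term_pool[0]}
    best = fitness(current, corpus, gold_ids)
    for _ in range(iters):
        candidate = current ^ {rng.choice(term_pool)}   # toggle one term
        if not candidate:
            continue
        score = fitness(candidate, corpus, gold_ids)
        if score > best:
            current, best = candidate, score
    return sorted(current), best
```
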
105. Visual Approach to Boundary Detection of Clusters Projected in 2D Space

Data mining tasks are commonly employed to aid users in both dataset organization and classification. Clustering techniques are important tools among data mining techniques because no prior class information is necessary: unlabeled datasets can be clustered based only on their attributes or distance matrices. In recent years, visualization techniques have been employed to show graphical representations of datasets. One class of techniques, known as multidimensional projection, can be employed to project datasets from a high-dimensional space to a lower-dimensional space (e.g., 2D space). Like clustering techniques, multidimensional projection techniques present dataset relationships based on distance, by grouping or separating clusters of instances in the projected space. Usually, it is difficult to detect the boundary between distinct clusters presented in 2D space, since they may be projected close together or overlapping. Therefore, this work proposes a new visual approach for boundary detection of clusters projected in 2D space. To this end, the attribute behavior is mapped to graphical representations based on lines or colors; images are computed for each instance, and the graphical representation is used to discriminate the boundaries of distinct clusters. In the experiments, the color mapping presented the best results because it is supported by the user’s pre-attentive perception, allowing boundary detection at a glance.

Lenon Fachiano Silva, Danilo Medeiros Eler
106. Opposition-Based Particle Swarm Optimization Algorithm with Self-adaptive Strategy

To address the problems that the standard Particle Swarm Optimization (PSO) algorithm tends to fall into premature convergence and suffers from slow convergence velocity and low precision in the late evolutionary stage, an Opposition-based Particle Swarm Optimization Algorithm with Self-adaptive Strategy (SAOPSO) is proposed. The algorithm uses an adaptive inertia weighting strategy to balance global search and local exploration. Meanwhile, an opposition-based learning strategy is adopted on the elite particle population, which improves the learning ability of the particles, expands the search space, and enhances the global search capability. In order to avoid falling into a local optimum, which may cause search stagnation, a Cauchy mutation strategy with an adaptive probability value is presented to perturb the current global optimal particle. The SAOPSO algorithm is compared with other improved PSO variants on 5 classic benchmark functions, and the experimental results show that SAOPSO improves convergence speed and solution accuracy.

Xuehan Qin, Yi Xu
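
To illustrate two ingredients named above, the sketch below shows an elite opposition-based learning step (here taken with respect to fixed search bounds) and a linearly decreasing, self-adaptive inertia weight; it is a simplified illustration under those assumptions, not the SAOPSO algorithm itself.

```python
import numpy as np

def elite_opposition_step(positions, costs, objective, lb, ub, elite_frac=0.2):
    """Apply opposition-based learning to the elite fraction of a swarm.

    For each elite particle x, the opposite point is lb + ub - x (taken here
    with respect to the fixed search bounds); the better of the pair is kept.
    """
    n_elite = max(1, int(elite_frac * len(positions)))
    for i in np.argsort(costs)[:n_elite]:
        opposite = lb + ub - positions[i]
        opp_cost = objective(opposite)
        if opp_cost < costs[i]:
            positions[i], costs[i] = opposite, opp_cost
    return positions, costs

def adaptive_inertia(iteration, max_iter, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: favors global search early
    and local exploitation late in the run."""
    return w_max - (w_max - w_min) * iteration / max_iter

# toy usage on the sphere function
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    lb, ub = -5.0, 5.0
    sphere = lambda x: float(np.sum(x ** 2))
    pos = rng.uniform(lb, ub, size=(10, 3))
    cost = np.array([sphere(p) for p in pos])
    pos, cost = elite_opposition_step(pos, cost, sphere, lb, ub)
```
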

Education, Applications and Systems

Frontmatter
107. Recruitment Drive Application

Today’s recruitment applications are designed not only to reduce paperwork but also to make a significant contribution to a company’s marketing and sales activity. Recruitment websites and software make it possible for managers to access information crucial to managing their staff, which they can use for promotion decisions, payroll considerations, and hiring. We present a web-based solution to the recruitment process for small companies in which job seekers (users) can register for a posted job position, manage their information, and be informed of the various steps of the hiring process. The project addresses three categories of people: a single admin, a number of staff personnel, and a very large number (about 1000) of applicants. This software is very useful to small companies: it reduces paperwork, takes less time, is very transparent to the users, and is available through the Internet.

Raghavendar Cheruku, Doina Bein, Wolfgang Bein, Vlad Popa
108. Evaluating Assignments Using Grading App

Grading App is a cross-platform desktop application for Windows, Linux, and Mac OS X for evaluating student assignments and providing them with a grade and detailed feedback on TITANium. The Grading App can be used for any course in any department. A faculty member used the app successfully to grade three sets of assignments (in Fall 2015) and two sets (in Spring 2016), and her grading time was reduced by 80% compared to manual grading, while still providing detailed feedback to the students.

Nikyle Nguyen, Doina Bein
109. A Heuristic for State Power Down Systems with Few States

Power-down mechanisms are well known and widely used to save energy. We consider a device which has states OFF, ON, and a fixed number of intermediate states. The state of the device can be switched at any time. In the OFF state the device consumes zero energy, and in the ON state it works at its full power consumption. The intermediate states consume only a fraction of the energy, proportional to the usage time, but switching back to the ON state incurs a different constant setup cost depending on the current state. We give a new heuristic to construct optimal power-down systems with few states. The heuristic converges very quickly to an optimal solution.

James Andro-Vasko, Wolfgang Bein, Hiro Ito, Govind Pathak
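
For context, the following sketch simulates a simple three-state power-down system of the kind considered above, comparing a threshold-based online strategy with the offline optimum for a single idle period; the state parameters and thresholds are invented for illustration and do not reproduce the authors’ heuristic.

```python
def offline_cost(t, states):
    """Cheapest way to serve an idle period of known length t:
    pick the single best state for the whole period.
    states: list of (power_rate, wake_up_cost), including ON = (1.0, 0.0)."""
    return min(rate * t + wake for rate, wake in states)

def online_cost(t, schedule):
    """Threshold strategy: stay in each state until its threshold expires,
    then drop to the next deeper state; pay the wake-up cost of whichever
    state the device is in when the idle period ends.
    schedule: list of (threshold, power_rate, wake_up_cost), ordered from
    ON downwards; the last threshold may be float('inf')."""
    cost, elapsed = 0.0, 0.0
    for threshold, rate, wake in schedule:
        span = min(t - elapsed, threshold)
        cost += rate * span
        elapsed += span
        if elapsed >= t:
            return cost + wake        # wake up from the current state
    return cost

# three-state example: ON, an intermediate sleep state, and OFF
states = [(1.0, 0.0), (0.4, 2.0), (0.0, 5.0)]
schedule = [(2.0, 1.0, 0.0), (5.0, 0.4, 2.0), (float("inf"), 0.0, 5.0)]
for t in (1.0, 4.0, 20.0):
    print(t, online_cost(t, schedule), offline_cost(t, states))
```
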
110. Internet Addiction in Kuwait and Efforts to Control It

Internet addiction has reached an epidemic level worldwide. Since the 1990s, the Internet has exploded to become an important part of our daily lives. It is best described as a double-edged sword: on one side, it brings the whole world to our fingertips; on the other, excessive use can and will lead to a state of mental and psychological disorder, hence the term Internet Addiction Disorder (IAD). Kuwait, a small nation among the Arab Gulf countries, was the first in the area to shed light on the problem, in 2009, by conducting a public awareness campaign in the traditional media. Since then, the government, along with other organizations, has taken some measures to control this disorder without success, because most of the measures were restrictive in nature rather than positive. This paper revisits Internet addiction among university students in Kuwait eight years after the first paper was published, using the Internet Addiction Test (IAT) to measure the level of awareness and the percentage of highly addicted students compared to the earlier results, and describes the efforts taking place to control it at the government, organization, and family levels.

Samir N. Hamade
111. Approaches for Clustering Polygonal Obstacles

Clustering a set of points in Euclidean space is a well-known problem with applications in pattern recognition, document image analysis, big-data analytics, and robotics. While there are many research publications on clustering point objects, only a few articles address clustering a distribution of obstacles. In this paper we examine the development of efficient algorithms for clustering a set of convex obstacles in the 2D plane. We present two approaches for developing efficient algorithms for extracting polygonal clusters. While the first method is based on nearest neighbor computation, the second uses the reduced visibility graph induced by the polygonal obstacles. We also consider extensions of the proposed algorithms to non-convex polygonal obstacles.

Laxmi P. Gewali, Sabbir Manandhar
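
As a hedged illustration of the nearest-neighbor flavor of obstacle clustering mentioned above (not the authors’ algorithms), the following sketch groups convex polygons whose approximate pairwise distance falls below a threshold; the distance approximation and threshold are assumptions.

```python
from itertools import combinations
import math

def polygon_distance(p, q):
    """Distance between two convex polygons, approximated here by the
    minimum vertex-to-vertex distance (adequate for an illustration;
    an exact version would also consider vertex-to-edge distances)."""
    return min(math.dist(a, b) for a in p for b in q)

def cluster_obstacles(polygons, threshold):
    """Single-linkage grouping: obstacles closer than `threshold` end up
    in the same cluster (union-find over the 'close' relation)."""
    parent = list(range(len(polygons)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i, j in combinations(range(len(polygons)), 2):
        if polygon_distance(polygons[i], polygons[j]) <= threshold:
            parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(polygons)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# two nearby triangles and one far-away square
polys = [[(0, 0), (1, 0), (0, 1)],
         [(1.5, 0), (2.5, 0), (1.5, 1)],
         [(10, 10), (11, 10), (11, 11), (10, 11)]]
print(cluster_obstacles(polys, threshold=1.0))   # -> [[0, 1], [2]]
```
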

Short Papers

Frontmatter
112. Usability in Computer Security Software

Nowadays, although software qualities such as usability and security are both considered desirable, there are schools of thought that regard these two qualities as adversarial: as the standard of one increases, the other tends to decrease. For this reason, it is important to evaluate the usability characteristics of software whose operation focuses on information security, through a measurement instrument such as heuristic evaluation, in order to determine the level of usability in this type of software.

Claudio Casado Barragán, Miguel Palma Esquivel, Cristián Barría Huidobre, Cristián Rusu, Cesar Collazo, Clara Burbano
113. Techniques for Detecting, Preventing and Mitigating Distributed Denial of Service (DDoS) Attacks

Even though the Internet appears to be one of the most successful phenomena of globalization today, web applications, services, and servers are challenged by multiple vulnerabilities and penetrations. These security flaws can easily be exploited by malicious actors who use malware to launch DDoS attacks that damage critical infrastructure in small and large businesses, putting their productivity and trust at risk. This paper offers methods that the public and private sectors can consider to lessen the damage caused by DDoS attacks. The detective techniques will help uncover early signs of malicious activity in the organization’s network. The preventive ones will ensure all methods have been implemented to stop the intrusion from happening. Findings demonstrate that a mitigation mechanism can only be effective in combination with detective and preventive methods. It is vital to keep in mind that attackers are busy developing sophisticated tools to disrupt services and damage systems, making traditional security tools ineffective; they need to be replaced by robust security technologies to protect networked systems efficiently, as presented in this research. Security awareness, as an important network security practice, will educate non-IT professionals, serve as a reminder to IT professionals, and help thwart insider threats. When all of these are successfully implemented, an attacker’s chances of launching a successful distributed denial-of-service attack are reduced by 2%.

Judith Clarisse Essome Epoh
114. Next-Generation Firewalls: Cisco ASA with FirePower Services

This paper aims to provide a comprehensive review of the Cisco ASA next-generation firewall with FirePower Services. The product is introduced in terms of its purpose and the features for which an organization would want to deploy it as a security product in an enterprise or other large-scale network. The paper gives insight into the technology behind the Cisco ASA as well as the additional features that FirePower Services adds. I cover some of the strengths of the FirePower platform that Cisco offers, such as signature-based threat detection and Snort, as well as some of the limitations the platform has compared to other leading vendors in the network security world, such as the lack of SSL inspection at the time of this writing.

Taylor J. Transue
115. DDoS Attacks: Defending Cloud Environments

The migration of many organizations to cloud-computing environments alters the traditional definition of Distributed Denial of Service (DDoS) targets. This paper presents several techniques to defend against, prevent, and mitigate DDoS attacks in cloud computing environments. The methods presented examine integrating the network itself into the protection solution with technologies such as target shuffling, filtering network traffic with BGP, and implementing predictive algorithms for future attacks.

Michele Cotton
116. Cyber Security Policies for Hyperconnectivity and Internet of Things: A Process for Managing Connectivity

Hyperconnectivity and the Internet of Things are changing the landscape of Information Technology (IT). Architectures are becoming more sophisticated while cyber security is slowly adapting to this shift toward Internet-enabled technologies in the automotive, industrial, consumer, and networking domains. This slow adoption of proper security controls, defenses, and aggressive measures is leaving individuals vulnerable. This submission explores how policies can be created that automate the process of device connectivity, and how current frameworks can be used to minimize system risks.

Maurice Dawson
117. DDoS Countermeasures

This paper explores three published research articles demonstrating solutions to detect, prevent, and mitigate Distributed Denial of Service (DDoS) attacks. These articles stress the critical severity of DDoS attacks impacting businesses and entities on a worldwide scale and present effective strategies to counter such threats. Cepheli et al. (Journal of Electrical & Computer Engineering 2016: 1–8, 2016) propose a Hybrid Intrusion Detection System (H-IDS) combining anomaly-based and signature-based intrusion detection mechanisms. Kalkan and Alagoz (Computer Networks: The International Journal of Computer and Telecommunications Networking 108:199, 2016) present an active filtering mechanism to stop a DDoS attack at its origin. Gupta et al. (International Journal of Computer and Electrical Engineering (IJCEE) 2:268–276, 2010) stress the importance of implementing defense-in-depth strategies and maintaining fundamental security processes. The solutions in these articles are demonstrated in testing and well documented to be highly successful.

Michael Wisthoff
118. Removal of Impulsive Noise From Long Term Evolution Handset

Long Term Evolution (LTE) handsets operate in frequency bands that are affected by impulsive noise (IN). A double detection method is used to remove IN from the LTE handset. This method builds on the conventional threshold method, in which threshold selection is the key aspect. LTE handsets have two transmitting and receiving antennas, so the output is obtained by comparing the two different signals received at the receiver. The better signal within the threshold level is mainly considered.

Lina Pawar, D. G. Khairnar, Akhil Sureshrao Kulkarni
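
A minimal sketch of threshold-based impulsive-noise suppression in the spirit of the method summarized above; the threshold rule, the blanking choice, and the toy two-antenna comparison are assumptions rather than the paper’s exact double detection method.

```python
import numpy as np

def blank_impulsive_noise(x, k=4.0):
    """Simple threshold-based impulsive-noise suppression: samples whose
    magnitude exceeds k times a robust amplitude estimate (the median
    absolute value) are blanked (set to zero). A clipping variant would
    instead limit them to +/- threshold."""
    threshold = k * np.median(np.abs(x))
    y = x.copy()
    y[np.abs(y) > threshold] = 0.0
    return y

def select_better_branch(x1, x2, k=4.0):
    """Toy two-antenna selection: prefer the branch in which fewer samples
    exceed the impulsive-noise threshold (a stand-in for comparing the two
    received signals mentioned above), then blank that branch."""
    hits1 = np.sum(np.abs(x1) > k * np.median(np.abs(x1)))
    hits2 = np.sum(np.abs(x2) > k * np.median(np.abs(x2)))
    return blank_impulsive_noise(x1, k) if hits1 <= hits2 else blank_impulsive_noise(x2, k)
```
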
119. A Synchronization Rule Based on Linear Logic for Deadlock Prevention in Interorganizational WorkFlow Nets

This paper presents an approach based on the analysis of Linear Logic proof trees and Interorganizational WorkFlow nets (IOWF-nets) theory to prevent deadlock situations in interorganizational workflow (IOWF) processes. An IOWF can be locally sound but not globally sound: deadlock situations in IOWF processes then come from message ordering mismatches between several local workflow processes. Scenarios of IOWF-nets can be characterized by sequents of Linear Logic, and the analysis of Linear Logic proof trees can be used to detect communication places between distinct WorkFlow nets (WF-nets) that introduce deadlock situations. In this paper, a synchronization rule is proposed in order to prevent deadlock situations caused by the asynchronous communication mechanisms used to allow collaboration among individual workflow processes in IOWF processes that are locally sound but not necessarily globally sound.

Vinícius Ferreira de Oliveira, Stéphane Julia, Lígia Maria Soares Passos, Kênia Santos de Oliveira
120. Summary Report of Experimental Analysis of Stemming Algorithms Applied to Judicial Jurisprudence

Stemming algorithms are commonly used during the textual preprocessing phase in order to reduce data dimensionality. However, this reduction shows different levels of efficacy depending on the domain to which it is applied. Hence, this work is an experimental analysis of the dimensionality reduction obtained by stemming a real base of judicial jurisprudence formed by four subsets of documents. With such a document base, it is necessary to adopt techniques that increase the efficiency of storing and searching for such information; otherwise there is a loss of both computing resources and access to justice, as stakeholders may not find the documents they need to plead their rights. The results show that, among the stemming algorithms analyzed, the RSLP algorithm was the most effective in terms of dimensionality reduction in the four collections studied.

Robert A. N. de Oliveira, Methanias C. Junior
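
As an illustration of measuring the dimensionality reduction that stemming provides, the sketch below uses NLTK’s RSLPStemmer (an implementation of the RSLP algorithm for Portuguese) on a tiny invented document set; the tokenization and corpus are assumptions, not the paper’s jurisprudence collections.

```python
import re
import nltk
from nltk.stem import RSLPStemmer     # Portuguese RSLP stemmer shipped with NLTK

nltk.download("rslp", quiet=True)     # stemmer rules (one-time download)

def vocabulary_reduction(documents):
    """Measure the vocabulary (dimensionality) reduction obtained by stemming.

    documents : list of raw Portuguese text strings (e.g. jurisprudence docs).
    Returns (raw_vocab_size, stemmed_vocab_size, reduction_ratio).
    """
    stemmer = RSLPStemmer()
    tokens = [t for doc in documents
              for t in re.findall(r"[a-záéíóúâêôãõàç]+", doc.lower())]
    raw_vocab = set(tokens)
    stemmed_vocab = {stemmer.stem(t) for t in raw_vocab}
    reduction = 1.0 - len(stemmed_vocab) / max(1, len(raw_vocab))
    return len(raw_vocab), len(stemmed_vocab), reduction

# toy usage on two invented sentences
docs = ["O tribunal julgou procedentes os pedidos formulados pelas partes.",
        "Os julgamentos anteriores formularam pedido semelhante."]
print(vocabulary_reduction(docs))
```
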
121. Applying Collective Intelligence in the Evolution of a Project Architecture Using Agile Methods

This paper presents research using Collective Intelligence combined with Agile Methods and their best practices to assist in the creation, adaptation, and evolution of the design architecture for a data analysis system. To achieve this, three courses from the graduate program in Electronic and Computer Engineering at the Brazilian Aeronautics Institute of Technology (Instituto Tecnologico de Aeronautica – ITA) were integrated: CE-240 Database Systems Project; CE-245 Information Technologies; and CE-229 Software Testing.

Ciro Fernandes Matrigrani, Talita Santos, Lineu Fernando Stege Mialaret, Adilson Marques da Cunha, Luiz Alberto Vieira Dias
122. Development of Human Faces Retrieval in a Big Photo Database with SCRUM: A Case Study

In this paper we present a case study of the use of SCRUM in an applied research scenario, consisting of the detection and identification of human faces in a large database of photos. The study shows how the adoption of agile methods was important for dealing with uncertainty and unstable requirements, allowing adaptable development with the opportunity to learn from the development process. As our research advanced, the implementation of experiments revealed the applicability of techniques to support the solution of the requirements. All sorts of problems with illumination, pose, low image quality, and other types of noise were present as obstacles to accuracy in the different approaches to detection and recognition. Fast retrieval of similar faces was also necessary. In the end, the requirements were successfully tackled by developing an efficient and compact implementation of face alignment, feature extraction, and search.

Thoris Angelo Pivetta, Carlos Henrique Quartucci Forster, Luiz Alberto Vieira Dias, Eduardo Martins Guerra
123. An Agile Developed Interdisciplinary Approach for Safety-Critical Embedded System

Accidents and crises, whether climatic, economic, or social, are undesirably frequent in everyday life. In such situations, lives are sometimes lost because of inadequate management, lack of qualified and accurate information, and other factors that prevent full situational awareness. The goal of this work is to report on an academic conceptualization, design, build, test, and demonstration of computer systems to manage critical information during hypothetical crises. During the development of an academic system in the second semester of 2015 at the Brazilian Aeronautics Institute of Technology, the following challenges arose: strict specifications, agile methods, embedded systems, software testing, and product assessment. In addition, some quality, reliability, safety, and testability measurements were used. An Interdisciplinary Problem-Based Learning (IPBL) approach was applied, adding hardware technologies such as environmental sensors, Radio Frequency Identification (RFID), and Unmanned Aerial Vehicles (UAVs). Software technologies were used for a cloud-based, web-responsive platform and a mobile application to geographically manage resources in real time. Finally, the ANSYS® SCADE (Safety-Critical Application Development Environment) was employed to support the embedded and safety-critical portion of this system.

Gildarcio Sousa Goncalves, Rafael Shigemura, Paulo Diego da Silva, Rodrigo Santana, Erlon Silva, Alheri Dakwat, Fernando Miguel, Paulo Marcelo Tasinaffo, Adilson Marques da Cunha, Luiz Alberto Vieira Dias
124. The Implementation of the Document Management System “DocMan” as an Advantage for the Acceleration of Administrative Work in Macedonia

The history of e-government in Macedonia goes back to 2000, with the laws on electronic data and on the electronic signature. Since then, the Republic of Macedonia has developed significantly in terms of e-government. Almost all governmental institutions in Macedonia have built applications that ease the process of dealing with citizens. In this paper we analyze the implementation of the “DocMan” application, a document management system, in the Secretariat for European Affairs.

Daut Hajrullahi, Florim Idrizi, Burhan Rahmani
Backmatter
Metadata
Title
Information Technology - New Generations
Editor
Prof. Shahram Latifi
Copyright Year
2018
Electronic ISBN
978-3-319-54978-1
Print ISBN
978-3-319-54977-4
DOI
https://doi.org/10.1007/978-3-319-54978-1
