2018 | Book

Security with Intelligent Computing and Big-data Services

About this book

In the dawning era of Intelligent Computing and Big-data Services, security issues will be an important consideration in promoting these new technologies into the future. This book presents the proceedings of the 2017 International Conference on Security with Intelligent Computing and Big-data Services, the Workshop on Information and Communication Security Science and Engineering, and the Workshop on Security in Forensics, Medical, and Computing Services and Applications. The topics addressed include: Algorithms and Security Analysis, Cryptanalysis and Detection Systems, IoT and E-commerce Applications, Privacy and Cloud Computing, Information Hiding and Secret Sharing, Network Security and Applications, Digital Forensics and Mobile Systems, Public Key Systems and Data Processing, and Blockchain Applications in Technology. The conference is intended to promote healthy exchanges between researchers and industry practitioners regarding advances in the state of the art of these security issues. The proceedings not only highlight novel and interesting ideas but will also stimulate further discussion and inspire new research directions.

Table of Contents

Frontmatter

Algorithms and Security Analysis

Frontmatter
A Social Tagging Recommendation Model Based on Improved Artificial Fish Swarm Algorithm and Tensor Decomposition

Folksonomy Tag Application (FTA) has emerged as an important approach to Internet content organization. However, with the massive increase in the scale of data, the information overload problem has become more severe. Moreover, traditional personalized recommendation algorithms based on the "user-item" interaction are not easy to extend to the three-dimensional "user-item-tag" setting. This paper proposes a clustering analysis method for the initial dataset of the Tag Recommendation System (TRS) based on an improved Artificial Fish Swarm Algorithm (AFSA). The method is used for dimension reduction of TRS datasets. Considering the weight of the elements in TRS and the score that reveals user preference, a novel weighted tensor model is established, and to produce the personalized recommendation, the model is solved by a tensor decomposition algorithm with dynamic incremental updating. Finally, a comparative analysis between the proposed FTA algorithm and two classical tag recommendation algorithms is conducted on two sets of empirical data. The experimental results show that the FTA algorithm has better performance in terms of recall and precision.

Hao Zhang, Qiong Hong, Xiaomeng Shi, Jie He
Research on the Nearest Neighbor Representation Classification Algorithm in Feature Space

Representation-based classification and recognition, such as face recognition, perform strongly on high-dimensional data. However, for low-dimensional data the classification results are not satisfactory. This paper proposes a classification method based on nearest neighbor representation in feature space, which extends representation-based classification to a nonlinear feature space and also remedies its drawback in low-dimensional data processing. First, the proposed method projects the data into a high-dimensional space through a kernel function. Then, the test sample is represented by a linear combination of all training samples, and the corresponding coefficient of each training sample is obtained. Finally, the test sample is assigned to the class of the training sample with the minimum distance. The results of experiments on standard two-class datasets and the ORL and YALE face databases show that the algorithm has better classification performance.

Yan-Hong Hu, Yu-Hai Li, Ming Zhao
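To make the flow above concrete, the following is a minimal sketch (not the authors' implementation) of representation-based classification in a kernel feature space. The RBF kernel, the ridge-regularised least-squares solution for the representation coefficients, and the per-sample representation distance are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Pairwise RBF kernel values between the rows of A and the rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def representation_classify(X_train, y_train, x_test, gamma=1.0):
    """Represent phi(x_test) as a linear combination of all training samples in
    kernel space, then assign the class of the training sample whose weighted
    contribution lies closest to the test sample."""
    K = rbf_kernel(X_train, X_train, gamma)                   # training Gram matrix
    k = rbf_kernel(X_train, x_test[None, :], gamma).ravel()   # kernel values vs. test sample
    # Representation coefficients (ridge-regularised least squares).
    alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), k)
    # Squared feature-space distance between phi(x_test) and alpha_i * phi(x_i).
    dist = 1.0 - 2.0 * alpha * k + (alpha ** 2) * np.diag(K)
    return y_train[np.argmin(dist)]

# Toy two-class example.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(representation_classify(X, y, np.array([0.95, 1.0])))   # expected class: 1
```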
A New Aesthetic QR Code Algorithm Based on Salient Region Detection and SPBVM

Many aesthetic QR code algorithms have been proposed. In this paper, a new aesthetic QR code algorithm, based on salient region detection and a Selectable Positive Basis Vector Matrix (SPBVM), is proposed. Firstly, the complexity of texture features is incorporated into the saliency computation, building on an existing salient region detection algorithm. According to the saliency map, the important area of the image is preserved for the subsequent beautification operation. Then, appropriate basis vectors are selected using the proposed SPBVM according to the acquired salient region, and the salient region is displayed completely through an XOR operation performed between the generated original QR code and the selected basis vectors. Finally, the aesthetic QR code is obtained by combining the background image and the original QR code. The results show that the proposed algorithm produces a more accurate salient area and a more pleasant visual effect.

Li Li, Yanyun Li, Bing Wang, Jianfeng Lu, Shanqing Zhang, Wenqiang Yuan, Saijiao Wang, Chin-Chen Chang
Compact Cat Swarm Optimization Algorithm

A compact cat swarm optimization algorithm (cCSO) is proposed in this paper. It keeps the same search logic as cat swarm optimization (CSO), i.e. tracing mode and seeking mode. On the other hand, cCSO inherits the main feature of compact optimization algorithms: a normal probabilistic vector is used to generate new individuals, and the mean and standard deviation of the probabilistic model guide the cats' search direction in the next step. Only one cat is used in the algorithm, so it runs with modest memory requirements. Experimental results on benchmark functions show that cCSO outperforms some other compact optimization algorithms, and its convergence rate is also a highlight among them.

Ming Zhao, Jeng-Shyang Pan, Shuo-Tsung Chen

Cryptanalysis and Detection Systems

Frontmatter
Copy-Move Forgery Detection Based on Local Gabor Wavelets Patterns

Nowadays digital images are increasingly easy to modify or tamper with intentionally, due to the rapid development of powerful image processing software. Various methods of digital image forgery exist, such as image splicing, copy-move forgery, and image retouching. Copy-move is one of the typical image forgery methods, in which a part of an image is duplicated and used to replace another part of the same image at a different location. In this paper, we propose a block-based passive copy-move forgery detection method based on local Gabor wavelets patterns (LGWP), which combines the high-performance texture analysis of the Gabor filter with the rotation-invariance of the uniform local binary pattern (LBP). Experimental results demonstrate the ability of the proposed method to detect copy-move forgery and precisely locate the duplicated regions, even when the forged images are distorted by JPEG compression, blurring, brightness adjustment, and rotation.

Chao-Lung Chou, Jen-Chun Lee
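As a hedged illustration of the texture descriptor the abstract builds on, the sketch below computes plain 8-neighbour LBP codes and the 59-bin uniform-pattern histogram. The Gabor filtering stage and the block matching of the LGWP method are omitted, and the image size is an arbitrary example.

```python
import numpy as np

def lbp8(image):
    """Basic 8-neighbour LBP codes for the interior pixels of a grayscale image."""
    c = image[1:-1, 1:-1]
    # Neighbours, clockwise starting from the top-left pixel.
    neighbours = [image[0:-2, 0:-2], image[0:-2, 1:-1], image[0:-2, 2:],
                  image[1:-1, 2:],  image[2:, 2:],     image[2:, 1:-1],
                  image[2:, 0:-2],  image[1:-1, 0:-2]]
    code = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        code |= ((n >= c).astype(np.int32) << bit)
    return code

def uniform_histogram(codes):
    """59-bin histogram: 58 'uniform' patterns (<= 2 bit transitions) + 1 bin for the rest."""
    def transitions(v):
        bits = [(v >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    uniform = [v for v in range(256) if transitions(v) <= 2]
    index = {v: i for i, v in enumerate(uniform)}          # 58 uniform bins
    hist = np.zeros(59)
    for v in codes.ravel():
        hist[index.get(int(v), 58)] += 1
    return hist / hist.sum()

img = (np.random.rand(16, 16) * 255).astype(np.uint8)
print(uniform_histogram(lbp8(img)).shape)   # (59,)
```

In a block-based detector, one such histogram would typically be computed per overlapping block and similar histograms matched to locate duplicated regions.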
A Hybrid Intrusion Detection System for Contemporary Network Intrusion Dataset

We propose a hybrid intrusion detection approach to detect network anomalies. The proposed approach uses a feature discretization method and a cluster analysis algorithm to separate the training samples into two groups, a normal group and an anomaly group, and then a new classification model is built to improve the performance of the sub-group classification. We discretize the features of training samples with a method that considers the interdependence between features and labels. Class information is added into the attributes to enhance the clustering results. For the anomaly group, several representative features are selected to construct a classification model that improves the overall classification performance. Two efficient machine learning algorithms, the Decision Tree algorithm and the Bayesian Network algorithm, are adopted in our experiments. The experimental results show that our method increases the normal and anomaly detection rates, precision, and accuracy. For the classification of new types of modern attacks, our approach also improves the overall accuracy.

Jheng-Mo Liao, Jui-Sheng Liu, Sheng-De Wang
Mitigating DoS Attacks in SDN Using Offloading Path Strategies

Software-Defined Networks (SDNs) were created to facilitate the management and control of the network. However, the security problem is still unresolved. To avoid DoS attacks caused by links exceeding their bandwidth load (such as traffic flooding and security loopholes), the simplest mitigation is to offload the data by transferring it to other links. However, this transfer could in turn lead to high bandwidth loads on other links. To overcome this problem, this paper proposes a method called "Avoid Passing High Utilization Bandwidth (APHUB)," which aims to (1) prevent the offloaded data from putting additional load on already highly utilized links and (2) find a suitable new path. A comparison of the maximum bandwidth utilization of the proposed method against other algorithms showed that this method consistently produced the smallest bandwidth utilization; we thus consider it a better mitigation method than those presented previously.

Tai-Siang Huang, Po-Yang Hsiung, Bo-Chao Cheng
An Extension of Attack Trees

Attack trees provide a model to describe the security of a system based on the possibility of various attacks. In this paper, we propose the concept of “attack graphs” as an extension of attack trees, wherein directed acyclic graphs are used to depict possible attacks on a system. By deploying this model, system managers can discern all possible threats to the system and thus are more likely to design efficient countermeasures to thwart those attacks. Within this model, we also propose the concept of the most dangerous path in the attack graph, and finally propose an algorithm to expose it.

Yi-Chih Kao, Yuan-Ping Hwang, Shih-Chen Wang, Sheng-Lung Peng
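The abstract does not define its danger metric, so the following sketch simply assumes each edge of the attack graph carries a numeric danger weight and treats the most dangerous path as the maximum-weight source-to-target path in the DAG, found with a topological-order dynamic program. These are illustrative assumptions, not the authors' algorithm.

```python
from collections import defaultdict

def most_dangerous_path(edges, source, target):
    """Maximum-weight path in a DAG.  edges: list of (u, v, weight)."""
    graph, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, w in edges:
        graph[u].append((v, w))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm for a topological order.
    order, queue = [], [n for n in nodes if indeg[n] == 0]
    while queue:
        u = queue.pop()
        order.append(u)
        for v, _ in graph[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    # Dynamic program: best[v] = largest accumulated danger from source to v.
    best = {n: float("-inf") for n in nodes}
    best[source], parent = 0.0, {}
    for u in order:
        if best[u] == float("-inf"):
            continue
        for v, w in graph[u]:
            if best[u] + w > best[v]:
                best[v], parent[v] = best[u] + w, u
    path, n = [], target
    while n != source:
        path.append(n)
        n = parent[n]
    return [source] + path[::-1], best[target]

edges = [("entry", "web", 0.6), ("entry", "mail", 0.3),
         ("web", "db", 0.8), ("mail", "db", 0.9)]
print(most_dangerous_path(edges, "entry", "db"))   # (['entry', 'web', 'db'], 1.4)
```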
Face Detection in a Complex Background Using Cascaded Convolutional Networks

Although significant achievements have been made in the field of face detection recently, face detection against a complex background is still a challenging issue. Face detection has wide applications in real life, such as face recognition attendance systems and crowd size estimation. In this paper, we propose a novel cascaded framework to tackle challenges arising from blur, illumination, pose, expression, and occlusion. Our framework adopts the localization of facial landmarks to boost performance. In addition, our detector extracts features from different layers of a deep residual network to obtain complementary information from low-dimensional and high-dimensional features. Our method achieves notable results over state-of-the-art techniques on the challenging WIDER FACE benchmark for face detection, with an average precision of 89.2%. Importantly, we demonstrate superior performance and robustness in a challenging environment.

Jianjun Li, Juxian Wang, Chin-Chen Chang, Zhuo Tang, Zhenxing Luo
Cryptanalysis on the Anonymity of Li et al.’s Ciphertext-Policy Attribute-Based Encryption Scheme

Attribute-based encryption is a very powerful primitive in public-key cryptography. It can be adopted in many applications, such as cloud storage. To further protect the privacy of users, anonymity has been considered an important property of attribute-based encryption. In an anonymous attribute-based encryption scheme, the access structure of a ciphertext is hidden from users. In this paper, we present an attack against Li et al.'s anonymous attribute-based encryption schemes. The proposed attack uses an "invalid attribute key" to recover the hidden access structure of a given ciphertext. No information about the master secret key or private keys is necessary in our attack.

Yi-Fan Tseng, Chun-I Fan
Overlapping Community Detection with Two-Level Expansion by Local Clustering Coefficients

Community detection is crucial to Social Network Analysis (SNA) in that it helps to discover high-density overlapping communities hidden in complex networks for advanced applications. This study proposes a novel community detection method based on seed set expansion. The method gathers meaningful nodes into a seed set, which is then used as a set of central nodes to merge neighbor nodes until communities are found. To enhance efficiency, a two-level expansion approach was further developed, which adopts the 80/20 rule and a changing threshold in order to discover cohesive subgroups of smaller sizes. To detect overlapping communities, local clustering coefficients (LCC) are calculated to measure the interaction density between neighbor nodes and determine whether they should be expanded. The experimental results were evaluated by measuring the cohesion quality of the communities.

Yi-Jen Su, Che-Chun Lee
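A small sketch of the local clustering coefficient used in the expansion decision above; the adjacency-dictionary representation and the example graph are illustrative, and the seed selection and 80/20 two-level expansion logic of the paper are not reproduced.

```python
from itertools import combinations

def local_clustering_coefficient(adj, node):
    """LCC(v) = existing links among v's neighbours / possible links among them."""
    neighbours = adj.get(node, set())
    k = len(neighbours)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(neighbours, 2) if b in adj.get(a, set()))
    return 2.0 * links / (k * (k - 1))

# Tiny undirected graph given as an adjacency dictionary.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
print(local_clustering_coefficient(adj, 1))   # 2 of 3 neighbour pairs linked -> ~0.667
```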

IoT and E-commerce Applications

Frontmatter
Writing Security Specification with Things That Flow

In the field of security, writing a Request For Proposals (RFP) includes a description of specifications that requires careful definition of problems and an overview of how the system works. An important aspect in this context is how to generate technical specifications within the RFP. This “specification writing” is a complex subject that causes even design professionals such as architects and engineers to struggle. Typically an RFP is described in English, with graphs and tables, resulting in imprecise specifications of requirements. It has been proposed that conceptual representation such as UML diagrams and BPMN notations be included in any RFP. This paper examines RFP development of Public Key Infrastructure (PKI) and proposes a conceptual depiction as a supplement to the RFP to clarify requirements more precisely than traditional tools such as natural language, tables, and ad hoc graphs. A case study of an actual government ministry is presented with a model, i.e., diagrams that express how the features and services of PKI would logically operate in the requisite system.

Sabah Al-Fedaghi, Omar Alsumait
Automatically Generating Aspect Taxonomy for E-Commerce Domains to Assist Sentiment Mining

Numerous reviews are available online for many domains, and increasingly even for singular products. In this scenario, aspect associations to domains can be made extensive. Instead of generating aspects from the training set of reviews for a domain, the task of aspect generation is pushed onto an automated taxonomy generation system. Based on certain user input parameters, the taxonomy is expanded using an unsupervised web crawl of E-Commerce Website(s). The aspect taxonomy can be used to assist researchers in annotation of reviews to use for training classifiers for sentiment analysis, and for visualization of sentiment analysis results.

Nachiappan Chockalingam
History Management for Network Information of IoT Devices

In an Internet of Things (IoT) environment, forensics is commonly used to perform incident analysis through network communication data and the memory and logs existing in a device. Network traffic and memory are volatile data, however, and IoT device logs pose difficulties for information retrieval, as opposed to a PC environment, due to device and environmental constraints. To address this, we discuss history management of network information for analyzing an incident. History management is performed on 13 items, including IP, firmware version, port number, protocol, service version, and the associated vulnerability information, and the time and object of an infringement can be selected by applying the Euclidean distance to the changeable data.

Daeil Jang, Taeeun Kim, Hwankuk Kim

Privacy and Cloud Computing

Frontmatter
A Noise Generation Scheme Based on Huffman Coding for Preserving Privacy

Cloud computing has risen in recent years because it offers features such as low cost, robustness, flexibility, and ubiquity, and the volume of data held by organizations is increasing rapidly. Large amounts of data can be used in many data analysis applications involving business, medicine, and government. But this raises privacy issues: if a dealer wants to understand customer behavior for marketing purposes, it may release data to a third-party data analysis company. To preserve privacy in a database, this paper proposes an efficient noise generation scheme based on the Huffman coding algorithm. In Huffman coding, a character with a lower occurrence frequency receives a longer code, and vice versa. This property is suitable for protecting privacy in a database, in that a tuple with a lower occurrence frequency receives more noise. The paper presents a noise matrix, a set of noise values, based on this concept. Although the scheme may distort data by replacing original values, it does not affect data analysis. In the experiments, we consider the running time of noise generation for integer and real numbers. Overall, this paper offers a different way to perturb original values and proposes an efficient data perturbation scheme.

Iuon-Chang Lin, Li-Cheng Yang
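As a rough sketch of the coding step the scheme relies on, the code below derives Huffman code lengths from tuple frequencies and maps longer codes (rarer tuples) to larger noise magnitudes. The proportionality rule at the end is an illustrative assumption, not the paper's actual noise matrix construction.

```python
import heapq
from collections import Counter

def huffman_code_lengths(frequencies):
    """Code length per symbol from its frequency (rarer symbol -> longer code)."""
    heap = [[freq, i, {sym: 0}] for i, (sym, freq) in enumerate(frequencies.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        # Merging two subtrees pushes every contained symbol one level deeper.
        merged = {s: d + 1 for s, d in {**lo[2], **hi[2]}.items()}
        heapq.heappush(heap, [lo[0] + hi[0], counter, merged])
        counter += 1
    return heap[0][2]

values = ["A", "A", "A", "A", "B", "B", "C"]
lengths = huffman_code_lengths(Counter(values))
# Illustrative rule: noise magnitude proportional to code length,
# so infrequent tuples receive more noise.
noise = {sym: 0.1 * length for sym, length in lengths.items()}
print(lengths, noise)   # e.g. {'A': 1, 'B': 2, 'C': 2} ...
```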
Privacy-Preserving Outsource Computing for Binary Vector Similarity

The preservation of privacy has become a widely discussed topic on the Internet. Encryption is one approach to privacy; however, outsourcing computation to a cloud service without revealing private information over encrypted data is difficult. Homomorphic encryption can address this but is based on complicated mathematical structures from abstract algebra. We propose a new scheme for securely computing the similarity between binary vectors through a cloud server. The scheme is constructed from ciphertext-policy attribute-based encryption and garbled circuits rather than homomorphic encryption. Attribute-based encryption provides access control, which is a necessary primitive in our scheme. Moreover, for computing over encrypted data, we rely on garbled circuits to handle secure outsourcing and to avoid the use of homomorphic encryption.

Dan Yang, Yu-Chi Chen, Shaozhen Ye
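The scheme's point is to compute such a similarity over encrypted data with CP-ABE and garbled circuits; purely as a plaintext reference for what is being protected, two common binary-vector similarity measures are sketched below. The choice of Hamming and Jaccard similarity is an assumption, since the abstract does not fix the measure.

```python
def hamming_similarity(a, b):
    """Fraction of positions at which two equal-length binary vectors agree."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def jaccard_similarity(a, b):
    """|intersection| / |union| of the positions set to 1."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return inter / union if union else 1.0

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 0, 1, 0, 1, 1, 0]
print(hamming_similarity(a, b), jaccard_similarity(a, b))   # 0.75 0.6
```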
Strategies to Improve Auditing Performance and Soundness for Cloud Computation

Since cloud computation auditing has recently become important, Wei et al. proposed a cloud computation auditing scheme. However, they assume that a cheating adversary always gives random responses to the auditing challenges. This assumption is impractical: when only a small part of the adversary's response is random, the number of challenges increases dramatically. The auditing load then becomes so heavy that the auditor cannot give the auditing results in reasonable time. Moreover, the probability of finding incorrectly computed results cannot reach the level that users want. To improve the online audit performance or probability, an off-line easy-auditor improving strategy, a function-based improving strategy, and a mixed strategy are proposed. Utilizing the off-line computation concept and the help of the cloud computation server, the online audit performance and the audit probability are improved.

Shin-Jia Hwang, Tsung-Lin Li

Information Hiding and Secret Sharing

Frontmatter
Reversible Image Steganography for Color Image Quantization Based on Lossless Index Coding

In this paper, we propose a joint lossless index coding and data hiding technique for palette images. A palette image is the compressed image produced by the color image quantization technique, and its compressed codes consist of the index table and the color palette. In the proposed technique, a three-category lossless index coding method is employed. The secret data is embedded into the encoded index table while the index coding process is executed. The results show that the proposed technique obtains good hiding capacity while keeping a good bit rate.

Yu-Chen Hu, Chin-Feng Lee, Yi-Hung Liu
Capacity on Demand Steganography Through Adaptive Threshold Strategy

In secret communication, the length of the ciphertext transmitted each time is rarely the same. In order to accommodate ciphertext of dynamic length and provide good image quality, a novel adaptive threshold strategy is presented. Firstly, a threshold pixel value and a value k are decided dynamically based on the length of the ciphertext and the cumulative statistics of the cover image's pixel values. Then, if a pixel value is greater than the threshold value, more than k bits of secret data can be embedded using a modified Least Significant Bit (LSB) substitution method; otherwise, at most k bits are embedded. This dynamic strategy adjusts the hiding capacity to the length of the secret message, and the secret data can be embedded in the stego-image as evenly as possible, improving the quality of the stego-image. The experimental results show that the proposed method not only achieves a larger embedding capacity but also yields higher visual quality of the stego-image than most existing LSB-based methods.

Sheng-Chih Ho, Chung-Yi Lin, Chao-Lung Chou
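A hedged sketch of the k-bit LSB substitution primitive the strategy above adapts; the dynamic threshold selection and the per-pixel choice of k described in the abstract are not reproduced here, only the basic embed/extract step on a single pixel.

```python
def embed_lsb(pixel, bits, k):
    """Replace the k least-significant bits of one 8-bit pixel with k secret bits."""
    value = int("".join(str(b) for b in bits[:k]), 2)
    return (pixel & ~((1 << k) - 1)) | value

def extract_lsb(pixel, k):
    """Recover the k embedded bits from one stego pixel."""
    return [(pixel >> i) & 1 for i in range(k - 1, -1, -1)]

pixel = 0b10110110                      # 182
stego = embed_lsb(pixel, [1, 0, 1], 3)  # write 3 secret bits
print(stego, extract_lsb(stego, 3))     # 181 [1, 0, 1]
```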
Shamir’s Secret Sharing Scheme in Parallel

A (k, n) threshold secret sharing scheme encrypts a secret s into n parts (called shares), which are distributed to n participants, such that any k participants can recover s using their shares, while any group of fewer than k cannot. When the size of s grows large (e.g. multimedia data), the efficiency of sharing/decoding s becomes a major problem. We designed efficient sequential and parallel implementations of Shamir's threshold secret sharing scheme on CPU and GPU platforms, respectively, in a personal computer. Experimental results show that the GPU achieves an appealing speedup over the CPU when sharing multimedia data.

Shyong Jian Shyu, Ying Zhen Tsai
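For reference, a compact sequential (CPU) version of the (k, n) scheme itself: share generation from a random degree-(k-1) polynomial and Lagrange reconstruction at x = 0 over a prime field. The GPU parallelization studied in the paper is not shown, and the field size is an arbitrary choice for the toy secret.

```python
import random

PRIME = 2**127 - 1   # a Mersenne prime large enough for the toy secret below

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 over GF(prime)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]))   # 123456789 from any three shares
```

In the parallel setting, evaluating f(x) at the n share points for many independent secrets (e.g. the pixels of an image) is the part that maps naturally onto GPU threads.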

Network Security and Applications

Frontmatter
A Cognitive Global Clock Synchronization Protocol in WSNs

Clock synchronization is a crucial issue for data fusion, localization, duty cycle scheduling, and topology management in wireless sensor networks (WSNs). In this paper we propose a cognitive global clock synchronization protocol (CGCSP), an accurate, energy-efficient, and reliable clock synchronization protocol for WSNs based on a single reference node. CGCSP tackles the disadvantages of relying on a single reference node through a cognitive switchover mechanism. This structure has been validated through the development of the basic synchronization schemes, i.e. sender-receiver (S-R) synchronization and receiver-receiver (R-R) synchronization. Evaluated against state-of-the-art protocols such as reference broadcast synchronization (RBS) and the timing-sync protocol for sensor networks (TPSN), the proposed CGCSP shows a reasonable lead in both single-hop and multi-hop networks in terms of average synchronization accuracy and energy efficiency.

Bilal Ahmad, Ma Shiwei, Fu Qi
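The sender-receiver (S-R) scheme mentioned above is usually realised as a two-way message exchange in the style of TPSN. As a hedged illustration, the offset and delay estimates from the four timestamps are computed below; the cognitive switchover mechanism of CGCSP itself is not sketched.

```python
def two_way_sync(t1, t2, t3, t4):
    """Sender-receiver synchronization from one two-way message exchange.

    t1: sender's clock when the request leaves
    t2: receiver's clock when the request arrives
    t3: receiver's clock when the reply leaves
    t4: sender's clock when the reply arrives
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # receiver clock minus sender clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # estimated one-way propagation delay
    return offset, delay

# Example: receiver's clock runs 5 ms ahead, true one-way delay 2 ms.
print(two_way_sync(t1=100.0, t2=107.0, t3=108.0, t4=105.0))   # (5.0, 2.0)
```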
A Generic Web Application Testing and Attack Data Generation Method

With the advance of diversified online services, there is an increasing demand for web applications. However, most web applications contain critical bugs affecting their security, allowing unauthorized access and remote code execution. It is challenging for programmers to identify potential vulnerabilities in their applications before releasing the service, due to the lack of resources and security knowledge, and such hidden defects may thus remain unnoticed for a long time until being reported by users or exposed by third parties. In this paper, we develop an automated detection method to support timely and flexible discovery of a wide variety of vulnerability types in web applications. The key insight of our work is adding a lightweight detecting sensor that differentiates attack types before performing symbolic execution. Based on the technique of symbolic execution, our work generates testing and attack data by tracking the addresses of program instructions and checking the arguments of dangerous functions. Compared to prior analysis tools that also use symbolic execution, our work flexibly supports the detection of more types of web attacks and improves system flexibility for users thanks to the detecting sensor. We have evaluated our solution by applying this detecting process to several known vulnerabilities in open-source web applications and CTF (Capture The Flag) problems, and successfully detected various types of web attacks.

Hsiao-Yu Shih, Han-Lin Lu, Chao-Chun Yeh, Hsu-Chun Hsiao, Shih-Kun Huang
The Study of Improvement and Risk Evaluation for Mobile Application Security Testing

The popularity of mobile devices has made them indispensable to daily life, and because of the increasing dependency on mobile devices following the sharp growth in mobile applications, effective security testing specifications have become essential. However, developers do not prioritize security during mobile application development, allowing unscrupulous individuals to exploit loopholes or vulnerabilities in the applications or to develop malicious applications that steal sensitive user data, resulting in user information leakage and financial losses. The security specifications for mobile device applications in Taiwan regarding data authorization, data storage, data protection, transmission protocol, transmission protection, application execution, application security, system execution, and system security remain inadequate. Mobile device testing specifications were analyzed in this study, and the specification priorities of documents across countries were categorized. The Open Web Application Security Project and the National Institute of Standards and Technology were used as the specification standard, together with the Cloud Security Alliance's white paper on mobile device specifications, to provide more complete security testing specifications for mobile applications. Recommendations were provided based on the testing procedures, improvement methods, and risk assessment of the test items to reduce personal information leakage and financial losses.

Huey-Yeh Lin, Hung-Chang Chang, Yung-Chuan Su
Application of Pattern for New CAPTCHA Generation Idea

This work presents a new CAPTCHA generation idea based on the application of patterns from mathematical theory. Patterns are used to generate the CAPTCHAs, and 400 participants took part in the study over the Internet. Three types of pattern CAPTCHAs, each with two samples, were studied: shape pattern, color pattern, and shape-color pattern. The number of first correct answers, the total number of answers, the percentage of success, the time spent, and a five-point usability score were collected. The results show that the shape-color pattern has the highest number of first correct answers, at 363, and also the highest percentage of success, at 97.06%. In terms of time spent, shape-color pattern CAPTCHAs take the least time to solve, at 5.25 s, and the total time to find the correct answer across all CAPTCHA types ranges from 5.25 to 8.87 s. The usability results show that shape pattern CAPTCHAs receive the highest score across all aspects, at 4.34, with no significant difference at p-value < 0.01. All types rate above 4.00, which means participants consider all types of pattern CAPTCHAs practical and useful.

Thawatwong Lawan

Digital Forensics and Mobile Systems

Frontmatter
An Automatic Approach of Building Threat Patterns in Android

Nowadays, handheld devices have become popular, but the volume of malware on mobile platforms has also grown rapidly. To detect mobile malware, static approaches and dynamic approaches are the two common ways to analyze suspicious applications. Dynamic approaches detect malware based on the actual behaviors of applications, but triggering the malicious behavior and the efficiency of dynamic analysis are the difficulties of this kind of approach. Due to the limited resources of mobile devices, static analysis is the practicable way to detect malware on a mobile device. Anti-virus software is the typical paradigm of the static analysis approach; however, its effectiveness relies on its signatures. Finding an efficient and automatic way to build threat patterns of mobile malware is therefore a critical issue for detecting new or zero-day malware. In this paper, a detection mechanism based on data flow is proposed. The proposed system analyzes the function calls and the data flow to identify malicious behaviors on Android mobile devices. A machine learning approach is used to build threat patterns automatically from a great volume of applications. The experimental results show that the proposed system can detect malware with high accuracy and a low false positive rate.

Chia-Mei Chen, Yu-Hsuan Tsai, Gu-Hsin Lai
Identifying Temporal Patterns Using ADS in NTFS for Digital Forensics

The storage and handling of alternate data streams (ADS) in NTFS have posed significant challenges for law enforcement agencies (LEAs). An ADS can hide data of any format in additional $DATA attributes of a file, and processing the data content updates some of the file's date-time stamp metadata attributes. This paper introduces ADS and reviews the literature pertaining to the forensic analysis of this form of data hiding. It describes several temporal patterns for evaluating whether ADS are hidden in digital files. The analysis of file metadata assists in accurately correlating activities from date-time stamp evidence. The results demonstrate the effectiveness of temporal patterns for digital forensics across various types of file operations.

Da-Yu Kao, Yuan-Pei Chan
Ant-Based Botnet C&C Server Traceback

Botnets pose a significant security threat and can cause huge losses to organizations, and their existence is difficult to discover; therefore they have become one of the most severe threats on the Internet. The core component of a botnet is its command and control server (C2 server or C&C server), through which the bot herder instructs zombie machines to launch attacks. A commonly used protocol, such as IRC (Internet Relay Chat) or HTTP, is adopted for communication between bot machines and the server. In addition, some advanced botnets might have multiple C2 servers to evade detection and extend their lifetime. Therefore, identifying the C2 server is important to prevent botnet attacks or further damage. In this paper, a detection scheme based on the ant colony optimization algorithm is proposed to identify the paths from bot machines to the C2 server. The results show that the proposed detection can identify botnet servers efficiently.

Chia-Mei Chen, Gu-Hsin Lai

Public Key Systems and Data Processing

Frontmatter
T-Brain: A Collaboration Platform for Data Scientists

As data are generated easily and rapidly through mobile services and computing power can be increased on demand through cloud computation services, data scientists who work with huge data can solve challenging problems. Smart intelligent applications such as Go, healthcare, and self-driving vehicles have shown great improvement recently. Beyond those problems, there are still more complex ones such as weather impact analysis, financial crisis prediction, crime prevention, and so on. To overcome these challenging problems, many cross-disciplinary or interdisciplinary experts have to collaborate on solutions. In this paper, we propose a collaboration platform and a system design for data scientists to share data, write analytic scripts, and discuss topics related to those problems. Currently, eleven datasets have been collected, such as spam mail, malware data, honeynet logs, Hadoop workload logs, and some other open data, and an improved local cache design has been built on top of them (average response time improved by 92.36% and request availability improved by 70%). With the platform, many education and competition activities can be held successfully.

Chao-Chun Yeh, Sheng-An Chang, Yi-Chin Chu, Xuan-Yi Lin, Yichiao Sun, Jiazheng Zhou, Shih-Kun Huang
Feature Extraction in Security Analytics: Reducing Data Complexity with Apache Spark

Feature extraction is the first task in pre-processing input logs in order to detect cybersecurity threats and attacks using machine learning. When it comes to the analysis of heterogeneous data derived from different sources, this task is time-consuming and difficult to manage efficiently. In this paper we present an approach for handling feature extraction for security analytics of heterogeneous data derived from different network sensors. The approach is implemented in Apache Spark, using its Python API, pyspark.

Dimitrios Sisiaridis, Olivier Markowitch
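A minimal PySpark sketch in the spirit of the approach: loading heterogeneous log records and aggregating a few per-source numeric features. The file path, column names, and chosen features are hypothetical placeholders, not the authors' actual pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-extraction").getOrCreate()

# Hypothetical CSV of network events with columns: src_ip, dst_port, bytes, timestamp.
logs = spark.read.csv("hdfs:///logs/netflow.csv", header=True, inferSchema=True)

features = (
    logs.groupBy("src_ip")
        .agg(F.count("*").alias("n_events"),                 # activity volume
             F.countDistinct("dst_port").alias("n_ports"),   # port-scan indicator
             F.avg("bytes").alias("avg_bytes"),              # mean flow size
             F.max("bytes").alias("max_bytes"))
)
features.show(5)
```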
Storage-Saving Bi-dimensional Privacy-Preserving Data Aggregation in Smart Grids

Recently, many works on power consumption data aggregation have been proposed to preserve the privacy of users against the operation center in smart grids. This is user-based data aggregation, which accumulates the power consumption data of a group of users for every time unit. On the other hand, accumulating a user's data over a group of time units facilitates queries on the user's accumulated power usage in those specified time units; this is time-based data aggregation. It enables the operation center to perform individual energy consumption statistics and management and to offer customized services. If a data aggregation scheme provides both user-based and time-based data aggregation, it is said to be bi-dimensional. This manuscript presents the first privacy-preserving bi-dimensional data aggregation scheme, in which the storage cost increases only linearly with the number of time units and is independent of the number of users.

Chun-I Fan, Yi-Fan Tseng, Yi-Hui Lin, Fangguo Zhang
Verifying the Validity of Public Key Certificates Using Edge Computing

Edge computing, which is performed near the client, alone or together with cloud computing, is expected to provide services more efficiently than cloud computing alone. Meanwhile, most existing public-key certificate verification methods, such as OCSP, do not simultaneously achieve efficiency and security at a sufficiently high level. In this paper, we propose a certificate verification method using edge computing and show, through an evaluation of our implementation, that the proposed method achieves efficiency with a sufficient security level.

Shogo Kitajima, Masahiro Mambo

Blockchain Applications in Technology

Frontmatter
A Study on Blockchain-Based Circular Economy Credit Rating System

The circular economy is distinct from the linear economy model of the past: it emphasizes regeneration instead of possession of resources, and proposes using shared resources to create new supply chains and new economies. When practicing the circular economy, each economic entity must learn the others' credit ratings prior to collaboration. This study applies blockchain technology to record each economic entity's transaction details, and then employs confidence-level algorithms to calculate each entity's credit rating. The method utilizes the concept of decentralization to reduce third-party broker fees, which, aside from decreasing transaction costs, provides an effective public credit rating of economic entities.

Hsin-Te Wu, Yi-Jen Su, Wu-Chih Hu
Using Blockchain to Support Data and Service Management in IoV/IoT

Two required features of a data monetization platform are query and retrieval of the metadata of the resources to be monetized. Centralized platforms rely on the maturity of traditional NoSQL database systems to support these features; such databases, for example MongoDB, allow very efficient query and retrieval of the data they store. However, centralized platforms come with a host of security and privacy concerns, making them less than ideal for a data monetization platform. On the other hand, most existing decentralized platforms are only partially decentralized. In this research, we developed Cowry, a platform for publishing metadata describing available resources (data or services) and for discovering published resources, including fast search and filtering. Our main contribution is a fully decentralized architecture that combines blockchain and a traditional distributed database to gain additional features such as efficient query and retrieval of metadata stored on the blockchain.

Obaro Odiete, Richard K. Lomotey, Ralph Deters
A Blockchain-Based Traceable Certification System

In recent years, product records have become more common for merchandise sold in retail stores, but the product record systems used today cannot assure a product's quality after it has been transported through the whole supply chain. During transportation, merchandise may be accidentally damaged or its condition changed, and those events do not get recorded because record-keeping is predominantly handled by manufacturers. In the second-hand market, product records may also be tampered with, or their verification is weak, so inexperienced buyers cannot distinguish counterfeits because records are untrustworthy and outdated. Using concepts borrowed from Bitcoin, the advantages of the blockchain can be applied to the product record system: its characteristics of decentralization, openness, and immutability can improve the system. To achieve this goal, ownership of products is introduced and smart contracts are embedded to further enhance the product record system.

Po-Yeuan Chang, Min-Shiang Hwang, Chao-Chen Yang
Backmatter
Metadata
Title
Security with Intelligent Computing and Big-data Services
Editors
Sheng-Lung Peng
Shiuh-Jeng Wang
Prof. Valentina Emilia Balas
Ming Zhao
Copyright Year
2018
Electronic ISBN
978-3-319-76451-1
Print ISBN
978-3-319-76450-4
DOI
https://doi.org/10.1007/978-3-319-76451-1
