About this Book

This book constitutes the proceedings of the 16th International Conference on Distributed Computing and Internet Technology, ICDCIT 2020, held in Bhubaneswar, India, in January 2020. The 20 full and 3 short papers presented in this volume were carefully reviewed and selected from 110 submissions. In addition, the book includes 6 invited papers. The contributions are organized in topical sections named: invited talks; concurrent and distributed systems modelling and verification; cloud and grid computing; social networks, machine learning and mobile networks; data processing and blockchain technology; and short papers.



Invited Talks


Distributed Graph Analytics

Graph Analytics is important in different domains: social networks, computer networks, and computational biology to name a few. This paper describes the challenges involved in programming the underlying graph algorithms for graph analytics for distributed systems with CPU, GPU, and multi-GPU machines and how to deal with them. It emphasizes how language abstractions and good compilation can ease programming graph analytics on such platforms without sacrificing implementation efficiency.
Y. N. Srikant

A Distributed and Trusted Web of Formal Proofs

Most computer-checked proofs are tied to the particular technology of a prover’s software. While sharing results between proof assistants is a recognized and desirable goal, the current organization of theorem proving tools makes such sharing an exception instead of the rule. In this talk, I argue that we need to turn the current architecture of proof assistants and formal proofs inside-out. That is, instead of having a few mature theorem provers include within them their formally checked theorems and proofs, I propose that proof assistants should sit on the edge of a web of formal proofs and should export their proofs so that they can exist independently of any theorem prover. While it is necessary to maintain the dependencies between definitions, theories, and theorems, no explicit library structure should be imposed on this web of formal proofs. Thus a theorem and its proofs should not necessarily be located at a particular URL or within a particular prover’s library. While the world of symbolic logic and proof theory certainly allows for proofs to be seen as global and permanent objects, a lot of research and engineering work is needed to make this possible. I describe some of the research and development required to achieve this goal.
Dale Miller

Prospero’s Books: A Distributed Architecture for AI

Invited Extended Abstract
This preliminary note and its sequels present a distributed architecture for AI (Artificial Intelligence) based on a novel market microstructure. The underlying game theory is based on Information-Asymmetric (Signaling) games, where deception is tamed by costly signaling. The signaling, in order to remain honest (e.g., separating), may involve crypto-tokens and distributed ledgers. Here, we will present a rough sketch of the architecture and the protocols it involves. Mathematical and computational analyses will appear in the subsequent sequels.
Bhubaneswar (Bud) Mishra

Trust: Anthropomorphic Algorithmic

Computer Science often emulates human-like behaviours, including intelligence, and this has taken by storm every domain in which humans operate. A computing system with a defined role and goal is called an agent, with human-like capability for decision making in the dynamic and complex real world in which it is situated. Largely, this aspect of a computing unit requires the ability to learn, act, and forecast. Broadly, the study of Artificial Intelligence (AI) also deals with such aspects. Researchers of both schools of computing, viz. intelligent agents and AI systems, have proposed several algorithms to emulate human-like behaviours; here, these algorithms are labelled anthropomorphic algorithms. In particular, our discussion focusses on trust. The idea of trust as conceptualised, computed, and applied in different domains is discussed. Further, we point out the dimensions that need to be examined in order to endow computing systems with trust as a human-like quality.
Hrushikesha Mohanty

A Very Gentle Introduction to Multiparty Session Types

Multiparty session types (MPST) are a formal specification and verification framework for message-passing protocols without central control: the desired interactions at the scale of the network itself are specified as a session, called a global type. Global types are then projected onto local types, one for each participant, which describe the protocol from a local point of view. These local types are used to validate an application through type-checking, monitoring, and code generation. The theory of session types guarantees that local conformance of all participants induces global conformance of the network to the initial global type. This paper provides a very gentle introduction to the simplest version of multiparty session types for readers who are familiar with neither session types nor process calculi.
Nobuko Yoshida, Lorenzo Gheri
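The projection from a global type to local types can be illustrated with a small sketch. This is not the paper's formal calculus; the participants, message labels, and tuple encoding below are invented for the example.

```python
# Illustrative sketch: a global type as a sequence of message exchanges,
# projected onto per-participant local types.

def project(global_type, participant):
    """Project a global type onto one participant's local type."""
    local = []
    for sender, receiver, label in global_type:
        if participant == sender:
            local.append(("send", receiver, label))
        elif participant == receiver:
            local.append(("recv", sender, label))
        # exchanges not involving this participant are skipped
    return local

# Global type: Buyer asks Seller for a quote, Seller replies, Buyer accepts.
g = [("Buyer", "Seller", "quote_request"),
     ("Seller", "Buyer", "quote"),
     ("Buyer", "Seller", "accept")]

buyer_local = project(g, "Buyer")
seller_local = project(g, "Seller")
```

Type-checking each participant against its own local type is what yields the global conformance guarantee described in the abstract.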

Constructing Knowledge Graphs from Data Catalogues

We have witnessed about a decade’s effort in opening up government institutions around the world by making data about their services, performance, and programmes publicly available on open data portals. While these efforts have yielded some economic and social value, particularly in the context of city data ecosystems, there is a general acknowledgment that the promises of open data are far from being realised. A major barrier to better exploitation of open data is the difficulty of finding datasets of interest, and those of high value, on data portals. This article describes how the implicit relatedness and value of datasets can be revealed by generating a knowledge graph over data catalogues. Specifically, we generate a knowledge graph based on a self-organizing map (SOM) constructed from an open data catalogue. Following this, we show how the generated knowledge graph enables value characterisation based on sociometric profiles of the datasets, as well as dataset recommendation.
Adegboyega Ojo, Oladipupo Sennaike

Concurrent and Distributed Systems Modelling and Verification


Round-Message Trade-Off in Distributed Steiner Tree Construction in the CONGEST Model

The Steiner tree problem is one of the fundamental optimization problems in distributed graph algorithms. Recently Saikia and Karmakar [27] proposed a deterministic distributed algorithm for the Steiner tree problem that constructs a Steiner tree in \(O(S + \sqrt{n} \log ^* n)\) rounds whose cost is optimal up to a factor of \(2(1 - 1/\ell )\), where n and S are the number of nodes and the shortest path diameter [17], respectively, of the given input graph and \(\ell \) is the number of terminal leaf nodes in the optimal Steiner tree. The message complexity of the algorithm is \(O(\varDelta (n - t) S + n^{3/2})\), which is equivalent to \(O(mS + n^{3/2})\), where \(\varDelta \) is the maximum degree of a vertex in the graph, t is the number of terminal nodes (we assume that \(t < n\)), and m is the number of edges in the given input graph. This algorithm has a better round complexity than the previous best algorithm for Steiner tree construction due to Lenzen and Patt-Shamir [21]. In this paper we present a deterministic distributed algorithm which constructs a Steiner tree in \(\tilde{O}(S + \sqrt{n})\) rounds and \(\tilde{O}(mS)\) messages and still achieves an approximation factor of \(2(1 - 1/\ell )\). Note here that the \(\tilde{O}(\cdot )\) notation hides polylogarithmic factors in n. This algorithm improves the message complexity of Saikia and Karmakar’s algorithm by dropping the additive term of \(O(n^{3/2})\) at the expense of a logarithmic multiplicative factor in the round complexity. Furthermore, we show that for sufficiently small values of the shortest path diameter \((S=O(\log n))\), a \(2(1 - 1/\ell )\)-approximate Steiner tree can be computed in \(\tilde{O}(\sqrt{n})\) rounds and \(\tilde{O}(m)\) messages, and these complexities almost coincide with the results of some of the singularly-optimal minimum spanning tree (MST) algorithms proposed in [9, 12, 23].
Parikshit Saikia, Sushanta Karmakar
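For intuition, the \(2(1 - 1/\ell )\) approximation factor comes from the classic construction: build an MST over the metric closure of the terminals, then unfold its edges back into shortest paths. The following is a sequential, centralized sketch of that construction, not the distributed CONGEST algorithm of the paper; the adjacency-list encoding is invented for illustration.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances and predecessors from src."""
    dist, prev, pq = {src: 0}, {}, [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, prev

def steiner_2approx(adj, terminals):
    """2(1 - 1/l)-approximate Steiner tree: Prim's MST over the
    terminals' metric closure, expanded into original graph edges."""
    sp = {t: dijkstra(adj, t) for t in terminals}
    in_tree, edges = {terminals[0]}, set()
    while len(in_tree) < len(terminals):
        # Cheapest metric-closure edge leaving the partial tree.
        u, t = min(((a, b) for a in in_tree for b in terminals
                    if b not in in_tree), key=lambda e: sp[e[0]][0][e[1]])
        node = t  # unfold the shortest u-t path into graph edges
        while node != u:
            p = sp[u][1][node]
            edges.add(frozenset((p, node)))
            node = p
        in_tree.add(t)
    return edges
```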

An Efficient Message Transmission and Verification Scheme for VANETs

The vehicular ad-hoc network (VANET) is used for communication between vehicles in the same vicinity and road-side units (RSUs). Since they communicate over a public channel, a secure data transmission protocol is necessary to exchange confidential information. Moreover, the number of vehicles is increasing day by day, and thus the receiver gets myriad messages from nearby RSUs and vehicles. As a result, verification time plays a major role in vehicular communication schemes. Further, providing anonymous communication between vehicles is also a challenging task. To overcome these concerns, in this paper we propose an identity-based signature protocol using the batch verification concept for VANETs. This paper aims to decrease the computational time during communication, and the proposed scheme resists various security attacks, namely man-in-the-middle, impersonation, replay, Sybil, and password guessing. Moreover, the proposed protocol achieves better performance results (for computational time, communication overhead, and storage cost) on a Raspberry Pi 3B+ compared to other existing data transmission schemes.
Kunal Bajaj, Trupil Limbasiya, Debasis Das
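The efficiency gain of batch verification can be illustrated with a deliberately simplified toy: a linear "signature" whose homomorphic structure lets n individual checks collapse into a single combined check. This is purely didactic and is not the identity-based scheme proposed in the paper; the modulus and key are invented.

```python
import hashlib

P = (1 << 127) - 1  # a Mersenne prime modulus (toy parameter)

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % P

def sign(secret: int, msg: bytes) -> int:
    # Toy linear "signature"; real schemes use identity-based crypto.
    return (secret * h(msg)) % P

def verify_one(secret: int, msg: bytes, sig: int) -> bool:
    return sig == (secret * h(msg)) % P

def batch_verify(secret: int, msgs, sigs) -> bool:
    """One combined modular check instead of n separate ones:
    the linearity that batch verification exploits."""
    return sum(sigs) % P == (secret * sum(h(m) for m in msgs)) % P
```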

Generalised Dining Philosophers as Feedback Control

We examine the mutual exclusion problem of concurrency through the systematic application of modern feedback control theory, revisiting a classical problem involving mutual exclusion: the Generalised Dining Philosophers problem. The result is a modular development of the solution using the notions of system and system composition in a formal setting that employs simple equational reasoning. The modular approach separates the solution architecture from the algorithmic minutiae and has the benefit of simplifying the design and correctness proofs.
Two variants of the problem are considered: centralised and distributed topology with N philosophers. In each case, solving the Generalised Dining Philosophers reduces to designing an appropriate feedback controller.
Venkatesh Choppella, Arjun Sanjeev, Kasturi Viswanath, Bharat Jayaraman

Verifying Implicitly Quantified Modal Logic over Dynamic Networks of Processes

When we consider systems with process creation and exit, we have potentially infinite state systems where the number of processes alive at any state is unbounded. Properties of such systems are naturally specified using modal logics with quantification, but they are hard to verify even over finite state systems. In [11] we proposed \(\textsf {IQML}\), an implicitly quantified modal logic in which we can have assertions of the form “every live agent has an \(\alpha \)-successor”, and presented a complete axiomatization of valid formulas. Here we show that model checking for \(\textsf {IQML}\) is efficient even when we consider systems with infinitely many processes, provided we can present such collections of processes efficiently and check non-emptiness of intersections efficiently. As a case study, we present a model checking algorithm over systems in which, at any state, the collection of live processes is regular.
Anantha Padmanabha, R. Ramanujam

Cloud and Grid Computing


Secure Content-Based Image Retrieval Using Combined Features in Cloud

Secure Content-Based Image Retrieval (SCBIR) is gaining enormous importance due to its applications involving highly sensitive images, comprising medical and personally identifiable data, in areas such as clinical decision-making, biometric matching, and multimedia search. SCBIR on outsourced images is realized by generating a secure searchable index from features such as color, shape, and texture in unencrypted images. We focus on enhancing the efficiency of SCBIR by combining two visual descriptors into a modified feature descriptor. To improve search efficacy, pre-filter tables are generated using Locality Sensitive Hashing (LSH), and the resulting adjacent hash buckets are joined to enhance retrieval precision. The top-k relevant images are securely retrieved using the Secure k-Nearest Neighbors (kNN) algorithm. Performance of the scheme is evaluated in terms of retrieval precision and search efficiency on distinct and similar image categories. Experimental results show that the proposed scheme outperforms existing state-of-the-art SCBIR schemes.
J. Anju, R. Shreelekshmi
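The pre-filter idea, hashing similar feature vectors into the same bucket and probing adjacent buckets at query time, can be sketched as follows. This uses random-hyperplane LSH over plaintext vectors; the encryption and secure kNN layers of the actual scheme are omitted, and all dimensions and parameters are invented.

```python
import random

random.seed(7)
DIM, BITS = 8, 4
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

def lsh_key(vec):
    """Sign of the dot product with each random hyperplane -> bit tuple."""
    return tuple(int(sum(p * x for p, x in zip(plane, vec)) >= 0)
                 for plane in planes)

def build_table(vectors):
    """Pre-filter table: bucket id -> set of image ids."""
    table = {}
    for img_id, vec in vectors.items():
        table.setdefault(lsh_key(vec), set()).add(img_id)
    return table

def candidates(table, query):
    """Probe the query's bucket and all buckets one bit-flip away
    (the 'adjacent hash buckets' joined to improve precision)."""
    key = lsh_key(query)
    keys = [key] + [key[:i] + (1 - key[i],) + key[i + 1:]
                    for i in range(BITS)]
    found = set()
    for k in keys:
        found |= table.get(k, set())
    return found
```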

Design of a Scheduling Approach for Budget-Deadline Constrained Applications in Heterogeneous Clouds

The notion of seemingly infinite resources, and the dynamic provisioning of those resources on a rental basis, has encouraged the execution of scientific applications in the cloud. The scheduling of workflows under the utility model is always constrained by some QoS (Quality of Service) requirement; generally, time and cost are considered the most important parameters. Scheduling becomes more challenging when both factors are considered simultaneously, which is why most algorithms have been designed considering either time or cost alone. To handle this scheduling problem, this paper proposes a novel heuristic named SDBL (Sub-Deadline and Budget Level), a workflow scheduling algorithm for heterogeneous clouds. The proposed methodology effectively handles deadline- and budget-constrained workflows. The strategy of distributing the deadline as level deadlines (sub-deadlines) across the levels of the workflow, together with the mechanism of distributing the budget to individual tasks, satisfies the given constraints and accounts for the strong performance of SDBL, which strives to produce a feasible schedule meeting both the deadline and the budget constraints. The PSR (Planning Success Rate) is used to show the efficiency of the proposed algorithm. For simulation, real workflows were run with SDBL, BDSD, BHEFT (Budget-constrained Heterogeneous Earliest Finish Time), and HBCS (Heterogeneous Budget Constrained Scheduling). The comprehensive experimental evaluation demonstrates the effectiveness of the proposed methodology in terms of higher PSR in most cases.
Naela Rizvi, Dharavath Ramesh
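The distribution strategy, splitting the overall deadline into level deadlines and the overall budget into per-task shares, can be sketched with one proportional-split helper. This is a plausible reading of the abstract rather than SDBL's actual formulas; all numbers are invented.

```python
def distribute(total, weights):
    """Split a total (deadline or budget) across parts in proportion
    to their estimated weights (e.g., per-level work or per-task cost)."""
    s = sum(weights)
    return [total * w / s for w in weights]

# Level sub-deadlines from estimated per-level execution times.
sub_deadlines = distribute(100.0, [2.0, 3.0, 5.0])
# Per-task budgets within a level from estimated task costs.
task_budgets = distribute(60.0, [1.0, 2.0])
```

A feasible schedule then only has to meet each level's sub-deadline and each task's budget share, rather than reasoning about the global constraints directly.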

Resource Scheduling for Tasks of a Workflow in Cloud Environment

In recent years, most enterprises and communities have adopted cloud services to deploy their workflow-based applications, due to the inherent benefits of cloud-based services. These workflow-based applications are mainly compute-intensive. The major issues of workflow deployment in a cloud environment are minimizing execution time (makespan) and monetary cost. As cloud service providers maintain vast infrastructural resources, workflow scheduling in the cloud environment is a non-trivial task. Hence, in this paper we propose a scheduling technique that reduces monetary cost while completing the workflow within its minimum makespan. To analyze the performance of the proposed algorithm, experiments are carried out in WorkflowSim and the results are compared with the well-known Heterogeneous Earliest Finish Time (HEFT) and Dynamic Heterogeneous Earliest Finish Time (DHEFT) algorithms. In all the experiments, the proposed algorithm outperforms the existing ones.
Kamalesh Karmakar, Rajib K Das, Sunirmal Khatua

Bearing Fault Classification Using Wavelet Energy and Autoencoder

Today’s industry has widely accepted intelligent condition monitoring systems as a way to improve industrial operations. As a result, data-driven fault diagnosis methods are designed by integrating signal processing techniques with artificial intelligence methods. Various signal processing approaches have been proposed for extracting features from vibration signals to construct the fault feature space, and thus, over the years, this feature space has grown rapidly; the challenge is to identify the promising features in it so as to improve diagnosis performance. Therefore, in this paper, wavelet energy is presented as the input feature set of the fault diagnosis system. Wavelet energy represents multiple faults while reducing the number of features required, thereby simplifying the complex task of feature extraction. Further, a convolutional autoencoder assists in finding more distinguishing fault features from the wavelet energy, improving the diagnosis task performed with an extreme learning machine (ELM). The proposed method is validated on two vibration datasets, and good results are achieved. The effect of the autoencoder on fault diagnosis performance is compared with principal component analysis (PCA), and its consequences for the size of the ELM architecture are also examined.
Sandeep S. Udmale, Sanjay Kumar Singh

Social Networks, Machine Learning and Mobile Networks


Community Detection in Social Networks Using Deep Learning

Community structure is found everywhere, from simple networks to real-world complex networks. The problem of community detection is to predict clusters of nodes that are densely connected among themselves. Community detection has a wide variety of applications, ranging from recommendation systems, advertising, and marketing to epidemic spreading and cancer detection. The two main existing approaches to community detection, the stochastic block model and the modularity maximization model, focus on building a low-dimensional network embedding to reconstruct the original network structure. However, the mapping to low-dimensional space in these methods is purely linear; since real-world networks contain non-linear structures in abundance, these methods become less practical for real-world networks. Considering the non-linear representation power of deep neural networks, several solutions based on autoencoders have been proposed in the recent literature. In this work, we propose a deep neural network architecture for community detection in which we stack multiple autoencoders and apply parameter sharing. This method of training autoencoders has been successfully applied to link prediction and node classification in the literature. Our enhanced model with modified architecture produces better results than many other existing methods; we tested it on a few benchmark datasets and obtained competitive results.
M. Dhilber, S. Durga Bhavani

Multi-Winner Heterogeneous Spectrum Auction Mechanism for Channel Allocation in Cognitive Radio Networks

Fair allocation of unused licensed spectrum among preferable secondary users (SUs) is an important feature to be supported by Cognitive Radio (CR) for its successful deployment. This paper proposes an auction-theoretic model for spectrum allocation which incorporates different constraints of a CR network and intends to achieve effective utilization of the radio spectrum. We consider heterogeneous channel conditions, which allow SUs to express their preference for channels in terms of bid values. A sealed-bid concurrent bidding policy is implemented which collects bids from SUs while dealing with variation in the availability time of the auctioned channels. The proposed model develops a truthful winner determination algorithm which exploits spectrum reusability while accounting for the dynamics of spectrum opportunities amongst SUs. Allowing multiple non-interfering SUs to utilize a common channel significantly improves spectrum utility; however, every SU acquires at most one channel. The proposed model also develops a pricing algorithm which helps the auctioneer earn revenue from every winning bidder. Simulation results demonstrate the effectiveness of the proposed model in terms of spectrum utilization compared to the general concurrent auction model.
Monisha Devi, Nityananda Sarma, Sanjib K. Deka

A Hybrid Approach for Fake News Detection in Twitter Based on User Features and Graph Embedding

The quest for trustworthy, reliable, and efficient sources of information long predates the era of the internet. However, social media unleashed an abundance of information while neglecting to establish the competent gatekeepers who would ensure information credibility. Consequently, great research efforts have sought to remedy this shortcoming by proposing approaches that enable the detection of non-credible information as well as the identification of sources of fake news. In this paper, we propose an approach for evaluating information sources in Twitter in terms of credibility. Our approach relies on node2vec to extract features from the Twitter followers/followees graph, and also incorporates user features provided by Twitter. This hybrid approach considers both the characteristics of the user and their social graph. The results show that our approach consistently and significantly outperforms existing approaches limited to user features.
Tarek Hamdi, Hamda Slimi, Ibrahim Bounhas, Yahya Slimani

Online Context-Adaptive Energy-Aware Security Allocation in Mobile Devices: A Tale of Two Algorithms

Cryptographic operations involved in securing communications are computationally intensive and contribute to energy drain in mobile devices. Thus, varying the level of security according to the user’s location may provide a convenient solution to energy management. Context-adaptive energy-aware security allocation for mobile devices is modeled in this paper as a combinatorial optimization problem. The goal is to allocate security levels effectively so that user utility is maximized while bounding the maximum energy cost by a constant E. Although the offline version of the problem has been previously studied in the literature, where both the security levels and the locations to which a user may travel are known a priori, this is the first work that formulates and solves an online version of the problem, where the locations may be known a priori but the security levels are revealed only upon reaching the locations. We provide two different algorithms for the solution of this online problem by mapping it to the online multi-choice knapsack problem. We study the competitive ratios of our two algorithms by comparing the solutions they yield to the optimal solution obtained for the corresponding offline problem. We also present simulation experiments on realistic datasets to validate the scalability and efficiency of our approaches (taking on the order of milliseconds for up to 100 locations and providing near-optimal competitive ratios).
Asai Asaithambi, Ayan Dutta, Chandrika Rao, Swapnoneel Roy
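A standard building block for online knapsack problems of this kind is a threshold rule: accept an item only if its value density beats a threshold that grows with the fraction of the budget already consumed. Below is a minimal sketch of that rule, assuming known density bounds L and U; the paper's actual algorithms and their competitive analysis differ in detail, and all numbers are invented.

```python
import math

L_DENSITY, U_DENSITY = 1.0, 10.0  # assumed bounds on value/weight density

def threshold(used_fraction):
    """Classic online-knapsack threshold psi(z), rising from L/e to U
    as the budget fills."""
    return ((U_DENSITY * math.e / L_DENSITY) ** used_fraction
            * L_DENSITY / math.e)

def run_online(items, budget):
    """items: (value, weight) pairs revealed one at a time; keep an item
    if its density beats the current threshold and it still fits."""
    used, value, picked = 0.0, 0.0, []
    for i, (v, w) in enumerate(items):
        if used + w <= budget and v / w >= threshold(used / budget):
            used += w
            value += v
            picked.append(i)
    return picked, value
```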

A Framework Towards Generalized Mid-term Energy Forecasting Model for Industrial Sector in Smart Grid

Smart Grid is emerging as one of the most promising technologies that will provide several improvements over the traditional power grid. Availability is a significant concern for the power sector, and accurate forecasting is essential to achieve an uninterrupted power supply. In the implementation of the future Smart Grid, efficient forecasting plays a crucial role, as the electric infrastructure will work, more and more, by continuously adjusting electricity generation to the total end-use load. Electricity consumption depends on a vast domain of randomly fluctuating influential parameters, and every region has its own set of parameters depending on its demographic, socioeconomic, and climate conditions. Even for the same set of parameters, the degree of influence on power consumption may vary over different sectors, such as residential, commercial, and industrial. It is thus essential to quantify the dependency level for each parameter. We propose a generalized mid-term forecasting model for the industrial sector that accurately predicts the quarterly energy usage of a vast geographic region with a diverse range of influential parameters. The proposed model is built and tested on real-life datasets of industrial users in various states of the U.S.
Manali Chakraborty, Sourasekhar Banerjee, Nabendu Chaki

An Online Low-Cost System for Air Quality Monitoring, Prediction, and Warning

Air quality is degrading in developing countries, and there is an urgent need to monitor and predict air quality online in real time. Although offline air-quality monitoring using hand-held devices is common, online monitoring is still expensive and uncommon, especially in developing countries. The primary objective of this paper is to propose an online low-cost air-quality monitoring, prediction, and warning system (AQMPWS) which monitors, predicts, and warns about air quality in real time. The AQMPWS monitors and predicts seven pollutants, namely PM1.0, PM2.5, PM10, carbon monoxide, nitrogen dioxide, ozone, and sulphur dioxide, along with five weather variables, namely temperature, pressure, relative humidity, wind speed, and wind direction. Its sensors are connected to two microcontrollers in a master-slave configuration; the slave sends the data to an API in the cloud through an HTTP GET request via a GSM module, and a Python-based web application interacts with the API for visualization, prediction, and warning. Results show that the AQMPWS monitors the different pollutants and weather variables within the ranges specified by the pollution control board. In addition, it predicts the values of the pollutants and weather variables for the next 30 minutes, given their current values, using an ensemble model containing a multilayer perceptron and a long short-term memory model. The AQMPWS also warns stakeholders when any of the seven pollutants breaches a pre-defined threshold. We discuss the implications of using the AQMPWS for air-quality monitoring in the real world.
Rishi Sharma, Tushar Saini, Praveen Kumar, Ankush Pathania, Khyathi Chitineni, Pratik Chaturvedi, Varun Dutt

Word2vec’s Distributed Word Representation for Hindi Word Sense Disambiguation

Word Sense Disambiguation (WSD) is the task of extracting the appropriate sense of an ambiguous word in a sentence. WSD is essential for language processing, as it is a prerequisite for determining the closest interpretations in various language-based applications. In this paper, we attempt to exploit word embeddings to solve WSD for Hindi texts. The task involves two steps: creating the word embeddings, and leveraging cosine similarity to identify the appropriate sense of the word. We consider the two most widely used word2vec architectures, the Skip-Gram and Continuous Bag-Of-Words [2] models, to develop the word embeddings, and then choose the sense with the closest proximity as the meaning of the ambiguous word. To demonstrate the effectiveness of the proposed model, we performed experiments on large corpora and achieved an accuracy of nearly 52%.
Archana Kumari, D. K. Lobiyal
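The two-step procedure, averaging the context word vectors and picking the sense with the highest cosine similarity, can be sketched with toy 2-dimensional vectors. A real system would use word2vec embeddings trained on a Hindi corpus (e.g., via gensim); the vectors and sense names below are invented.

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))

def disambiguate(context_vecs, sense_vecs):
    """Average the context word vectors, then pick the sense whose
    embedding is closest in cosine similarity to the centroid."""
    n = len(context_vecs)
    centroid = [sum(v[i] for v in context_vecs) / n
                for i in range(len(context_vecs[0]))]
    return max(sense_vecs, key=lambda s: cosine(sense_vecs[s], centroid))
```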

Text Document Clustering Using Community Discovery Approach

The problem of document clustering is the automatic grouping of text documents into groups of similar documents. Under a supervised setting this problem yields good results, whereas for unannotated data the unsupervised machine learning approaches do not always do so. Algorithms like K-Means clustering are most popular when class labels are not known. The objective of this work is to apply community discovery algorithms from the social network analysis literature to detect the underlying groups in text data.
We model the corpus of documents as a graph, with the distinct non-trivial words of the whole corpus as nodes; an edge is added between two nodes if the corresponding words occur together in at least one common document, and its weight is the number of documents in which the two words co-occur. We apply the fast Louvain community discovery algorithm to detect communities. The challenge is to interpret the communities as classes: if the number of communities obtained is greater than the required number of classes, a merging technique is proposed. Each document is then assigned to the community that shares the maximum number of similar words with it. The main thrust of the paper is a novel approach to document clustering using community discovery algorithms. The proposed algorithm is evaluated on a few benchmark datasets, and we find that it gives competitive results on the majority of them when compared to standard clustering algorithms.
Anu Beniwal, Gourav Roy, S. Durga Bhavani
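The graph construction described above can be sketched directly: nodes are distinct words, and an edge's weight counts the documents in which its two endpoints co-occur. The stopword filter below is a stand-in for the paper's "non-trivial word" selection; Louvain (e.g., networkx's `louvain_communities`) would then be run on the resulting weighted graph.

```python
from itertools import combinations

def cooccurrence_graph(documents, stopwords=frozenset()):
    """Build the word co-occurrence graph: edge weight = number of
    documents in which the two words appear together."""
    weights = {}
    for doc in documents:
        words = {w for w in doc.lower().split() if w not in stopwords}
        for u, v in combinations(sorted(words), 2):
            weights[(u, v)] = weights.get((u, v), 0) + 1
    return weights
```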

Data Processing and Blockchain Technology


An Efficient and Novel Buyer and Seller’s Distributed Ledger Based Protocol Using Smart Contracts

The emergence of distributed ledger systems has made us rethink what can be digitized. Many systems requiring tedious manual labour can be converted to digital systems for various benefits. With this new technology, we have a new means of recording transactions, and the system inherently preserves their integrity. With these benefits, we can design systems that deal with transactions with ease. In this paper, we discuss the advantages of using a distributed ledger system for performing transactions, and propose and implement a buyer-seller protocol for land transactions to demonstrate these advantages.
Priyanka Kumar, G. A. Dhanush, D. Srivatsa, S. Nithin Aakash, S. Sahisnu

Distributed and Lazy Auditing of Outsourced Data

Data outsourcing is useful for ICT organizations that need cost-effective management of resources. Auditing of outsourced data is an essential requirement for a data owner who outsources it to an untrusted third-party storage service provider; it is a posterior mechanism for data verification. In this work, we propose a distributed auditing scheme for verifying the integrity of outsourced data, in contrast to existing centralized auditing schemes. To the best of our knowledge, this is the first such scheme to use distributed auditing. The proposed scheme avoids intensive computing requirements at the data owner or the auditor. We classify the existing auditing schemes into generic classes, analyze them, and compare them with the proposed scheme. The performance of the auditing operation in the proposed scheme is evaluated and verified against analytical results.
Amit Kumar Dwivedi, Naveen Kumar, Mayank Pathela
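The posterior-verification idea can be illustrated with a minimal challenge-response sketch: the owner retains per-block hashes before outsourcing, and an auditor later spot-checks randomly chosen blocks. This omits the distributed auditors and the cryptographic proof machinery of a real scheme; the block size and sample count are invented.

```python
import hashlib
import random

BLOCK = 16  # toy block size in bytes

def block_hashes(data: bytes):
    """Owner computes and keeps per-block hashes before outsourcing."""
    return [hashlib.sha256(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def audit(stored: bytes, hashes, samples=4, seed=None):
    """Auditor challenges a few random block indices; the server answers
    with the blocks, which are checked against the retained hashes."""
    rng = random.Random(seed)
    for i in rng.sample(range(len(hashes)), min(samples, len(hashes))):
        block = stored[i * BLOCK:(i + 1) * BLOCK]  # server's response
        if hashlib.sha256(block).hexdigest() != hashes[i]:
            return False
    return True
```

Sampling keeps the per-audit cost low while still catching large-scale corruption with high probability, which is the "lazy" aspect of such schemes.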

HealthChain: A Secure Scalable Health Care Data Management System Using Blockchain

Electronic Health Records (EHRs) are designed to manage the complexities of multi-institutional and lifetime medical records. As patients move among providers, their data become scattered across different organizations, and patients lose easy access to their past records. The records should be kept in digitized form with adequate protection from unauthorized access. Many schemes address these issues, including central-server-based, cloud-based, and blockchain-based systems; the trending solutions are cloud-based and blockchain-based. Even though cloud-based systems provide much scalability through various mechanisms, they must be trusted as secure, while data may be corrupted with or without the knowledge of the cloud provider. Blockchain technology overcomes this barrier of trusting an agent, but blockchain-based systems lack scalability and require huge storage space. The proposed system, HealthChain, provides a secure health care record management system with scalability and low storage requirements. HealthChain divides the whole system into regions and uses two kinds of blockchains: a private blockchain for intra-regional communication and a consortium blockchain for inter-regional communication. The proposed system uses a consensus mechanism for mining.
T. P. Abdul Rahoof, V. R. Deepthi
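The tamper-evidence that both of HealthChain's chains rely on comes from hash-linking: each block stores the digest of its predecessor, so altering any past record breaks the link. A minimal sketch, with all class and field names invented for illustration (HealthChain would run one such private chain per region plus a consortium chain across regions, with real consensus on top):

```python
import hashlib
import json

class SimpleChain:
    """Minimal hash-linked chain; stands in for either a private regional
    chain or the consortium chain in the HealthChain design (sketch only)."""

    def __init__(self):
        self.blocks = [{"index": 0, "prev": "0" * 64, "data": "genesis"}]

    def _digest(self, block):
        # Canonical JSON so the digest is independent of dict ordering.
        return hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()

    def append(self, data):
        prev = self._digest(self.blocks[-1])
        self.blocks.append({"index": len(self.blocks), "prev": prev, "data": data})

    def verify(self):
        # Every block must reference the digest of the block before it.
        return all(
            self.blocks[i]["prev"] == self._digest(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )

regional = SimpleChain()  # intra-regional (private) chain
regional.append({"patient": "p1", "record": "visit summary"})
assert regional.verify()
```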

Transcript Management Using Blockchain Enabled Smart Contracts

Blockchain has demonstrated immense potential and is poised to act as one of the key catalysts of the fourth industrial revolution. Smart contracts are programmable transactions that enable and enforce complex transactions on a Blockchain. Blockchain-enabled smart contracts are the driving force behind ubiquitous applications of Blockchain across fields such as health, education, and banking. In this paper, we discuss one such use case of Blockchain in the field of education, called Transcript Management. Whenever students move from one institute to another, they are expected to submit an official transcript from the previous institute. The conventional method currently deployed is highly inefficient, lacks transparency, and leaves data susceptible to forgery or tampering. This paper presents a new method involving Blockchain-enabled smart contracts for sharing and managing transcripts across different institutions. We propose algorithms for the different modules, namely transcript issuance, verification, and acceptance; the analysis and implementation results of these modules assure both the security and the efficiency required for this application.
Kirtan Patel, Manik Lal Das
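The issuance/verification flow can be sketched at its simplest: the issuing institute publishes an authenticated digest of the transcript, and the accepting institute recomputes the digest and compares. This mock uses an HMAC as a stand-in for the on-chain record and the institute's signature; the paper's modules run as smart contracts with proper key management, none of which is shown here.

```python
import hashlib
import hmac

ISSUER_KEY = b"demo-key"  # stand-in for the issuing institute's signing key

def issue(transcript: bytes) -> str:
    """Issuance: the institute records an authenticated digest (mocked)."""
    return hmac.new(ISSUER_KEY, transcript, hashlib.sha256).hexdigest()

def verify(transcript: bytes, record: str) -> bool:
    """Verification: the accepting institute recomputes and compares."""
    return hmac.compare_digest(issue(transcript), record)

doc = b"Student: A; GPA: 3.8"
record = issue(doc)
assert verify(doc, record)                       # authentic transcript accepted
assert not verify(b"Student: A; GPA: 4.0", record)  # tampering detected
```

Because the digest is keyed, a student cannot forge a record for an altered transcript without the institute's key; on a real blockchain the contract address and transaction history provide the equivalent trust anchor.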

Short Papers


Identifying Reduced Features Based on IG-Threshold for DoS Attack Detection Using PART

Benchmark datasets are available to test and evaluate intrusion detection systems. These datasets are characterized by high volume and the curse of dimensionality. Feature reduction plays an important role in a machine-learning-based intrusion detection system, identifying which features are relevant or irrelevant to the classification. This paper proposes a method for identifying a reduced feature set for the classification of Denial of Service (DoS) attacks. The feature reduction technique is based on Information Gain (IG) and a Threshold Limit Value (TLV). The proposed approach detects DoS attacks using the reduced feature set with the PART classifier, and is implemented and tested on the CICIDS 2017 dataset. The experiments show improved performance with the reduced feature set; finally, the performance of the proposed system is compared with that of the original feature set.
Deepak Kshirsagar, Sandeep Kumar
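The IG-with-threshold idea can be sketched directly from the definitions: IG(feature) = H(labels) − Σ_v p(v)·H(labels | feature = v), and a feature is kept only if its IG exceeds the threshold. The toy data and the threshold value below are invented for illustration; the paper applies this to CICIDS 2017 features before the PART classifier.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(labels) in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(column, labels):
    """IG = H(labels) - sum over feature values v of p(v) * H(labels | v)."""
    n = len(labels)
    cond = 0.0
    for v in set(column):
        subset = [y for x, y in zip(column, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def select(features, labels, tlv):
    """Keep indices of features whose information gain exceeds the TLV."""
    return [i for i, col in enumerate(features) if info_gain(col, labels) > tlv]

# Toy data: feature 0 predicts the label perfectly, feature 1 is pure noise.
X = [[0, 0, 1, 1], [0, 1, 0, 1]]
y = [0, 0, 1, 1]
assert select(X, y, tlv=0.5) == [0]  # only the informative feature survives
```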

Uniform Circle Formation by Swarm Robots Under Limited Visibility

This paper proposes a distributed algorithm for uniform circle formation by multiple autonomous, asynchronous, oblivious mobile swarm robots. Each robot repeatedly executes a look-compute-move cycle. All robots agree upon a common origin and axes. Eventually, a uniform circle of a given radius and center is formed without any collision or deadlock.
Moumita Mondal, Sruti Gan Chaudhuri
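The look-compute-move structure can be illustrated with a deliberately simplified round: each robot observes positions, computes an evenly spaced slot on the target circle around the agreed origin, and moves there. This sketch is synchronous and gives every robot full visibility, so it sidesteps exactly the asynchrony, obliviousness, and limited-visibility constraints that make the paper's algorithm non-trivial.

```python
import math

def step(positions, radius):
    """One simplified synchronous look-compute-move round."""
    n = len(positions)
    # Look: order robots by their current angle around the agreed origin.
    order = sorted(range(n),
                   key=lambda i: math.atan2(positions[i][1], positions[i][0]))
    targets = [None] * n
    for slot, i in enumerate(order):
        # Compute: the i-th evenly spaced slot on the target circle.
        theta = 2 * math.pi * slot / n
        # Move: go to the computed point (rigid, collision-free here by fiat).
        targets[i] = (radius * math.cos(theta), radius * math.sin(theta))
    return targets

pts = [(1.0, 0.2), (-0.5, 0.8), (0.1, -1.3)]
final = step(pts, radius=2.0)
# All robots now sit, uniformly spaced, on the circle of radius 2.
assert all(abs(math.hypot(x, y) - 2.0) < 1e-9 for x, y in final)
```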

Histopathological Image Classification by Optimized Neural Network Using IGSA

Histopathological image classification is a vital application in medical diagnosis, and neural networks have been successful at image classification tasks. However, finding the optimal parameter values of a neural network is still a challenging task. To that end, this paper considers a two-layer neural network optimized with the intelligent gravitational search algorithm (IGSA). The optimized two-layer neural network is then applied to classify histopathological tissue as healthy or inflamed. The proposed method is validated on a publicly available tissue dataset, namely that of the Animal Diagnostic Laboratory (ADL). The experimental results confirm the better performance of the proposed method against state-of-the-art methods in terms of seven performance measures: recall, specificity, precision, false negative rate (FNR), accuracy, F1-score, and G-mean.
Himanshu Mittal, Mukesh Saraswat, Raju Pal
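The gravitational-search idea behind the optimizer can be sketched generically: candidate solutions are "agents" that attract one another with forces proportional to fitness-derived masses, under a gravity constant that decays over time. The sketch below is a plain GSA minimizing a stand-in objective, not the paper's intelligent variant; in the paper, the objective would be the two-layer network's classification error as a function of its weight vector. All parameter values are illustrative.

```python
import math
import random

def gsa(fitness, dim, agents=20, iters=100, g0=100.0, seed=0):
    """Generic gravitational search (sketch): minimise `fitness` over R^dim."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    V = [[0.0] * dim for _ in range(agents)]
    best, best_f = None, float("inf")
    for t in range(iters):
        f = [fitness(x) for x in X]
        worst_f, b_f = max(f), min(f)
        if b_f < best_f:
            best_f, best = b_f, X[f.index(b_f)][:]
        g = g0 * math.exp(-20 * t / iters)        # gravity decays over time
        # Masses: better (lower) fitness -> heavier agent, normalised to sum 1.
        m = [(worst_f - fi) / (worst_f - b_f + 1e-12) for fi in f]
        s = sum(m) + 1e-12
        M = [mi / s for mi in m]
        for i in range(agents):
            acc = [0.0] * dim
            for j in range(agents):
                if i == j:
                    continue
                r = math.dist(X[i], X[j]) + 1e-12
                for d in range(dim):
                    # Randomised gravitational pull of agent j on agent i.
                    acc[d] += rng.random() * g * M[j] * (X[j][d] - X[i][d]) / r
            for d in range(dim):
                V[i][d] = rng.random() * V[i][d] + acc[d]
                X[i][d] += V[i][d]
    return best, best_f

# Stand-in objective (sphere function, minimum 0 at the origin) in place of
# the network's classification error.
w, loss = gsa(lambda x: sum(v * v for v in x), dim=3)
```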

