
About this book

This book constitutes the refereed proceedings of the Second International Conference on Ubiquitous Communications and Network Computing, UBICNET 2019, held in Bangalore, India, in February 2019. The 19 full papers were selected from 52 submissions and are arranged in sessions on security and energy-efficient computing, software-defined networks, cloud computing and Internet of Things applications, and advanced communication systems and networks.



Performance Analysis of Femtocell on Channel Allocation

Channel assignment is an important design criterion for femtocells in cellular systems. Femtocell access mechanisms are classified into three classes: open access, closed access, and hybrid access. Additionally, the subscribers in a femtocell network are divided into two groups, the subscriber group (SG) and the non-subscriber group (NSG), with some channels normally reserved for the SG. In this paper, five channel assignment models are discussed and analyzed. Performance parameters such as the blocking probability are derived and analyzed for each case. Furthermore, the bit error rate (BER) and the capacity under different path-loss conditions are also analyzed. We also identify the optimum percentage of channels reserved for the SG that maximizes performance. The results show that increasing the offered traffic increases the blocking probability, while increasing the number of base stations decreases the BER and increases the capacity.
Mahesh Lalapeta, Sagar Basavaraju, Navin Kumar
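The paper's five channel-assignment models are not detailed in the abstract, but the reported relationship between offered traffic and blocking probability can be illustrated with the classic Erlang-B formula, the standard baseline for a pool of channels with no queueing; the traffic values below are illustrative, not the paper's:

```python
from math import factorial

def erlang_b(offered_traffic, channels):
    """Blocking probability for `offered_traffic` Erlangs on `channels` channels.

    Erlang-B: B = (A^C / C!) / sum_{k=0}^{C} A^k / k!
    """
    numerator = offered_traffic ** channels / factorial(channels)
    denominator = sum(offered_traffic ** k / factorial(k)
                      for k in range(channels + 1))
    return numerator / denominator

# Blocking probability rises with offered traffic, as the paper reports.
print(round(erlang_b(2.0, 5), 4))  # light load
print(round(erlang_b(8.0, 5), 4))  # heavy load, much higher blocking
```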

System Level Performance Analysis of Designed LNA and Down Converter for IEEE 802.11ad Receiver

A low noise amplifier (LNA) operating at millimeter wave (mmWave) frequency and a down converter suitable for an IEEE 802.11ad receiver are designed in a 65 nm radio frequency (RF)-CMOS low leakage (LL) process. These blocks are integrated in a super-heterodyne receiver architecture and the overall performance of the receiver is analyzed. The designed LNA achieves 20 dB of gain, a 1.7 dB noise figure (NF) and an IIP3 of −7.78 dBm. A modified Gilbert cell topology is used for the down converter, which gives a conversion gain of 1.5 dB from 57 GHz to 66 GHz, an input P1dB of −7.8 dBm and an IIP3 of 8.78 dBm with RF at 57.24 GHz, from a 1.2 V supply and a 1 Vpp local oscillator (LO) drive. The obtained IIP3 is 10.08 dB higher than that of the conventional Gilbert cell and offers an error vector magnitude (EVM) improvement of −23 dB at the receiver. This work provides RF designers with a comprehensive pre-silicon understanding at the system and circuit level.
S. Pournamy, Navin Kumar
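A system-level analysis of a cascaded LNA and mixer typically starts from the Friis noise formula. As a minimal sketch: the LNA gain (20 dB) and NF (1.7 dB) are taken from the abstract, while the mixer NF of 12 dB is an assumed illustrative value that the abstract does not quote:

```python
from math import log10

def db_to_lin(db):
    return 10 ** (db / 10)

def cascade_nf_db(nf1_db, gain1_db, nf2_db):
    """Friis formula for two cascaded stages:
    F_total = F1 + (F2 - 1) / G1, with all quantities linear."""
    f_total = db_to_lin(nf1_db) + (db_to_lin(nf2_db) - 1) / db_to_lin(gain1_db)
    return 10 * log10(f_total)

# LNA gain/NF from the abstract; mixer NF (12 dB) is an assumed value.
print(round(cascade_nf_db(1.7, 20.0, 12.0), 2))
```

Because the LNA's 20 dB gain divides the mixer's noise contribution by 100, the cascade NF stays close to the LNA's own 1.7 dB.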

Optimizing Multi Gateway Wireless Mesh Networks for Throughput Improvement

This paper applies the concept of subnet virtualization to an edge network comprising a multi-gateway Wi-Fi mesh. A necessary and sufficient condition for improving the throughput of a Wi-Fi mesh network (WMN) is proposed, and a holistic approach to optimizing the mesh topology through fair distribution of gateways (GWs) is developed. Subnets (partitions) are created within the mesh such that each partition has one GW and approximately the same number of mesh routers. An overload estimation process then indicates when the WMN is overloaded and a Load Management Scheme (LMS) has to be applied. A steady-state load equation is derived from the current processing load of each GW, and a stability condition is defined to avoid triggering a chain of load transitions from one neighbor GW to another. Simulation studies presented in the paper show that after equipping a conventional WMN with the proposed LMS, the throughput more than doubled, the average packet delay decreased by 22%, and the number of packets dropped decreased by 90%.
Soma Pandey, Govind Kadambi, Vijay Pande
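The paper's steady-state load equation is not reproduced in the abstract; as a rough sketch of the overload-estimation and stability idea, the snippet below detects an overloaded gateway and shifts excess load to the least-loaded neighbour only when the receiver stays within capacity, so one transfer cannot trigger a chain of further transfers (capacities and load figures are invented for illustration):

```python
CAPACITY = 100  # per-gateway processing capacity, in assumed load units

def transfer_load(loads, src, dst, amount):
    """Move `amount` of load from gateway `src` to `dst` only if `dst`
    stays within capacity -- a simple stability condition that avoids
    triggering a chain of load transitions between neighbour gateways."""
    if loads[dst] + amount <= CAPACITY:
        loads[src] -= amount
        loads[dst] += amount
        return True
    return False

loads = {"GW1": 120, "GW2": 60, "GW3": 95}
if loads["GW1"] > CAPACITY:                 # overload estimation step
    least = min(loads, key=loads.get)       # pick the least-loaded neighbour
    transfer_load(loads, "GW1", least, loads["GW1"] - CAPACITY)
print(loads)
```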

Direction Finding Capability in Bluetooth 5.1 Standard

Bluetooth is a standard for short-range wireless communication that uses low-power radio frequency at low cost and consumes very little energy, making it interoperable across devices. The Bluetooth Core Specification provided by the Bluetooth Special Interest Group (SIG) adds a direction-finding feature to the Low Energy (LE) standard. This feature enables a tracker to locate a target by estimating the relative angle between tracker and target, using either the Angle of Arrival (AoA) or the Angle of Departure (AoD) method with switching among multiple antennas. To support this feature, the packet structure in the LE physical layer is modified: fields of the LE uncoded packets such as the Protocol Data Unit (PDU) header are changed, and an additional field known as the Constant Tone Extension (CTE) is added to the LE packet structure. To implement these ideas, we generate a portion of the LE packets in the National Instruments (NI) Bluetooth measurement toolkit, which is used for testing and measurement of Bluetooth RF signals. The results show that the CTE, which is needed for direction finding, is successfully incorporated in the Bluetooth 5.1 LE packet structure.
Nitesh B. Suryavanshi, K. Viswavardhan Reddy, Vishnu R. Chandrika
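The AoA method estimates the angle from the phase difference the CTE tone produces across two antennas. A minimal sketch of the textbook relation θ = arcsin(λ·Δφ / (2π·d)), with a half-wavelength spacing assumed for the 2.4 GHz band:

```python
from math import asin, pi, degrees

def aoa_degrees(delta_phi, spacing, wavelength=0.125):
    """Angle of arrival from the phase difference `delta_phi` (radians)
    between two antennas `spacing` metres apart; the default wavelength
    is ~0.125 m for the 2.4 GHz band."""
    return degrees(asin(wavelength * delta_phi / (2 * pi * spacing)))

# Half-wavelength antenna spacing (0.0625 m).
print(aoa_degrees(0.0, 0.0625))            # broadside arrival: 0 degrees
print(round(aoa_degrees(pi / 2, 0.0625)))  # 30 degrees
```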

An Investigation of Transmission Properties of Double-Exponential Pulses in Core-Clad Optical Fibers for Communication Application

In this paper, a comparative analysis of the propagation of double-exponential and Gaussian ultra-short pulses in fused-silica core-clad optical fibers is presented. The study takes into consideration the non-linear propagation parameters from the nonlinear Schrödinger equation for silica fiber. The analysis has been carried out for single-mode and multi-mode fibers to study the effects of varying the pulse parameters. It is observed that double-exponential pulses offer a bandwidth efficiency ~23% higher than Gaussian pulses and may be useful as femtosecond-laser pulse shapes. Double-exponential pulses are found to resist dispersive effects better than Gaussian pulses at longer distances and to retain higher power levels for higher input powers, while Gaussian pulses continue to decay. Finally, the rapid decay of double-exponential pulses may make them suitable for time-and-wavelength-division-multiplexed passive optical network (TWDM-PON) applications in optical communication.
Anurag Chollangi, Nikhil Ravi Krishnan, Kaustav Bhowmick
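The paper's exact pulse parameterization is not given in the abstract; the normalized envelopes below are a common textbook form, sketched only to show how the two shapes differ: the double-exponential drops faster near its peak but carries heavier tails than the Gaussian:

```python
from math import exp

def gaussian(t, width):
    return exp(-t ** 2 / (2 * width ** 2))

def double_exponential(t, width):
    return exp(-abs(t) / width)

# Near the peak the double-exponential falls off faster than the
# Gaussian, while deep in the tails the Gaussian decays faster.
T = 1.0
print(double_exponential(0.5, T) < gaussian(0.5, T))  # True
print(double_exponential(5.0, T) > gaussian(5.0, T))  # True
```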

Hybrid Energy Efficient and QoS Aware Algorithm to Prolong IoT Network Lifetime

The Internet of Things (IoT) consists of a large number of energy-constrained devices deployed to improve the efficiency of various industrial applications. It is essential to reduce the energy consumption of every device in an IoT network without compromising quality of service (QoS). Here, the problem of balancing QoS provisioning against energy efficiency for industrial IoT applications is considered. To this end, a multi-objective optimization problem is formulated to estimate the outage performance and the network lifetime. The proposed Hybrid Energy Efficient and QoS Aware (HEEQA) algorithm combines quantum particle swarm optimization (QPSO) with an improved non-dominated sorting genetic algorithm (NSGA) to achieve energy balance among the devices; the MAC-layer parameters are then tuned to further reduce device energy consumption. NSGA is applied to solve the multi-objective optimization problem, and QPSO is used to find the best cooperative combination. Simulation results show that HEEQA attains a better balance between energy efficiency and QoS provisioning, minimizing energy consumption, delay, and transmission overhead while maximizing network lifetime, throughput, and delivery ratio.
N. N. Srinidhi, Jyothi Lakshmi, S. M. Dilip Kumar
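At the core of NSGA-style multi-objective optimization is Pareto dominance and the extraction of the non-dominated front. A minimal sketch (the objective values below are invented; the paper's objectives are outage performance and network lifetime):

```python
def dominates(a, b):
    """True if `a` Pareto-dominates `b` (all objectives to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(population):
    """The non-dominated (first) front, as in NSGA-style sorting."""
    return [p for p in population if not any(dominates(q, p) for q in population)]

# Objectives per candidate: (energy consumption, delay), both minimized.
pop = [(3, 5), (2, 7), (4, 4), (5, 5), (2, 6)]
print(first_front(pop))  # [(3, 5), (4, 4), (2, 6)]
```

No member of the front can be improved in one objective without worsening another, which is exactly the trade-off HEEQA navigates between energy and QoS.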

Data Integration and Management in Indian Poultry Sector

The poultry sector, both commercial and smallholder, contributes significantly to the financial stability of the poor and to the economy of the nation. Record keeping in this sector not only ensures a tight financial check on inventories but also helps in analyzing the factors affecting the quality and well-being of poultry. World poultry production is estimated to increase manyfold in the coming years, and with it the associated poultry health-care analysis, inventory and infrastructure, climate analysis, and other related activities. With the rise in population and the growing demand for poultry products, manual record keeping and manual analysis of such huge, varied, and dynamic data would become tedious and time-consuming, delaying necessary decisions. This sector therefore needs software-managed data analysis and data integration to ensure timely action and accurate prediction. We present a framework for the integration and analysis of poultry data that supports easy prediction, analysis, and decision making in this sector, thus increasing the profits earned.
Susmitha Shankar, S. Thangam

CA-RPL: A Clustered Additive Approach in RPL for IoT Based Scalable Networks

Applications of the Internet of Things (IoT) span from industry to agriculture and from smart cities to smart healthcare. Wireless sensors play a major role in making these applications work as desired: these tiny, lightweight, battery-powered sensors enable even the smallest devices to communicate in an IoT environment. Such applications require hundreds to thousands of nodes, and routing in these energy-constrained networks becomes a challenging task, so scalability is one of the major challenges in IoT. The Routing Protocol for Low Power and Lossy Networks (RPL), developed by the Routing Over Low Power and Lossy Networks (ROLL) group, meets the QoS requirements of various IoT applications. However, the existing versions of RPL fail to provide good results when the number of nodes in the network increases. Our proposed protocol, Clustered Additive RPL (CA-RPL), uses a weight-based clustering technique to maintain efficiency in a scalable network. In addition, the path for data transmission is selected by considering three parameters: expected transmission count (ETX), hop count, and available energy. It is observed that the proposed approach outperforms other approaches in terms of packet delivery ratio, end-to-end delay, and energy consumption in the network.
Soumya Nandan Mishra, Suchismita Chinara
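The additive combination of ETX, hop count, and available energy can be sketched as a weighted path cost used to pick the best parent; the weight values and candidate figures below are illustrative assumptions, not CA-RPL's actual parameters:

```python
def path_cost(etx, hops, residual_energy, weights=(0.4, 0.3, 0.3)):
    """Additive cost combining ETX, hop count and depleted energy
    (1 - residual fraction); lower is better. The weights are
    illustrative assumptions, not the paper's values."""
    w_etx, w_hop, w_nrg = weights
    return w_etx * etx + w_hop * hops + w_nrg * (1.0 - residual_energy)

# Candidate parents: (ETX, hop count, residual energy fraction).
candidates = {"A": (1.2, 3, 0.9), "B": (1.1, 5, 0.4), "C": (2.5, 2, 0.8)}
best = min(candidates, key=lambda n: path_cost(*candidates[n]))
print(best)  # A
```

Candidate A wins despite not having the lowest ETX or fewest hops, because the metric trades all three factors off additively.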

Implementation of Trilateration Based Localization Approach for Tree Monitoring in IoT

The Internet of Things (IoT), a widely used term for this progression in innovation, is a system of physical devices embedded with electronic motes, sensors, and software that enable these devices to exchange data through the internet. To localize devices and monitor environmental conditions in real time, a system may use the Global Positioning System (GPS), but GPS consumes considerable energy and thus strains resource-constrained devices. In this paper, we concentrate on developing a GPS-independent localization algorithm for real-time tree monitoring. Our proposed algorithm comprises two sub-algorithms: Received Signal Strength Indicator (RSSI) ranging and trilateration. Localization relies on routing data, which consumes energy, so an energy-efficient routing protocol is required. For this purpose we use the Routing Protocol for Low-Power and Lossy Networks (RPL), developed specifically for low-power and lossy networks. The whole system is simulated in Contiki-OS with the built-in COOJA simulator.
Naren Tada, Tejas Patalia, Shivam Trivedi
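The two sub-algorithms chain together as follows: RSSI is converted to a distance with the standard log-distance path-loss model, and three anchor distances are then trilaterated by subtracting circle equations to obtain a 2x2 linear system. The reference RSSI at 1 m and the path-loss exponent below are deployment-specific assumptions:

```python
def rssi_to_distance(rssi, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model; the 1 m reference RSSI and the
    exponent are deployment-specific assumptions."""
    return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exp))

def trilaterate(p1, p2, p3, d1, d2, d3):
    """Position from three anchors and distances: subtracting the circle
    equations pairwise yields a 2x2 linear system solved by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Anchors at known positions; exact distances to a node at (2, 3).
print(trilaterate((0, 0), (10, 0), (0, 10), 13**0.5, 73**0.5, 53**0.5))
```

With noisy RSSI-derived distances the circles do not intersect in a single point, which is why RSSI calibration matters in practice.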

Expert System Design for Automated Prediction of Difficulties in Securing Airway in ICU and OT

Maintaining an uninterrupted patient respiratory passage (airway) and unhindered breathing is the primary duty of an anesthesiologist or other physician involved in patient care during emergency trauma or surgical procedures in the Intensive Care Unit (ICU) and Operation Theatre (OT). The anesthesiologist should ensure full control over the patient's airway, either by passing an endotracheal tube or by using other similar devices. Unanticipated difficulties in airway management are the most important contributors to airway-related mishaps and, if not managed effectively, may lead to death or permanent bodily harm due to inadequate oxygenation. Recent survey reports reveal that 53% of anaesthetic deaths are either airway- or respiratory-related, and the incidence of difficult airway among patients has been estimated at 1.1% to 3.8%. This paper aims at identifying all the critical risk parameters contributing to a difficult airway and subsequently developing a framework to automate the prediction of difficult airways well in advance. The authors have designed an expert system prototype that predicts difficulties in airway management and suggests appropriate remedies using machine learning algorithms.
D. K. Sreekantha, H. K. Rachana, Sripada G. Mehandale, Mohammed Javed, K. V. S. S. S. S. Sairam

A Comparative Study on Load Balancing Algorithms in Software Defined Networking

The advent of big data, cloud computing, and the IoT has significantly increased the traffic on servers in traditional networks, which are normally non-programmable, complex to manage, highly expensive, and have a control plane tightly coupled with the data plane. To overcome these issues, the emerging technology of software-defined networking (SDN) decouples the data plane from the control plane and makes the network fully programmable. Because SDN controllers are programmable, efficient load balancing algorithms must ensure effective management of resources according to clients' requests. Based on the parameters of throughput, transaction rate, and response time, a qualitative comparison between SDN load balancing algorithms is made to determine which produces the best results.
Neha Joshi, Deepak Gupta
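The abstract does not list which algorithms are compared; two common baselines in such studies are round robin and least connections. A minimal sketch contrasting them (server names and connection counts are invented):

```python
from itertools import cycle

servers = ["s1", "s2", "s3"]

# Round robin ignores server state and rotates through the pool.
rr = cycle(servers)
rr_order = [next(rr) for _ in range(5)]
print(rr_order)  # ['s1', 's2', 's3', 's1', 's2']

# Least connections sends each request to the least-loaded server.
active = {"s1": 4, "s2": 1, "s3": 2}

def least_connections():
    target = min(active, key=active.get)
    active[target] += 1
    return target

lc_order = [least_connections() for _ in range(3)]
print(lc_order)  # ['s2', 's2', 's3']
```

Round robin is cheapest to compute, while least connections adapts to uneven load, which is the kind of trade-off the throughput and response-time comparison surfaces.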

CloudSDN: Enabling SDN Framework for Security and Threat Analytics in Cloud Networks

Software-Defined Networking (SDN) and Network Function Virtualization (NFV) are recent network paradigms, and OpenStack is a widely deployed cloud management platform. The goal of this research is to integrate SDN and NFV into an OpenStack-based cloud platform, to draw practical insights into their interplay, and to solve problems in cloud network orchestration and application security. We review key prior work at the intersection of SDN, NFV, and cloud computing. OpenStack-based cloud deployments integrate SDN through the Neutron module, which has major practical limitations with respect to scalability, security, and resiliency. Aiming at these critical problems and at overall cloud security, we propose an SDN scheme that distributes its own Network Function (NF) agents across the data plane and deploys applications across the control plane, centralizing network management and orchestration. A novel security scheme for cloud networks, "CloudSDN", enabling an SDN framework for cloud security, is proposed and implemented, addressing several well-known security issues in cloud networks. We demonstrate the efficacy of the attack detection and mitigation system under Distributed Denial of Service (DDoS) attacks on the cloud infrastructure as well as on downstream servers. We also present a comparative study with legacy security approaches and classical SDN implementations, and share our future perspectives on exploiting SDN features such as the global view, distributed control, network abstractions, and programmability, and on mitigating its security issues.
Prabhakar Krishnan, Krishnashree Achuthan

NB-FTBM Model for Entity Trust Evaluation in Vehicular Ad Hoc Network Security

A vehicular ad hoc network (VANET) is developed for exchanging valuable information among vehicles, so the reliability of the vehicle sending the data must be ensured. Trustworthiness can be achieved by two methods: establishing entity trust or establishing data trust. This research focuses on evaluating the trustworthiness of the sender entity (vehicle). The paper proposes NB-FTBM, a Naive Bayesian Fuzzy Trust Boundary Model for determining entity trust. NB-FTBM contains two modules, Entity Identification (E-ID) and Entity Reputation (E-RP), and quickly computes the entity identification score and entity reputation score of an entity. These scores fall on either side of a trust boundary line, and based on this boundary level the receiving entity makes the necessary decision about the information received. The main advantage of this approach is that it combines the benefits of a Naive Bayesian classifier with fuzzy logic. The proposed trust model evaluates the trustworthiness metrics accurately.
S. Sumithra, R. Vadivel

Threshold Cryptography Based Light Weight Key Management Technique for Hierarchical WSNs

Secure communication among sensors is strongly needed to prevent malicious activity. Security is a major issue in self-organized, infrastructure-less networks with limited resources such as energy, transmission range, and processing power. Network overhead must be reduced to improve network performance, and the size of the secret key communicated among the sensor nodes also affects it. The proposed Light Weight Threshold Key Management Scheme (LWKMS) reduces the size of the secret key to be communicated. It reduces network resource utilization when sharing the secret among the sensor nodes and provides efficient security even when keys are compromised by an attacker node. Simulation results show that the proposed lightweight scheme incurs less overhead and less energy consumption than the existing Group Key Management Scheme (GKMS).
K. Hamsha, G. S. Nagaraja
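LWKMS's internals are not given in the abstract, but threshold key management schemes generally build on Shamir's (k, n) secret sharing: the secret survives the compromise of up to k−1 shares, and any k shares reconstruct it. A minimal sketch over a small prime field (real deployments use much larger primes):

```python
import random

P = 2 ** 13 - 1  # small prime field (8191); real deployments use large primes

def make_shares(secret, k, n):
    """Shamir (k, n) sharing: evaluate a random degree-(k-1) polynomial
    with constant term `secret` at x = 1..n, modulo the prime P."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(1234, k=2, n=3)
print(reconstruct(shares[:2]))  # any 2 of the 3 shares recover 1234
```

Each share is only as large as one field element, which is the kind of size reduction a lightweight scheme targets in resource-constrained sensor networks.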

Denoising Epigraphical Estampages Using Nested Run Length Count

Denoising in epigraphical document analysis helps in building recognition systems for fast and automatic processing. It is challenging, however, because the stone texture forms a complex background in the input samples. In this paper, nested run-length counting with varying block sizes of 3×3, 5×5, and 7×7 is applied. The computation is carried out on the neighboring pixels of the point of interest and, based on the count value, discloses whether it is part of the inscribed script or the background. If it is part of the background, the point of interest is set to the background value; otherwise it is set to white. The method is tested on 100 samples of epigraphical estampages collected from the Archaeological Survey of India. A comparative study contrasts the output of the proposed method with nonlinear filters such as the median and Wiener filters. Human visual evaluation indicates that the proposed method is better than the median and Wiener filters, and quality measures such as peak signal-to-noise ratio and the structural similarity index are computed on the sample outputs for the various filters and the proposed method.
P. Preethi, K. Praneeth Kumar, M. Sumukha, H. R. Mamatha
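The neighborhood-count idea can be sketched as follows, using a majority threshold in place of the paper's nested run-length criterion, which the abstract does not spell out; block size and threshold are therefore assumptions:

```python
def classify_pixel(image, r, c, block=3, threshold=None):
    """Count dark (script) pixels in a block x block neighbourhood of
    (r, c); keep the pixel dark only if the count clears a majority
    threshold, otherwise set it to white (255). The majority rule is an
    assumption -- the paper's exact count criterion is not given here."""
    half = block // 2
    if threshold is None:
        threshold = (block * block) // 2
    count = 0
    for i in range(r - half, r + half + 1):
        for j in range(c - half, c + half + 1):
            if 0 <= i < len(image) and 0 <= j < len(image[0]):
                count += image[i][j] == 0
    return 0 if count > threshold else 255

# 0 = dark script pixel, 255 = background.
speck = [[255, 255, 255], [255, 0, 255], [255, 255, 255]]
print(classify_pixel(speck, 1, 1))   # 255: isolated speck removed

stroke = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
print(classify_pixel(stroke, 1, 1))  # 0: part of a solid stroke, kept
```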

Improving Transition Probability for Detecting Hardware Trojan Using Weighted Random Patterns

Computer system security has traditionally concerned the security of software or the information processed, with the underlying hardware used for information processing treated as trusted. Emerging attacks from Hardware Trojans (HTs) violate this root of trust. These attacks take the form of malicious modification of electronic hardware at different stages and pose a major security concern for the electronics industry. An adversary can mount an HT on a net of the circuit that has a low transition probability. In this paper, improving the transition probability by using test points and weighted random patterns is proposed; the improvement in transition probability can accelerate the detection of HTs. Weighted random number generator techniques are implemented to improve the transition probability, and the technique is evaluated on ISCAS'85 benchmark circuits using Python and the Synopsys TetraMAX tool.
Kshirod Chandra Mohapatra, M. Priyatharishini, M. Nirmala Devi
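The effect of weighted patterns on a low-activity net can be illustrated with the standard signal-probability model: for a net with probability p of being logic 1 under independent random patterns, the transition probability is p(1−p). A 4-input AND gate and the 0.8 input weighting below are illustrative choices, not the paper's circuits:

```python
def and_gate_prob(p_inputs):
    """Signal probability (prob. of logic 1) at the output of an AND gate
    with independent inputs."""
    out = 1.0
    for p in p_inputs:
        out *= p
    return out

def transition_probability(p_one):
    # Probability the net toggles between consecutive random patterns.
    return p_one * (1.0 - p_one)

# Uniform random patterns: a 4-input AND output is rarely 1, so its
# transition probability is low -- a good place to hide a Trojan trigger.
uniform = and_gate_prob([0.5] * 4)           # 0.0625
print(transition_probability(uniform))

# Weighted random patterns bias the inputs toward 1, raising the
# output's signal probability and hence its transition probability.
weighted = and_gate_prob([0.8] * 4)
print(transition_probability(weighted))
```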

Prefix Tree Based MapReduce Approach for Mining Frequent Subgraphs

Frequent subgraphs are subgraphs that appear a number of times greater than or equal to a user-defined threshold. Many algorithms assume that an Apriori-based approach yields efficient results for finding frequent subgraphs, but in our research we found that the Apriori algorithm lacks scalability with respect to main memory. Frequent subgraph mining using the Apriori algorithm with an FS-tree, a prefix-tree data structure, uses an adjacency-list representation and runs in two phases. In the first phase, the Apriori algorithm finds frequent two-edge subgraphs; in the second phase, the FS-tree algorithm searches for all frequent subgraphs starting from the frequent two-edge subgraphs. Scanning the dataset for every candidate is the drawback of the Apriori algorithm, so the Apriori algorithm with the FS-tree is used to avoid multiple scans. This algorithm also assumes that the dataset fits in memory. In this paper, we propose a parallel MapReduce-based frequent subgraph mining technique performed in a distributed environment on the Hadoop framework. Experiments validate the efficiency of the algorithm for generating frequent subgraphs in large graph datasets.
Supriya Movva, Saketh Prata, Sai Sampath, R. G. Gayathri
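The MapReduce decomposition of support counting can be sketched in memory: a mapper emits each two-edge candidate once per graph, and a reducer sums the per-graph counts and filters by the minimum support. This is a simplified stand-in for the Hadoop phases, not the paper's implementation:

```python
from collections import Counter
from itertools import combinations

def map_phase(graph_edges):
    """Emit every two-edge candidate subgraph of one graph (as a
    canonical frozenset), once per graph -- a simplified mapper."""
    return {frozenset(pair) for pair in combinations(sorted(graph_edges), 2)}

def reduce_phase(mapped, min_support):
    """Sum per-graph occurrences and keep candidates meeting min_support."""
    counts = Counter()
    for candidates in mapped:
        counts.update(candidates)
    return {sg for sg, c in counts.items() if c >= min_support}

# Three toy graphs as edge sets; minimum support of 2 graphs.
graphs = [
    {("a", "b"), ("b", "c"), ("c", "d")},
    {("a", "b"), ("b", "c")},
    {("b", "c"), ("c", "d")},
]
frequent = reduce_phase([map_phase(g) for g in graphs], min_support=2)
print(frozenset({("a", "b"), ("b", "c")}) in frequent)  # True
```

A real miner must also canonicalize isomorphic subgraphs, which is the hard part that the frozenset-of-edges shortcut glosses over.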

A Taxonomy of Methods and Models Used in Program Transformation and Parallelization

Developing application and system software in a high-level programming language has greatly improved programmer productivity by reducing the total time and effort spent. The higher-level abstractions provided by these languages enable users to seamlessly translate ideas into designs and to structure data and code effectively. However, these structures have to be translated efficiently to generate code that optimally exploits the target architecture. The translation pass normally generates code that is sub-optimal from an execution perspective, and subsequent passes are needed to clean up the generated code until it is optimal or near-optimal in running time. Generated code can be optimized by transformation, which involves changing or removing inefficient code. Parallelization is another optimization technique, which involves finding threads of execution that can run concurrently on multiple processors to improve the running time. The topic of code optimization and parallelization is vast and replete with complex problems and interesting solutions, so it becomes necessary to classify the various available techniques to reduce the complexity and get a grasp of the subject domain. Our search for good survey papers in this area, however, did not yield satisfying results. This work attempts to fill this void and help scholars in the field by providing a comprehensive survey and taxonomy of the various optimization and parallelization methods and the models used to generate solutions.
Sesha Kalyur, G. S. Nagaraja

Time Bound Robot Mission Planning for Priority Machine Using Linear Temporal Logic for Multi Goals

In this paper, we implement a Linear Temporal Logic-based motion planning algorithm for a prioritized mission scenario. Classic robot motion planning solves the problem of moving a robot from a source to a goal configuration while avoiding obstacles. The problem becomes more complicated when the robot must satisfy a complex goal specification incorporating Boolean and temporal constraints between the atomic goals; this is referred to as mission planning. The paper assumes that the mission is a collection of smaller tasks, each of which must be finished within a given amount of time. We assign priorities to the tasks such that higher-priority tasks are completed first. The planner solves the mission in multiple groups, instead of the classic approach of solving all the tasks at once; the grouping is dynamic and is a function of how many tasks can be included without missing any deadline. This grouping-based, prioritized and time-aware planning saves a significant amount of time compared to including the time information in the verification engine, which complicates the search logic. The NuSMV tool is used to verify the logic. Comparisons are made with solving all tasks at once and with solving the tasks one by one. Experimental results reveal that the proposed solver meets the deadlines of nearly all tasks while taking a small computation time.
Venkata Beri, Rahul Kala, Gora Chand Nandi

