About this Book

This book constitutes the refereed proceedings of the First International Conference on Applied Algorithms, ICAA 2014, held in Kolkata, India, in January 2014. ICAA is a new conference series with a mission to provide a quality forum for researchers working in applied algorithms. Papers presenting original contributions related to the design, analysis, implementation and experimental evaluation of efficient algorithms and data structures for problems with relevant real-world applications were sought, ideally bridging the gap between academia and industry. The 21 revised full papers presented together with 7 short papers were carefully reviewed and selected from 122 submissions.



Invited Talks

Algorithmic Challenges in Digital Microfluidic Biochips: Protocols, Design, and Test

Recent emergence of microfluidic technology has imparted a profound impact on the implementation of miniaturized healthcare chips and systems. In this review article, we will elaborate on several algorithmic challenges that arise while realizing biochemical protocols on a digital microfluidic (DMF) lab-on-a-chip. In particular, we will focus on certain design automation issues of sample preparation, dilution gradient generation, layout planning, and testing of DMF biochips.
Bhargab B. Bhattacharya, Sudip Roy, Sukanta Bhattacharjee

Monitoring Distributed, Heterogeneous Data Streams: The Emergence of Safe Zones

In many emerging applications, the data to be monitored is of very high volume, dynamic, and distributed, making it infeasible to collect the distinct data streams to a central node and process them there. Often, the monitoring problem consists of determining whether the value of a global function, which depends on the union of all streams, crossed a certain threshold. A great deal of effort is directed at reducing communication overhead by transforming the monitoring of the global function to the testing of local constraints, checked independently at the nodes. Recently, geometric monitoring (GM) proved to be very useful for constructing such local constraints for general (non-linear, non-monotonic) functions. Alas, in all current variants of geometric monitoring, the constraints at all nodes share an identical structure and are, thus, unsuitable for handling heterogeneous streams, which obey different distributions at the distinct nodes. To remedy this, we propose a general approach for geometric monitoring of heterogeneous streams (HGM), which defines constraints tailored to fit the distinct data distributions at the nodes. While optimally selecting the constraints is an NP-hard problem, we provide a practical solution, which seeks to reduce running time by hierarchically clustering nodes with similar data distributions and then solving more, but simpler, optimization problems. Experiments are provided to support the validity of the proposed approach.
Daniel Keren, Guy Sagy, Amir Abboud, David Ben-David, Assaf Schuster, Izchak Sharfman, Antonios Deligiannakis

Exploiting Heterogeneous Data Sources: A Computing Paradigm for Live Web and Sustainability Applications

Today we are witnessing advances in technology that are dramatically changing the way we live, work and interact with the physical environment. New revolutionary technologies are creating an explosion in the size and variety of information that is becoming available. Such technologies include the development and widespread adoption of networks of small and inexpensive embedded sensors that are being used to instrument the environment at an unprecedented scale. In addition, the last few years have brought the widespread adoption of social networking applications. Another trend with significant ramifications is the massive adoption of smartphones in the market. The rise of social networking applications and the always-on functionality of smartphones are driving the rise of a part of the web dedicated to recording, maintaining and sharing rapidly changing data, which has been termed the Live Web. In this talk we present recent research work motivated by the trends described above, and consider how such novel research results are enabling new forms of computation. First, we focus on the specific problem of finding events or trends, including spatiotemporal patterns, when monitoring microblogging streams. Our work is mainly in the context of the INSIGHT FP7 project, and we consider data from sources as different as traffic sensors and Twitter streams. To put this research work in a general context, in the second part of the talk we consider the more general problem of developing applications that exploit the new setting of the Live Web, reasoning about their behavior, and understanding the implications for the design, development and deployment of new applications in this setting. We describe initial work on the formulation of a new computing paradigm for this setting, and on how it can be applied to computational sustainability applications.
Dimitrios Gunopulos

Contributed Talks

Constructing an n-dimensional Cell Complex from a Soup of (n − 1)-Dimensional Faces

There is substantial value in the use of higher-dimensional (>3D) digital objects in GIS that are built from complex real-world data. This use is however hampered by the difficulty of constructing such objects. In this paper, we present a dimension independent algorithm to build an n-dimensional cellular complex with linear geometries from its isolated (n − 1)-dimensional faces represented as combinatorial maps. It does so by efficiently finding the common (n − 2)-cells (ridges) along which they need to be linked. This process can then be iteratively applied in increasing dimension to construct objects of any dimension. We briefly describe combinatorial maps, present our algorithm using them as a base, and show an example using 2D, 3D and 4D objects which was verified to be correct, both manually and using automated methods.
Ken Arroyo Ohori, Guillaume Damiand, Hugo Ledoux

A Digital-Geometric Algorithm for Generating a Complete Spherical Surface in ℤ³

We show that the construction of a digital sphere by circularly sweeping a digital semicircle (generatrix) around its diameter results in appearance of some holes (absentee voxels) in its spherical surface of revolution. This incompleteness calls for a proper characterization of the absentee voxels whose restoration in the surface of revolution can ensure the required completeness. In this paper, we present a characterization of the absentee voxels using certain techniques of digital geometry and show that their count varies quadratically with the radius of the semicircular generatrix. Next, we design an algorithm to fill up the absentee voxels so as to generate a spherical surface of revolution, which is complete and realistic from the viewpoint of visual perception. Test results have also been furnished to substantiate our theoretical findings. The proposed technique will find many potential applications in computer graphics and 3D imaging.
Sahadev Bera, Partha Bhowmick, Bhargab B. Bhattacharya

Bar 1-Visibility Drawings of 1-Planar Graphs

A bar 1-visibility drawing of a graph G is a drawing of G in which each vertex is drawn as a horizontal line segment called a bar, and each edge is drawn as a vertical line segment between its incident vertices such that each edge crosses at most one bar. A graph G is bar 1-visible if G has a bar 1-visibility drawing. A graph G is 1-planar if G can be drawn in the plane such that each edge has at most one crossing. In this paper we give O(n) time algorithms to find bar 1-visibility drawings of diagonal grid graphs, maximal outer 1-planar graphs, recursive quadrangle 1-planar graphs and pseudo double wheel 1-planar graphs, where n is the number of vertices in the input graph.
Shaheena Sultana, Md. Saidur Rahman, Arpita Roy, Suraiya Tairin

Search Strategies for Subgraph Isomorphism Algorithms

Searching for subgraph isomorphisms is an essential problem in pattern matching. Most algorithms use a branch-and-bound method to sequentially assign pattern nodes to compatible nodes in the target graph. It is well known that the order in which nodes are assigned, the so-called search strategy, drastically influences the size of the search space. In this article we investigate the impact of various search strategies on the efficiency of two algorithms: Ullmann's algorithm and a recently proposed improvement of it. From a large set of candidate orders we identify the most successful ones by thorough testing on a large database of graphs.
Uroš Čibej, Jurij Mihelič
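To illustrate how a search strategy plugs into the branch-and-bound scheme, here is a minimal backtracking matcher. It is a sketch, not Ullmann's algorithm (the refinement and pruning steps are omitted); the `order` argument is the search strategy under study.

```python
def subgraph_isomorphisms(pattern, target, order):
    """Count embeddings of `pattern` into `target` (adjacency dicts:
    node -> set of neighbours), assigning pattern nodes in the given
    `order` -- the search strategy."""
    count = 0
    assignment = {}

    def extend(i):
        nonlocal count
        if i == len(order):
            count += 1
            return
        u = order[i]
        for v in target:
            if v in assignment.values():       # injectivity
                continue
            # every already-assigned neighbour of u must map to a neighbour of v
            if all(assignment[w] in target[v] for w in pattern[u] if w in assignment):
                assignment[u] = v
                extend(i + 1)
                del assignment[u]

    extend(0)
    return count
```

A common strategy is to assign high-degree pattern nodes first, e.g. `order = sorted(pattern, key=lambda u: -len(pattern[u]))`, which tends to fail early and prune more of the search tree.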

Analysis of Concentration Errors in Sample Dilution Algorithms on a Digital Microfluidic Biochip

Sample preparation is an important step in any biochemical protocol, and several algorithms are known for automating it on a digital microfluidic (DMF) lab-on-a-chip (LoC). On-chip dilution of a sample on a DMF biochip involves several mix-split steps, which often suffer from inaccuracies caused by unbalanced splitting of micro-fluid droplets. Error minimization is also essential because of the limited availability of stock solutions and costly reagents. In this work, we analyze the performance of two dilution algorithms, Min-Mix (Thies et al., 2008) and DMRW (Roy et al., 2010), in the presence of volumetric errors that may occur during the splitting process. Our analysis exposes many interesting error behaviors and indicates possible ways to correct them in a cyber-physical microfluidic platform.
Nilina Bera, Subhashis Majumder, Bhargab B. Bhattacharya
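In the spirit of the bit-driven Min-Mix scheme (the droplet bookkeeping of Thies et al. is richer than this sketch), a target concentration k/2^n is reachable by n (1:1) mix-split steps, consuming the bits of k from least to most significant:

```python
from fractions import Fraction

def min_mix_sequence(k, n):
    """Reach sample concentration k / 2**n by n (1:1) mix-split steps,
    starting from pure buffer: at each step, mix the current droplet
    with pure sample if the next bit of k is 1, else with pure buffer."""
    conc = Fraction(0)
    steps = []
    for i in range(n):
        bit = (k >> i) & 1
        conc = (conc + bit) / 2      # 1:1 mix halves, then adds the input
        steps.append(("sample" if bit else "buffer", conc))
    return conc, steps
```

For example, `min_mix_sequence(5, 3)` mixes with sample, buffer, sample and ends at concentration 5/8; each intermediate split is where the volumetric errors analyzed in the paper can creep in.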

Choosing and Working of an Anonymous Leader

We revisit the problem of anonymous communication where nodes communicate without revealing their identity. We propose its use in the choosing of a node as an anonymous leader and subsequent communication to and from this leader such that its identity is not revealed. We give indistinguishability definitions for anonymous leader and prove it to be secure in an arbitrary reliable network.
M. R. Rajeevalochana, Kannan Srinathan

An Efficient ID Based Security Algorithm for Mutual Node Authentication and Key Management: An Elliptic Curve Cryptography Based Approach

Secure communication between the nodes of a network is one of the biggest challenges in networking, and has therefore become a major and attractive field of research. The importance of Elliptic Curve Cryptography (ECC) in this field is increasing rapidly because of its smaller key size compared with other known public-key algorithms. Several user authentication and key management schemes have been developed so far, but most of them involve complex computations, and most also rely on a remote server for node authentication as well as node communication. In this paper, an ECC based algorithm is proposed to overcome these complexities. The proposed scheme performs mutual authentication and session key management for node communication efficiently, without placing an extra burden on any server and with less computation time.
Nilanjan Sen

Automata for Modeling the Distributed Probabilistic Reversible Processes

This paper presents the construction of an automaton termed the Concurrent Probabilistic Reversible Automaton (CPRA). It models distributed systems that exhibit probabilistic behaviour and rely on backtracking as the basis for making a system fault tolerant. The basic concepts of non-probabilistic automata are extended by introducing discrete probabilities on the set of transitions, modelling the probabilistic nature of the computing environment. In addition, concurrency is implemented by defining a parallel joint (^) operator that deals with the odds of shared transitions. A memory structure is also defined to keep track of past transitions, facilitating backtracking without losing the initial computations, which could otherwise result in inconsistent states. The major contribution of this work is threefold: first, defining an automaton that models full non-determinism along with probabilistic characterisation without distinguishing between states and actions, a distinction from which some previous formalisms suffer; second, associating memory with the automaton so that concurrency is preserved during backtracking; third, implementing the parallel joint operator to facilitate concurrency. A full probabilistic analysis of CPRA, obtained by applying the principles of measure theory to the traces of CPRA, is left as future work.
Arpit, Afza Shafie, Wan Fatimah Wan Ahmad

Finding Influential Nodes in Social Networks Using Minimum k-Hop Dominating Set

Challenges in social interaction networks are often modelled as graph-theoretic problems. One such problem is to find a group of influential individuals of minimum size, the initial seed set, in a social network, so that all the nodes in the network can be reached within one hop from the seeds. This problem is equivalent to finding a minimum dominating set for the network. In this paper, we address a similar problem that differs in the number of hops needed to reach all the nodes. We generalize it as the k-hop dominating set problem, where a maximum of k hops is allowed to spread the information among all the nodes of the graph. We show that the decision version of the k-hop dominating set problem is NP-complete. Results show that, in order to reach the same percentage of nodes in the network, allowing one extra hop considerably reduces the cardinality of the seed set, i.e., the number of influential nodes needed. The experimental results also show that the influential nodes can be characterized by their high betweenness values.
Partha Basuchowdhuri, Subhashis Majumder
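A natural greedy baseline for this seed-selection problem (a sketch under the paper's problem statement, not necessarily the heuristic the authors use) picks, at each step, the node whose k-hop ball covers the most still-uncovered nodes:

```python
def k_hop_ball(adj, src, k):
    """Nodes reachable from src within k hops (including src), via BFS
    over an adjacency dict node -> iterable of neighbours."""
    seen = {src}
    frontier = [src]
    for _ in range(k):
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    nxt.append(v)
        frontier = nxt
    return seen

def greedy_k_hop_dominating_set(adj, k):
    """Greedy heuristic for a k-hop dominating set: repeatedly pick the
    node whose k-hop ball covers the most uncovered nodes."""
    balls = {u: k_hop_ball(adj, u, k) for u in adj}
    uncovered = set(adj)
    seeds = []
    while uncovered:
        best = max(adj, key=lambda u: len(balls[u] & uncovered))
        seeds.append(best)
        uncovered -= balls[best]
    return seeds
```

On a star graph with k = 1 the greedy picks just the centre; on a 5-node path with k = 2 it picks the single middle node, matching the abstract's observation that each extra hop shrinks the seed set.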

Efficient Heuristics for the Time Dependent Team Orienteering Problem with Time Windows

The Time Dependent Team Orienteering Problem with Time Windows (TDTOPTW) can be used to model several real-life problems, among them the route planning problem for tourists interested in visiting multiple points of interest (POIs) using public transport. The main objective of this problem is to select POIs that match tourist preferences, while taking into account a multitude of parameters and constraints and respecting the time available for sightseeing on a daily basis. TDTOPTW is NP-hard, while almost the whole body of related literature addresses the non-time-dependent version of the problem. The only TDTOPTW heuristic proposed so far is based on the assumption of periodic service schedules. Herein, we propose two efficient cluster-based heuristics for the TDTOPTW which yield high quality solutions, take time dependency into account when calculating travel times between POIs, and make no assumption of periodic service schedules. The validation scenario for our prototyped algorithms includes the metropolitan transit network and real POI sets compiled from Athens (Greece).
Damianos Gavalas, Charalampos Konstantopoulos, Konstantinos Mastakas, Grammati Pantziou, Nikolaos Vathis

Color Texture Image Segmentation Based on Neutrosophic Set and Nonsubsampled Contourlet Transformation

In this paper, an automatic approach to the segmentation of natural images based on the neutrosophic set and the nonsubsampled contourlet transform is proposed. The method uses both color and texture features for segmentation. The input image is transformed into the LUV color model to extract the color features, while texture features are extracted from the grayscale image. The image is then transformed into the neutrosophic domain. Finally, segmentation is performed using Fuzzy C-means clustering, with the clusters calculated adaptively based on a cluster validity analysis. The method is tested on a natural image database. The result analysis shows that the proposed method automatically segments images better than traditional methods.
Jeethu Mary Mathew, Philomina Simon

An Experimental Analysis of Vertex Coloring Algorithms on Sparse Random Graphs

The DSATUR algorithm for vertex coloring is popular both in its heuristic and exact (branch-and-bound) forms. Common to the known public implementations of the exact algorithm is the use of adjacency matrices to store the adjacency relations; this influences the algorithm’s implementation and its running time. In this paper we investigate the benefits of the introduction of supporting data structures to improve its running time: in addition to replacing the adjacency matrix by adjacency lists, thus shifting the focus from vertices to edges, we also introduce a priority queue data structure to assist in vertex selection. Our goal is to explore under which circumstances additional supporting data structures can speed up (exact) DSATUR.
Patrick Healy, Andrew Ju
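For reference, the heuristic core of DSATUR over adjacency lists (a minimal sketch without the priority queue or the branch-and-bound wrapper that the paper studies) looks like this:

```python
def dsatur_coloring(adj):
    """Heuristic DSATUR: repeatedly colour the uncoloured vertex with the
    highest saturation degree (number of distinct colours among its
    neighbours), breaking ties by degree, using the smallest colour not
    already present in its neighbourhood.  `adj` maps vertex -> neighbours."""
    colour = {}
    saturation = {u: set() for u in adj}   # colours seen among neighbours
    while len(colour) < len(adj):
        u = max((v for v in adj if v not in colour),
                key=lambda v: (len(saturation[v]), len(adj[v])))
        c = 0
        while c in saturation[u]:          # smallest feasible colour
            c += 1
        colour[u] = c
        for w in adj[u]:
            saturation[w].add(c)
    return colour
```

The `max` scan over uncoloured vertices is exactly the step the paper accelerates with a priority queue; with adjacency lists, the saturation updates touch only the edges incident to the newly coloured vertex.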

An Experimental Study of a Novel Move-to-Front-or-Middle (MFM) List Update Algorithm

The List Update Problem (LUP), or List Accessing Problem (LAP), has been a well studied research problem in the area of online algorithms [5] and self-organizing data structures [2] since the pioneering work of McCabe [7]. In this problem, the inputs are an unsorted list of distinct items and a sequence of requests, where each request is an access operation on an item of the list. The objective of a list update algorithm is to reorganize the list after each access so as to minimize the total access and reorganization cost, while processing a request sequence of finite size on a fixed-size list. LUP is a general memory accessing problem that was studied by Sleator and Tarjan [14] in their seminal paper on the competitive analysis of online algorithms. As offline list update has been proved to be NP-hard [3], no trivial solution to the problem is known. Move-To-Front (MTF) has been proved to be the best online algorithm [2] in the literature. In this paper, we propose a novel variant of the MTF algorithm, which we call Move-to-Front-or-Middle (MFM). We perform an empirical study of the MFM algorithm and a comparative performance analysis with MTF using two datasets, the Calgary Corpus and the Canterbury Corpus. Our experimental results show that MFM outperforms MTF for all request sequences in both datasets.
Rakesh Mohanty, Tirtharaj Dash, Biswadeep Khan, Shiba Prasad Dash
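Under the standard cost model (pay the 1-indexed position of the accessed item, with free reorganization of that item toward the front), MTF and an MFM-style rule can be compared as below. The abstract does not spell out MFM's exact rule, so the front-or-middle choice here is an assumption for illustration only:

```python
def access_cost(initial, requests, reorganize):
    """Serve `requests` on a copy of `initial`, paying the 1-indexed
    position of each accessed item, then letting `reorganize` move it."""
    lst = list(initial)
    cost = 0
    for x in requests:
        i = lst.index(x)
        cost += i + 1
        reorganize(lst, i)
    return cost

def mtf(lst, i):
    """Move-To-Front: the accessed item goes to the head of the list."""
    lst.insert(0, lst.pop(i))

def mfm(lst, i):
    """One plausible Move-to-Front-or-Middle rule (hypothetical): items
    found in the front half go to the front, items found in the back
    half only advance to the middle."""
    lst.insert(0 if i < len(lst) // 2 else len(lst) // 2, lst.pop(i))
```

Running both policies over the same request sequence (e.g. one drawn from a corpus file) gives the kind of head-to-head cost comparison the paper reports.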

Too Long-Didn’t Read: A Practical Web Based Approach towards Text Summarization

In today’s digital epoch, people share and read a motley of never-ending electronic information; thus either a lot of time is wasted in deciphering all this information, or only a tiny amount of it is actually read. It is therefore imperative to contrive a generic text summarization technique. In this paper, we propose a web-based and domain-independent automatic text summarization method. The method generates a summary of arbitrary length by extracting and assigning scores to semantically important information in the document, analyzing term frequencies and tagging certain parts of speech such as proper nouns and signal words. Another important characteristic of our approach is that it also takes the font semantics of the text (such as headings and emphasized text) into consideration while scoring the different entities of the document.
Arjun Datt Sharma, Shaleen Deep
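A minimal sketch of the term-frequency part of such an extractive scorer (the paper's full method additionally weights proper nouns, signal words and font semantics, all omitted here):

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    """Extractive summary: score each sentence by the average corpus
    frequency of its words, then keep the top-scoring sentences in
    their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))

    def score(s):
        toks = re.findall(r'\w+', s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return [s for s in sentences if s in ranked]   # preserve document order
```

Scoring by average rather than total frequency keeps long sentences from winning by length alone, which is one simple way to make the summary length-controllable as the abstract requires.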

A Comparative Study of Tag SNP Selection Using Clustering

The immense volume and rapid growth of human genomic data, especially single nucleotide polymorphisms (SNPs), present special challenges for both biomedical researchers and automatic algorithms. SNPs are confirmed as a major factor in human genome polymorphisms, and are found to be suitable as a genetic marker for disease characteristics. SNPs hold much promise as a basis for genome-wide disease-gene association. Determining the relationship between disease complexity and SNPs requires complex genotyping for large SNP data sets, and is thus very expensive and labor-intensive. In this paper, we attempt two novel approaches to solve the problem of tag SNP selection, one using self-organizing maps (SOM) for clustering the SNPs and the other using Fuzzy C Means clustering. Both the above methods have been shown to select a more optimal set of tag SNPs which capture the remaining SNPs more efficiently as compared to Haploview Tagger, thus satisfying the goal of tag SNP selection in a more suitable way.
Sujay Saha, Riddhiman Dasgupta, Anirban Ghose, Koustav Mullick, Kashi Nath Dey

Application of Spectral Unmixing Algorithm on Hyperspectral Data for Mangrove Species Classification

This study uses a non-linear unmixing model to unmix pixels of hyperspectral imagery of a heterogeneous mangrove forest. The model takes into account the multi-path effects of radiation between endmember spectra that may occur before final interception by the sensor. Non-linear models more accurately represent naturally occurring situations, such as those commonly found within mangrove forests, where a variety of species co-exist as a mixed stand. This paper analyses the classification accuracy of linear and non-linear unmixing models for the discrimination of mangrove species in the Sunderban Delta, India. The analysis finds that linear unmixing successfully identifies mangrove species that exist as a pure patch, whereas the non-linear model discriminates between species more accurately in a heterogeneous patch. Ten dominant mangrove species have been identified in the study area, and the results have been validated through field visits and RMSE values.
Somdatta Chakravortty, Ekta Shah, Arpita Saha Chowdhury

Automatic Extraction of Headlines from Punjabi Newspapers

For any language in the world, the headlines of newspapers are important: by reading the headlines we can get an idea of the whole news without reading the articles completely. Moreover, there are many websites whose task is to extract news headlines from online newspapers and display them for their users. Another important application of headline extraction is text summarization, where headline sentences are given more importance than other sentences for inclusion in the final summary. This paper concentrates on automatic headline extraction from Punjabi newspapers. Punjabi is the official language of the state of Punjab, but it is an under-resourced language: very few computational-linguistic resources are available for it, although a lot of research is going on in developing NLP applications for Punjabi. This is the first time that automatic headline extraction from Punjabi newspapers has been developed, using four features of headlines: 1) a punctuation mark feature, 2) a font feature, 3) a number-of-words feature and 4) a title keywords feature. The weights of these four features are calculated by applying mathematical regression as the machine learning approach. For extracting headlines, the final score of a sentence is obtained using the feature-weight equation w1f1 + w2f2 + w3f3 + w4f4, where f1, f2, f3 and f4 are the feature scores of the four features and w1, w2, w3 and w4 are their learned weights. The accuracy of the Punjabi headline extraction system is 98.39%, tested over fifty Punjabi single/multi news documents. A part of the Punjabi headline extraction system, using the punctuation mark feature, has been integrated with a Punjabi text summarization system which is available online.
Vishal Gupta
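The scoring step described above reduces to a weighted sum over the four features. A sketch follows; feature extraction and the regression-learned weights are assumed given, and the concrete values below are hypothetical:

```python
def headline_score(features, weights):
    """Final sentence score w1*f1 + w2*f2 + w3*f3 + w4*f4 from the four
    headline features (punctuation mark, font, word count, title keywords)."""
    return sum(w * f for w, f in zip(weights, features))

def extract_headlines(sentences, weights, threshold):
    """Flag sentences whose weighted score reaches a threshold.
    `sentences` is a list of (text, feature_vector) pairs with feature
    values assumed normalized to [0, 1]."""
    return [s for s, feats in sentences
            if headline_score(feats, weights) >= threshold]
```

With hypothetical weights (0.4, 0.3, 0.1, 0.2), a sentence scoring well on all four features is kept while a body sentence scoring only on word count is rejected.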

A Similarity Measure for Clustering Gene Expression Data

A similarity measure for gene expression data should capture the shapes of the patterns in the data and should be less susceptible to outliers. In this paper, we present a similarity measure for clustering gene expression time series data. Our similarity measure, PWCTM, uses the pairwise changing tendency of every pair of conditions. We have compared our measure with several proximity measures using the k-means clustering algorithm in terms of the Silhouette index, z-score and p-value. Our experimental results indicate that the gene clusters obtained with PWCTM as the similarity measure are biologically significant, as shown by their low p-values and high z-scores.
Ram Charan Baishya, Rosy Sarmah, Dhruba Kumar Bhattacharyya, Malay Ananda Dutta
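The abstract does not define PWCTM precisely; purely for illustration, one plausible shape of a pairwise changing-tendency measure is the fraction of condition pairs on which two expression profiles move in the same direction:

```python
def changing_tendency_similarity(x, y):
    """Illustrative (hypothetical) pairwise changing-tendency measure:
    the fraction of condition pairs (i, j), i < j, on which profiles
    x and y change in the same direction (up, down, or flat)."""
    def sign(d):
        return (d > 0) - (d < 0)
    n = len(x)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    agree = sum(sign(x[j] - x[i]) == sign(y[j] - y[i]) for i, j in pairs)
    return agree / len(pairs)
```

A measure of this shape compares only directions of change, so it tracks pattern shape and is less sensitive to outlier magnitudes, which is the design goal the abstract states.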

A Huffman Code Based Image Steganography Technique

We present a novel steganographic method based on Huffman coding and least-significant-bit substitution that provides high embedding capacity, strong security and imperceptible visual quality for the secret message. Every eight bits of the secret image are first encoded by building a Huffman tree. The encoded bits of the secret image are then divided into 4 groups; each group has a decimal value between 0 and 3. These decimal values determine the location where the message is embedded in a particular pixel of the cover image: to embed the message, we simply set a one in the corresponding location of the cover-image pixel identified by the decimal value from the secret image. Since the Huffman table reduces the size of the original image, an attacker cannot easily recover from the stego image those fine details of the original image that would enable a reliable attack. We obtain comparable visual quality, with Peak Signal to Noise Ratio values between 30 dB and 31 dB.
Amitava Nag, Jyoti Prakash Singh, Sushanta Biswas, Debasree Sarkar, Partha Pratim Sarkar
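The compression stage can be sketched with a standard Huffman construction; the embedding stage, which writes the resulting 2-bit groups into cover pixels, is omitted. The `tick` counter is only a tie-breaker so the heap never compares the dict payloads:

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table (symbol -> bit string) for the symbols
    in `data`.  The stego scheme compresses the secret image with such a
    table before splitting the bitstream into 2-bit embedding groups."""
    freq = Counter(data)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        n1, _, t1 = heapq.heappop(heap)       # two least frequent subtrees
        n2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (n1 + n2, tick, merged))
        tick += 1
    return heap[0][2]
```

The resulting code is prefix-free, and frequent symbols get shorter codes, which is what shrinks the secret image before embedding.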

Expression-Invariant 3D Face Recognition Using K-SVD Method

This paper proposes a method for expression-invariant face recognition using a dictionary learning approach. The method operates in the following stages: extraction of the T-region of the face, the facial region with minimum variation under expression; computation of the wavelet coefficients of the extracted region; dictionary learning using K-SVD; and matching. The experiment was performed on a database containing 40 persons with 9 expressions each, under different illumination conditions. The recognition shows a good accuracy rate compared with the widely used PCA-SVM approach. Our system uses the label-consistent K-SVD algorithm to learn a set of dictionaries that represent the 3D information of the face, serving the purposes of both sparse coding and classification.
Somsukla Maiti, Dhiraj Sangwan, Jagdish Lal Raheja

An Efficient Face Recognition Method by Fusing Spatial Discriminant Facial Features

Feature-level fusion is a well known technique for improving the performance of a face recognition system. This paper presents an approach that fuses directional spatial discriminant features for face recognition. The key idea of the proposed method is to fuse the facial features lying along the horizontal, vertical and diagonal directions, so that the fused feature vector contains more discriminant information than an individual facial feature of a single direction. However, due to the fusion, the fused feature vector becomes larger, which may increase the complexity of the classifier used for recognition. To address this, lower-dimensional discriminant features are again extracted from the large fused feature vector. In our experiment we apply the G-2DFLD method to the original images to extract discriminant features. The original images are then converted into diagonal images, and another set of discriminant features, representing the diagonal information, is extracted using the G-2DFLD method. The original and diagonal feature matrices are then fused to form a large feature matrix, whose dimension is further reduced by G-2DFLD; the resulting matrix is used for classification and recognition by Radial Basis Function Neural Networks (RBF-NN). Experiments on the AT&T face database (formerly known as the ORL database) indicate the competitive performance of the proposed method compared with some existing subspace-based methods.
Aniruddha Dey, Shiladitya Chowdhury, Jamuna Kanta Sing, Dipak Kumar Basu, Mita Nasipuri

