
2013 | Book

Models, Algorithms, and Technologies for Network Analysis

Proceedings of the Second International Conference on Network Analysis

Edited by: Boris I. Goldengorin, Valery A. Kalyagin, Panos M. Pardalos

Publisher: Springer New York

Book series: Springer Proceedings in Mathematics & Statistics


About this book

This volume contains two types of papers: a selection of contributions from the “Second International Conference on Network Analysis” held in Nizhny Novgorod on May 7–9, 2012, and papers submitted in response to an open call for papers reflecting the activities of LATNA at the Higher School of Economics.

The volume presents many new results in modeling and powerful algorithmic solutions applied to problems in:

• vehicle routing
• single machine scheduling
• modern financial markets
• cell formation in group technology
• brain activities of left- and right-handers
• speeding up algorithms for the maximum clique problem
• analysis and applications of different measures in clustering

The broad range of applications that can be described and analyzed by means of a network brings together researchers, practitioners, and other scientific communities from numerous fields such as Operations Research, Computer Science, Transportation, Energy, Social Sciences, and more. The contributions not only come from different fields, but also cover a broad range of topics relevant to the theory and practice of network analysis. Researchers, students, and engineers from various disciplines will benefit from the state-of-the-art in models, algorithms, technologies, and techniques presented.

Table of Contents

Frontmatter
Tolerance-Based vs. Cost-Based Branching for the Asymmetric Capacitated Vehicle Routing Problem
Abstract
In this chapter, we consider the asymmetric capacitated vehicle routing problem (ACVRP). We compare the search tree size and computational time for the bottleneck tolerance-based and cost-based branching rules within a branch-and-bound algorithm on the FTV benchmark instances. Our computational experiments show that the tolerance-based branching rule reduces the search tree size by a factor of 45 and the CPU time by a factor of 2.8 on average.
Mikhail Batsyn, Boris Goldengorin, Anton Kocheturov, Panos M. Pardalos
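A minimal sketch of the difference between the two branching rules on the assignment problem (AP) relaxation, assuming a SciPy-based AP solver and hypothetical helper names (upper_tolerance, choose_branching_arc); it illustrates the idea only and is not the authors' branch-and-bound code.

```python
# Cost-based vs. (upper-)tolerance-based choice of a branching arc on the
# assignment-problem relaxation. Hypothetical helpers, illustrative only.
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 10**9  # cost value that effectively forbids an arc

def ap_value(cost):
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum(), list(zip(rows, cols))

def upper_tolerance(cost, opt_value, arc):
    """Increase of the AP optimum when the given solution arc is forbidden."""
    i, j = arc
    forbidden = cost.copy()
    forbidden[i, j] = BIG
    value, _ = ap_value(forbidden)
    return value - opt_value

def choose_branching_arc(cost, rule="tolerance"):
    opt_value, arcs = ap_value(cost)
    if rule == "cost":
        # cost-based: branch on the most expensive arc of the AP solution
        return max(arcs, key=lambda a: cost[a])
    # tolerance-based: branch on the arc whose removal changes the
    # AP optimum the least (one common "bottleneck" choice)
    return min(arcs, key=lambda a: upper_tolerance(cost, opt_value, a))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    c = rng.integers(1, 100, size=(6, 6)).astype(float)
    np.fill_diagonal(c, BIG)  # no self-loops
    print(choose_branching_arc(c, "cost"), choose_branching_arc(c, "tolerance"))
```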
Lower and Upper Bounds for the Preemptive Single Machine Scheduling Problem with Equal Processing Times
Abstract
The preemptive single machine scheduling problem of minimizing the total weighted completion time with equal processing times and arbitrary release dates is one of the four single machine scheduling problems with an open computational complexity status. In this chapter, we present lower and upper bounds on the exact solution of this problem based on the assignment problem. We also investigate the properties of these bounds and their worst-case behavior.
Mikhail Batsyn, Boris Goldengorin, Pavel Sukhov, Panos M. Pardalos
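The following sketch illustrates one plausible assignment-based relaxation for this problem (if a job completes k-th, its completion time is at least max(r_j + p, k·p)), together with a trivial non-preemptive list schedule as an upper bound. It is a reading of the general idea, not necessarily the bounds derived in the chapter.

```python
# Illustrative bounds for 1 | r_j, pmtn, p_j = p | sum w_j C_j.
# Lower bound: assign jobs to completion ranks; the k-th completed job
# cannot finish before max(r_j + p, k * p).
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_lower_bound(release, weight, p):
    n = len(release)
    cost = np.empty((n, n))
    for j in range(n):
        for k in range(n):  # k = completion rank (0-based)
            cost[j, k] = weight[j] * max(release[j] + p, (k + 1) * p)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].sum()

def list_schedule_upper_bound(release, weight, p):
    """Any feasible schedule is an upper bound; here a non-preemptive
    release-ordered list schedule."""
    order = sorted(range(len(release)), key=lambda j: release[j])
    t, total = 0, 0
    for j in order:
        t = max(t, release[j]) + p
        total += weight[j] * t
    return total

if __name__ == "__main__":
    r, w, p = [0, 1, 2, 0], [3, 1, 2, 2], 2
    print(assignment_lower_bound(r, w, p), list_schedule_upper_bound(r, w, p))
```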
Comparative Analysis of Two Similarity Measures for the Market Graph Construction
Abstract
A market graph is built on the basis of a similarity measure for financial asset returns. The chapter considers two similarity measures: the classic Pearson correlation and the sign correlation. We study the associated market graphs and compare the conditional risk of the market graph construction for these two similarity measures. Our main finding is that, for several probabilistic models, the conditional risk for the sign correlation is much better than that for the Pearson correlation at larger threshold values. In addition, we show that for some models the conditional risk for the sign correlation dominates that for the Pearson correlation for all threshold values. These properties make the sign correlation a more appropriate measure for maximum clique analysis.
Grigory A. Bautin, Valery A. Kalyagin, Alexander P. Koldanov
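A minimal sketch of the market graph construction for the two similarity measures discussed above; the sign similarity here is computed as the average agreement of deviations from the mean, which is one common definition and may differ in detail from the chapter's.

```python
# Building a market graph from asset returns with two similarity measures.
# Illustrative only; threshold and data are placeholders.
import numpy as np
import networkx as nx

def pearson_similarity(returns):
    # returns: (n_assets, n_days) array of returns
    return np.corrcoef(returns)

def sign_similarity(returns):
    # average agreement of the signs of deviations from each asset's mean,
    # rescaled to [-1, 1]
    signs = np.sign(returns - returns.mean(axis=1, keepdims=True))
    return (signs @ signs.T) / returns.shape[1]

def market_graph(similarity, threshold):
    n = similarity.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i, j] >= threshold:
                g.add_edge(i, j)
    return g

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    r = rng.normal(size=(20, 250))  # 20 assets, 250 trading days
    for sim in (pearson_similarity(r), sign_similarity(r)):
        print(market_graph(sim, threshold=0.3).number_of_edges())
```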
Heuristic Algorithm for the Cell Formation Problem
Abstract
In this chapter, we introduce a new heuristic for the Cell Formation Problem in its most general formulation, with grouping efficiency as the objective function. The suggested approach applies an improvement procedure to obtain solutions with high grouping efficiency. This procedure is repeated on randomly generated configurations of cells as long as the efficiency can be increased. We report preliminary results for 10 popular benchmark instances taken from the literature. The source instances, together with the solutions we obtained, can be found in the Appendix.
Ilya Bychkov, Mikhail Batsyn, Pavel Sukhov, Panos M. Pardalos
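The objective mentioned above can be illustrated with the standard definition of grouping efficiency, a weighted average of the density of ones inside the cells and of zeros outside them; the sketch below uses placeholder data and may differ in details from the chapter's exact setup.

```python
# Grouping efficiency of a machine-part incidence matrix under a given
# assignment of machines and parts to cells. Illustrative sketch.
import numpy as np

def grouping_efficiency(a, machine_cell, part_cell, q=0.5):
    """a: 0/1 incidence matrix; machine_cell[i], part_cell[j]: cell indices."""
    machines, parts = a.shape
    in_cell = np.array([[machine_cell[i] == part_cell[j] for j in range(parts)]
                        for i in range(machines)])
    in_total = int(in_cell.sum())
    out_total = a.size - in_total
    ones_in = int(a[in_cell].sum())          # ones inside diagonal blocks
    zeros_out = int((1 - a)[~in_cell].sum()) # zeros outside diagonal blocks
    eta1 = ones_in / in_total if in_total else 1.0
    eta2 = zeros_out / out_total if out_total else 1.0
    return q * eta1 + (1 - q) * eta2

if __name__ == "__main__":
    a = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]])
    # perfect block-diagonal structure -> efficiency 1.0
    print(grouping_efficiency(a, machine_cell=[0, 0, 1, 1], part_cell=[0, 0, 1, 1]))
```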
Efficiency Analysis of Branch Network
Abstract
The problem of comparing the efficiency of several branches is formulated as a multiple decision problem. The main difficulty in handling this problem lies in the compatibility condition. A solution to this difficulty, based on combining tests of compatible generating hypotheses, is given. The additivity condition of the loss function is investigated. This condition is used to construct the optimal multiple decision statistical procedures for comparing the efficiency of network branches. Some examples are given.
Petr A. Koldanov
EEG Coherence in Right- and Left-Handers in Passive Visual Perception of Lines with Different Slope Angles
Abstract
The level of EEG coherence in right- and left-handers during passive viewing of a white screen and of lines with different slope angles has been studied. The right-handers showed mostly local intra-hemispheric changes in EEG coherence, while in the left-handers these changes were inter-hemispheric.
Maxim Viktorovich Lukoyanov
Speeding up MCS Algorithm for the Maximum Clique Problem with ILS Heuristic and Other Enhancements
Abstract
In this chapter, we present our enhancements of one of the most efficient exact algorithms for the maximum clique problem, the MCS algorithm by Tomita, Sutani, Higashi, Takahashi and Wakatsuki (in Proceedings of WALCOM'10, 2010, pp. 191–203). Our enhancements include: applying the ILS heuristic by Andrade, Resende and Werneck (in Heuristics 18:525–547, 2012) to find a high-quality initial solution, fast detection of clique vertices in a set of candidates, better initial coloring, and avoiding dynamic memory allocation. A good initial solution considerably reduces the search tree size due to early pruning of branches related to small cliques. Fast detection of clique vertices is based on coloring. Whenever a set of candidates contains a vertex adjacent to all candidates, we detect it immediately by its color and add it to the current clique, avoiding unnecessary branching. Though dynamic memory allocation minimizes the memory consumption of the program, it increases the total running time. Our computational experiments show that for dense graphs with a moderate number of vertices (like the majority of DIMACS graphs) it is more efficient to store the vertices of a candidate set and their colors on the stack rather than in dynamic memory at all levels of recursion. Our algorithm solves the p_hat1000-3 benchmark instance, which cannot be solved by the original MCS algorithm. We obtained speedups of 7, 3,000, and 13,000 times for the gen400_p0.9_55, gen400_p0.9_65, and gen400_p0.9_75 instances, respectively.
Evgeny Maslov, Mikhail Batsyn, Panos M. Pardalos
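A minimal sketch of one of the enhancements described above: detecting candidates that are adjacent to all other candidates and moving them into the current clique without branching. The actual algorithm does this through vertex colors and bitset operations; plain Python sets are used here for clarity.

```python
# Detect "clique vertices" in the candidate set: a candidate adjacent to every
# other candidate can be added to the current clique without branching.
def extract_clique_vertices(adj, clique, candidates):
    """adj: dict vertex -> set of neighbours; modifies clique/candidates in place."""
    changed = True
    while changed:
        changed = False
        for v in list(candidates):
            if candidates - {v} <= adj[v]:   # v is adjacent to all other candidates
                clique.add(v)
                candidates.remove(v)
                candidates &= adj[v]          # keep only neighbours of v
                changed = True
    return clique, candidates

if __name__ == "__main__":
    adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
    print(extract_clique_vertices(adj, clique=set(), candidates={0, 1, 2, 3}))
```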
Summary and Semi-average Similarity Criteria for Individual Clusters
Abstract
There exists much prejudice against the within-cluster summary similarity criterion, which supposedly leads to collecting all the entities in one cluster. This is not so if the similarity matrix is preprocessed by subtraction of “noise”; two ways of doing this, the uniform and the modularity-based, are analyzed in the chapter. Another criterion under consideration is the semi-average within-cluster similarity, which exhibits more versatile properties. In fact, both types of criteria emerge in relation to the least-squares data approximation approach to clustering, as shown in the chapter. A very simple local optimization algorithm, Add-and-Remove(S), leads to a suboptimal cluster satisfying some tightness conditions. Three versions of an iterative extraction approach are considered, leading to a portrayal of the cluster structure of the data. Of these, probably the most promising is what is referred to as the injunctive clustering approach. Applications to the analysis of semantics, to integrating different knowledge aspects, and to consensus clustering are considered.
Boris Mirkin
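A minimal sketch of an Add-and-Remove style local search for the within-cluster summary similarity criterion on a noise-subtracted similarity matrix (uniform noise model); it is an illustrative reading of the approach, not the chapter's exact Add-and-Remove(S) algorithm.

```python
# Greedy add/remove local search maximizing the within-cluster summary
# similarity on a similarity matrix with the mean ("uniform noise") subtracted.
import numpy as np

def summary_criterion(a, cluster):
    idx = sorted(cluster)
    return a[np.ix_(idx, idx)].sum()

def add_and_remove(a, seed):
    n = a.shape[0]
    cluster = {seed}
    improved = True
    while improved:
        improved = False
        best_gain, best_move = 0.0, None
        for v in range(n):
            trial = cluster ^ {v}   # add v if absent, remove it if present
            if not trial:
                continue
            gain = summary_criterion(a, trial) - summary_criterion(a, cluster)
            if gain > best_gain:
                best_gain, best_move = gain, trial
        if best_move is not None:
            cluster, improved = best_move, True
    return cluster

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    sim = rng.random((8, 8)); sim = (sim + sim.T) / 2
    a = sim - sim.mean()            # uniform noise subtraction
    print(add_and_remove(a, seed=0))
```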
Kernel Principal Component Analysis: Applications, Implementation and Comparison
Abstract
Kernel Principal Component Analysis (KPCA) is a dimension reduction method that is closely related to Principal Component Analysis (PCA). This report gives an overview of kernel PCA and presents an implementation of the method in MATLAB. The implemented method is tested in a transductive setting on two databases. Two methods for labeling data points are considered, the nearest neighbor method and kernel regression, together with some possible improvements of these methods.
Daniel Olsson, Pando Georgiev, Panos M. Pardalos
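The chapter's implementation is in MATLAB; the sketch below reproduces the overall pipeline (kernel PCA projection followed by nearest-neighbor labeling) with scikit-learn and synthetic data, purely as an illustration.

```python
# Kernel PCA projection followed by 1-nearest-neighbour labelling.
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=300, noise=0.05, random_state=0)

# Project all points (labelled and unlabelled together, as in the
# transductive setting) onto the leading kernel principal components.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10.0)
Z = kpca.fit_transform(X)

Z_train, Z_test, y_train, y_test = train_test_split(Z, y, test_size=0.5, random_state=0)
clf = KNeighborsClassifier(n_neighbors=1).fit(Z_train, y_train)
print("nearest-neighbour accuracy in KPCA space:", clf.score(Z_test, y_test))
```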
Distance-Based Clique Relaxations in Networks: s-Clique and s-Club
Abstract
The concept of the clique, originally introduced as a model of a cohesive subgroup in the context of social network analysis, is a classical model of a cluster in networks. However, the ideal cohesiveness properties guaranteed by the clique definition put limitations on its applicability to situations where enforcing such properties is unnecessary or even prohibitive. Motivated by practical applications of diverse origins, numerous clique relaxation models, which are obtained by relaxing certain properties of a clique, have been introduced and studied by researchers representing different fields. Distance-based clique relaxations, which replace the requirement that pairwise distances in a clique be equal to 1 with less restrictive distance bounds, are among the most important such models. This chapter surveys the up-to-date progress made in studying two common distance-based clique relaxation models called s-clique and s-club, as well as the corresponding optimization problems.
Shahram Shahinpour, Sergiy Butenko
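A minimal sketch contrasting the two models: an s-clique bounds pairwise distances in the whole graph G, while an s-club bounds the diameter of the induced subgraph. The example graph is a standard type of instance separating the two notions; it is not taken from the chapter.

```python
# s-clique: all pairwise distances in G are at most s.
# s-club: the induced subgraph G[S] has diameter at most s.
import itertools
import networkx as nx

def is_s_clique(g, s, subset):
    dist = dict(nx.all_pairs_shortest_path_length(g))
    return all(dist[u].get(v, float("inf")) <= s
               for u, v in itertools.combinations(subset, 2))

def is_s_club(g, s, subset):
    h = g.subgraph(subset)
    return nx.is_connected(h) and nx.diameter(h) <= s

if __name__ == "__main__":
    # {1, 2, 3, 4, 5} is a 2-clique (every short path may use vertex 6),
    # but not a 2-club (inside the subset, dist(1, 2) = 3).
    g = nx.Graph([(1, 6), (2, 6), (1, 3), (3, 4), (4, 2), (3, 5), (4, 5)])
    subset = [1, 2, 3, 4, 5]
    print(is_s_clique(g, 2, subset), is_s_club(g, 2, subset))  # True False
```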
GRASP with Path-Relinking for Facility Layout
Abstract
This paper proposes a mathematical formulation for the facility layout problem (FLP) based on the generalized quadratic assignment problem (GQAP). The GQAP is a generalization of the NP-hard quadratic assignment problem (QAP) that allows multiple facilities to be assigned to a single location as long as the capacity of the location allows. As a case study, we adapt the GRASP with path-relinking (GRASP-PR) heuristic introduced in Mateus et al. (J. Heuristics 17:527–565, 2011) for the hospital layout problem (HLP). In the HLP, we are given a set of areas in a hospital where medical facilities, such as surgery and recovery rooms, can be located and a set of medical facilities, each facility with a required area, that must be located in the hospital. Furthermore, we are given a matrix specifying, for each ordered pair of facilities, the number of patients that transition from the first to the second facility. The objective of the assignment is to minimize the total distance traveled by the patients. We illustrate our algorithm with a numerical example.
R. M. A. Silva, M. G. C. Resende, P. M. Pardalos, G. R. Mateus, G. De Tomi
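A minimal sketch of the GQAP ingredients described above: capacity feasibility of an assignment of facilities to locations and the flow-times-distance objective. The data and names (gqap_cost, the hospital facilities) are placeholders, not the paper's instances.

```python
# Evaluate a GQAP assignment: check location capacities and compute
# sum over ordered facility pairs of flow(i, j) * distance(loc(i), loc(j)).
from collections import defaultdict

def gqap_cost(assignment, flow, dist, area, capacity):
    """assignment: dict facility -> location. Returns (feasible, cost)."""
    used = defaultdict(float)
    for f, loc in assignment.items():
        used[loc] += area[f]
    feasible = all(used[loc] <= capacity[loc] for loc in used)
    cost = sum(flow[i][j] * dist[assignment[i]][assignment[j]]
               for i in flow for j in flow[i])
    return feasible, cost

if __name__ == "__main__":
    flow = {"surgery": {"recovery": 30}, "recovery": {"surgery": 5}}
    dist = {"A": {"A": 0, "B": 12}, "B": {"A": 12, "B": 0}}
    area = {"surgery": 40.0, "recovery": 25.0}
    capacity = {"A": 70.0, "B": 50.0}
    print(gqap_cost({"surgery": "A", "recovery": "A"}, flow, dist, area, capacity))
    print(gqap_cost({"surgery": "A", "recovery": "B"}, flow, dist, area, capacity))
```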
Comparative Analysis of the BRIC Countries Stock Markets Using Network Approach
Abstract
The paper presents an analysis of the network model, referred to as the market graph, of the BRIC countries' stock markets. We construct the stock market graph as follows: each vertex represents a stock, and two vertices are adjacent if the price correlation coefficient between them over a certain period of time is greater than or equal to a specified threshold. The market graphs are constructed for different time periods to understand the dynamics of their characteristics, such as the correlation distribution histogram, its mean value and standard deviation, and the size and structure of the maximum cliques. Our results show that we can split the BRIC countries into two groups: Brazil, Russia, and India constitute the first group, while China constitutes the second.
Arsenii Vizgunov, Andrey Glotov, Panos M. Pardalos
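A minimal sketch of tracking market graph characteristics over consecutive time periods (edge density and maximum clique size per window), using synthetic returns and Pearson correlation; illustrative only, not the authors' data pipeline.

```python
# Per-period market graph statistics over sliding time windows.
import numpy as np
import networkx as nx

def market_graph(returns, threshold):
    corr = np.corrcoef(returns)
    n = corr.shape[0]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    g.add_edges_from((i, j) for i in range(n) for j in range(i + 1, n)
                     if corr[i, j] >= threshold)
    return g

def graph_dynamics(returns, window, threshold):
    stats = []
    for start in range(0, returns.shape[1] - window + 1, window):
        g = market_graph(returns[:, start:start + window], threshold)
        max_clique = max(len(c) for c in nx.find_cliques(g))
        stats.append((nx.density(g), max_clique))
    return stats

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    r = rng.normal(size=(15, 750))   # 15 stocks, three 250-day periods
    print(graph_dynamics(r, window=250, threshold=0.15))
```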
Sensor Cover and Double Partition
Abstract
The minimum sensor cover is an important optimization problem in wireless sensor networks, which can be seen as a special case of the minimum set cover problem. The minimum set cover problem is a well-known problem in combinatorial optimization, which has no polynomial-time (ρ ln δ)-approximation for 0 < ρ < 1 unless NP ⊆ DTIME(n^{O(log log n)}), where δ is the maximum cardinality of a subset in the input collection. However, the minimum sensor cover problem has polynomial-time constant-approximations because this special case has a geometric structure. The design technique called partition is employed to take advantage of this geometric structure. In particular, the double partition plays an important role. In this article, we give an exploratory essay on the double partition technique, together with research progress on approximations for the minimum sensor cover problem.
Lidong Wu, Weili Wu, Zaixin Lu, Yuqing Zhu, Ding-Zhu Du
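For context, the ln δ bound mentioned above is matched by the classical greedy algorithm for set cover; the sketch below shows this generic combinatorial baseline, whereas the chapter's sensor cover algorithms exploit the geometric structure via partition instead.

```python
# Classical greedy set cover: repeatedly pick the subset covering the most
# still-uncovered elements. Generic baseline, not the chapter's method.
def greedy_set_cover(universe, subsets):
    """subsets: dict name -> set of covered elements. Returns chosen names."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(subsets[s] & uncovered))
        if not subsets[best] & uncovered:
            raise ValueError("universe cannot be covered")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

if __name__ == "__main__":
    targets = range(1, 8)                      # points to be monitored
    sensors = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}, "s4": {1, 6}}
    print(greedy_set_cover(targets, sensors))  # ['s3', 's1']
```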
Metadata
Title
Models, Algorithms, and Technologies for Network Analysis
Edited by
Boris I. Goldengorin
Valery A. Kalyagin
Panos M. Pardalos
Copyright year
2013
Publisher
Springer New York
Electronic ISBN
978-1-4614-8588-9
Print ISBN
978-1-4614-8587-2
DOI
https://doi.org/10.1007/978-1-4614-8588-9