
2017 | Book

Convexity and Concentration

Edited by: Eric Carlen, Mokshay Madiman, Elisabeth M. Werner

Publisher: Springer New York

Book series: The IMA Volumes in Mathematics and its Applications


About this book

This volume presents some of the research topics discussed at the 2014-2015 Annual Thematic Program Discrete Structures: Analysis and Applications at the Institute for Mathematics and its Applications during Spring 2015, when geometric analysis, convex geometry, and concentration phenomena were the focus.

Leading experts have written surveys of research problems, making state-of-the-art results more conveniently and widely available. The volume is organized into two parts. Part I contains contributions that focus primarily on problems motivated by probability theory, while Part II contains contributions that focus primarily on problems motivated by convex geometry and geometric analysis.

This book will be of use to researchers working directly in convex geometry, geometric analysis, and probability, as well as to those who apply such methods in other fields.

Table of Contents

Frontmatter

Probability and Concentration

Frontmatter
Interpolation of Probability Measures on Graphs
Abstract
These notes review the author's work on the interpolation of probability measures on graphs via optimal transportation methods. We give more detailed proofs and constructions in the particular case of an interpolation between two finitely supported probability measures on \(\mathbb{Z}\), under a stochastic domination assumption. We also present other types of interpolations, in particular Léonard's entropic interpolations, and discuss the relationships between these constructions.
Erwan Hillion
Entropy and Thinning of Discrete Random Variables
Abstract
We describe five types of results concerning information and concentration of discrete random variables, and relationships between them, motivated by their counterparts in the continuous case. The results we consider are information theoretic approaches to Poisson approximation, the maximum entropy property of the Poisson distribution, discrete concentration (Poincaré and logarithmic Sobolev) inequalities, monotonicity of entropy and concavity of entropy in the Shepp–Olkin regime.
Oliver Johnson
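As a concrete illustration of the thinning operation discussed in the abstract above: the α-thinning of a \(\mathbb{Z}_+\)-valued random variable X replaces X by a sum of X independent Bernoulli(α) variables, and thinning a Poisson(λ) variable yields a Poisson(αλ) variable. The following minimal Python sketch checks this numerically; the parameter values and sample size are arbitrary choices, not taken from the chapter.

import numpy as np

rng = np.random.default_rng(0)
lam, alpha, n_samples = 4.0, 0.3, 200_000

# Draw Poisson(lambda) samples and alpha-thin each one:
# T_alpha X = sum of X independent Bernoulli(alpha) variables,
# which, given X = x, is Binomial(x, alpha).
x = rng.poisson(lam, size=n_samples)
thinned = rng.binomial(x, alpha)

# The thinned samples should be approximately Poisson(alpha * lambda),
# whose mean and variance both equal alpha * lambda.
print("empirical mean:", thinned.mean(), " target:", alpha * lam)
print("empirical var: ", thinned.var(),  " target:", alpha * lam)
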
Concentration of Measure Principle and Entropy-Inequalities
Abstract
The concentration of measure principle is presented in an abstract way so as to encompass and unify different concentration properties. We give a general overview of the links between concentration properties, transport-entropy inequalities, and logarithmic Sobolev inequalities for some specific transport costs. Through a few examples, we emphasize optimal weak transport costs as an efficient tool for establishing new transport inequalities and new concentration principles for discrete measures (the binomial law, the Poisson measure, the uniform law on the symmetric group).
Paul-Marie Samson
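For orientation, one prototypical transport-entropy inequality of the kind surveyed above is Talagrand's quadratic transport inequality for the standard Gaussian measure γ on \(\mathbb{R}^n\), recalled here as a standard reference point rather than as a result of the chapter:
$$\displaystyle{W_2^2(\nu,\gamma) \leq 2\,H(\nu\,\vert\,\gamma)\quad\text{for every probability measure }\nu,}$$
where \(W_2\) is the quadratic Wasserstein distance and \(H(\cdot\,\vert\,\gamma)\) the relative entropy; by Marton's argument, inequalities of this type imply dimension-free Gaussian concentration.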
Structured Random Matrices
Abstract
Random matrix theory is a well-developed area of probability theory that has numerous connections with other areas of mathematics and its applications. Much of the literature in this area is concerned with matrices that possess many exact or approximate symmetries, such as matrices with i.i.d. entries, for which precise analytic results and limit theorems are available. Much less well understood are matrices that are endowed with an arbitrary structure, such as sparse Wigner matrices or matrices whose entries possess a given variance pattern. The challenge in investigating such structured random matrices is to understand how the given structure of the matrix is reflected in its spectral properties. This chapter reviews a number of recent results, methods, and open problems in this direction, with a particular emphasis on sharp spectral norm inequalities for Gaussian random matrices.
Ramon van Handel
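To make the notion of a variance pattern concrete, the Python sketch below generates a symmetric Gaussian matrix with entries \(X_{ij} = b_{ij}g_{ij}\), where the \(g_{ij}\) are independent standard Gaussians and \(b_{ij}\) is a prescribed profile (here an arbitrary band-type profile chosen for illustration), and compares its spectral norm with the structural parameters \(\sigma = \max_i (\sum_j b_{ij}^2)^{1/2}\) and \(\sigma_* = \max_{ij}\vert b_{ij}\vert\) that appear in sharp bounds of the form \(\mathbb{E}\Vert X\Vert \lesssim \sigma + \sigma_*\sqrt{\log n}\) in this literature; no constants from the chapter are reproduced.

import numpy as np

rng = np.random.default_rng(1)
n = 500

# An arbitrary symmetric variance profile: band-like, decaying off the diagonal.
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
b = 1.0 / (1.0 + np.abs(i - j))              # b_ij = standard deviation of entry (i, j)

# Structured Wigner-type matrix: X_ij = b_ij * g_ij, symmetrized.
x = b * rng.standard_normal((n, n))
x = np.triu(x) + np.triu(x, 1).T

spec_norm = np.linalg.norm(x, 2)             # spectral norm (largest singular value)
sigma = np.sqrt((b ** 2).sum(axis=1)).max()  # largest row L2-norm of the profile
sigma_star = np.abs(b).max()                 # largest individual standard deviation

print("||X||                 :", spec_norm)
print("sigma                 :", sigma)
print("sigma_* * sqrt(log n) :", sigma_star * np.sqrt(np.log(n)))
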
Rates of Convergence for Empirical Spectral Measures: A Soft Approach
Abstract
Understanding the limiting behavior of eigenvalues of random matrices is the central problem of random matrix theory. Classical limit results are known for many models, and there has been significant recent progress in obtaining more quantitative, non-asymptotic results. In this paper, we describe a systematic approach to bounding rates of convergence and proving tail inequalities for the empirical spectral measures of a wide variety of random matrix ensembles. We illustrate the approach by proving asymptotically almost sure rates of convergence of the empirical spectral measure in the following ensembles: Wigner matrices, Wishart matrices, Haar-distributed matrices from the compact classical groups, powers of Haar matrices, randomized sums and random compressions of Hermitian matrices, a random matrix model for the Hamiltonians of quantum spin glasses, and finally the complex Ginibre ensemble. Many of the results appeared previously and are being collected and described here as illustrations of the general method; however, some details (particularly in the Wigner and Wishart cases) are new.
Our approach makes use of techniques from probability in Banach spaces, in particular concentration of measure and bounds for suprema of stochastic processes, in combination with more classical tools from matrix analysis, approximation theory, and Fourier analysis. It is highly flexible, as evidenced by the broad list of examples. It is moreover based largely on “soft” methods, and involves little hard analysis.
Elizabeth S. Meckes, Mark W. Meckes
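As a small illustration of the central object (not of the chapter's methods), the Python sketch below builds a normalized symmetric Gaussian (GOE-type) Wigner matrix, forms its empirical spectral measure, and computes a crude Kolmogorov-type distance to the semicircle law; the matrix size is an arbitrary choice.

import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Symmetric Gaussian matrix, normalized so that the eigenvalues fill [-2, 2].
a = rng.standard_normal((n, n))
w = (a + a.T) / np.sqrt(2.0 * n)
eigs = np.sort(np.linalg.eigvalsh(w))

def semicircle_cdf(t):
    """CDF of the semicircle law with density sqrt(4 - t^2) / (2*pi) on [-2, 2]."""
    t = np.clip(t, -2.0, 2.0)
    return 0.5 + (t * np.sqrt(4.0 - t ** 2) + 4.0 * np.arcsin(t / 2.0)) / (4.0 * np.pi)

# Crude Kolmogorov-type distance between the empirical spectral measure
# (which puts mass 1/n at each eigenvalue) and the semicircle law.
empirical_cdf = np.arange(1, n + 1) / n
print("sup |F_n - F_sc| ~", np.max(np.abs(empirical_cdf - semicircle_cdf(eigs))))
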
Concentration of Measure Without Independence: A Unified Approach Via the Martingale Method
Abstract
The concentration of measure phenomenon may be summarized as follows: a function of many weakly dependent random variables that is not too sensitive to any of its individual arguments will tend to take values very close to its expectation. This phenomenon is most completely understood when the arguments are mutually independent random variables, and there exist several powerful complementary methods for proving concentration inequalities, such as the martingale method, the entropy method, and the method of transportation inequalities. The setting of dependent arguments is much less well understood. This chapter focuses on the martingale method for deriving concentration inequalities without independence assumptions. In particular, we use the machinery of so-called Wasserstein matrices to show that the Azuma-Hoeffding concentration inequality for martingales with almost surely bounded differences, when applied in a sufficiently abstract setting, is powerful enough to recover and sharpen several known concentration results for nonproduct measures. Wasserstein matrices provide a natural formalism for capturing the interplay between the metric and the probabilistic structures, which is fundamental to the concentration phenomenon.
Aryeh Kontorovich, Maxim Raginsky
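For reference, the classical Azuma-Hoeffding inequality invoked above reads as follows: if \((M_k)_{k=0}^{n}\) is a martingale whose differences are almost surely bounded, \(\vert M_k - M_{k-1}\vert \leq c_k\), then for every t > 0
$$\displaystyle{\mathbb{P}\big(\vert M_n - M_0\vert \geq t\big) \leq 2\exp\Big(-\frac{t^2}{2\sum_{k=1}^{n} c_k^2}\Big).}$$
Applied to the Doob martingale of a function of the underlying random variables, this is the inequality whose abstract use the Wasserstein-matrix machinery of the chapter organizes.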
Strong Data-Processing Inequalities for Channels and Bayesian Networks
Abstract
The data-processing inequality, that is, I(U; Y ) ≤ I(U; X) for a Markov chain U → X → Y, has been the method of choice for proving impossibility (converse) results in information theory and many other disciplines. Various channel-dependent improvements (called strong data-processing inequalities, or SDPIs) of this inequality have been proposed both classically and more recently. In this note we first survey known results relating various notions of contraction for a single channel. Then we consider the basic extension: given SDPI for each constituent channel in a Bayesian network, how to produce an end-to-end SDPI?
Our approach is based on the (extract of the) Evans-Schulman method, which is demonstrated for three different kinds of SDPIs, namely, the usual Ahlswede-Gács type contraction coefficients (mutual information), Dobrushin's contraction coefficients (total variation), and finally the \(F_I\)-curve (the best possible non-linear SDPI for a given channel). The resulting bounds on the contraction coefficients are interpreted as probabilities of site percolation. As an example, we demonstrate how to obtain an SDPI for an n-letter memoryless channel with feedback given an SDPI for n = 1.
Finally, we discuss a simple observation on the equivalence of a linear SDPI and comparison to an erasure channel (in the sense of “less noisy” order). This leads to a simple proof of a curious inequality of Samorodnitsky (2015), and sheds light on how information spreads in the subsets of inputs of a memoryless channel.
Yury Polyanskiy, Yihong Wu
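As a concrete and classical instance of a strong data-processing inequality of the Ahlswede-Gács type: for the binary symmetric channel with crossover probability δ, the contraction coefficient for mutual information equals \((1-2\delta)^2\), so \(I(U;Y) \leq (1-2\delta)^2\, I(U;X)\) whenever \(U \rightarrow X \rightarrow Y\) and the last step is a BSC(δ). The Python sketch below checks this numerically for an arbitrarily chosen binary example; the distributions are illustrative and not taken from the chapter.

import numpy as np

def mutual_information(p_joint):
    """I(A;B) in nats for a joint pmf given as a 2-D array."""
    pa = p_joint.sum(axis=1, keepdims=True)
    pb = p_joint.sum(axis=0, keepdims=True)
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / (pa @ pb)[mask])).sum())

# Markov chain U -> X -> Y with binary alphabets.
p_u = np.array([0.3, 0.7])                 # arbitrary law of U
p_x_given_u = np.array([[0.9, 0.1],        # arbitrary channel U -> X
                        [0.2, 0.8]])
delta = 0.1
bsc = np.array([[1 - delta, delta],        # BSC(delta): X -> Y
                [delta, 1 - delta]])

p_ux = p_u[:, None] * p_x_given_u          # joint law of (U, X)
p_uy = p_ux @ bsc                          # joint law of (U, Y)

i_ux = mutual_information(p_ux)
i_uy = mutual_information(p_uy)
print("I(U;X)                   =", i_ux)
print("I(U;Y)                   =", i_uy)
print("(1 - 2*delta)^2 * I(U;X) =", (1 - 2 * delta) ** 2 * i_ux)
assert i_uy <= (1 - 2 * delta) ** 2 * i_ux + 1e-12
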
An Application of a Functional Inequality to Quasi-Invariance in Infinite Dimensions
Abstract
One way to interpret smoothness of a measure in infinite dimensions is quasi-invariance of the measure under a class of transformations. Usually such settings lack a reference measure such as the Lebesgue or Haar measure, and therefore we cannot use smoothness of a density with respect to such a measure. We describe how a functional inequality can be used to prove quasi-invariance results in several settings. In particular, this gives a different proof of the classical Cameron-Martin (Girsanov) theorem for an abstract Wiener space. In addition, we revisit several more geometric examples, even though the main abstract result concerns quasi-invariance of a measure under a group action on a measure space.
Maria Gordina
Borell’s Formula on a Riemannian Manifold and Applications
Abstract
Borell’s formula is a stochastic variational formula for the log-Laplace transform of a function of a Gaussian vector. We establish an extension of this to the Riemannian setting and give a couple of applications, including a new proof of a convolution inequality on the sphere due to Carlen, Lieb and Loss.
Joseph Lehec
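For readers unfamiliar with the formula in question: in the Euclidean case, Borell's formula (also known as the Boué-Dupuis formula) represents the log-Laplace transform of \(f(B_1)\), where \((B_t)_{0\leq t\leq 1}\) is a standard Brownian motion, as a stochastic control problem. Up to integrability conditions on f, it reads
$$\displaystyle{\log \mathbb{E}\,e^{f(B_1)} = \sup_{u}\,\mathbb{E}\Big[f\Big(B_1 + \int_0^1 u_s\,ds\Big) - \frac{1}{2}\int_0^1 \vert u_s\vert^2\,ds\Big],}$$
where the supremum runs over adapted drift processes u; the chapter establishes an analogue of this representation on a Riemannian manifold.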
Fourth Moments and Products: Unified Estimates
Abstract
We provide a unified discussion, based on the properties of eigenfunctions of the generator of the Ornstein-Uhlenbeck semigroup, of quantitative fourth moment theorems and of the weak Gaussian product conjecture. In particular, our approach illustrates the connections between moment estimates for non-linear functionals of Gaussian fields, and the general semigroup approach towards fourth moment theorems, recently initiated by Ledoux and further investigated by Poly et al.
Ivan Nourdin, Giovanni Peccati
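The prototypical fourth moment theorem referred to above, due to Nualart and Peccati, states: if \((F_n)\) is a sequence of random variables in a fixed Wiener chaos of order \(q \geq 2\) with \(\mathbb{E}[F_n^2] = 1\), then
$$\displaystyle{F_n \Rightarrow \mathcal{N}(0,1)\quad\Longleftrightarrow\quad \mathbb{E}[F_n^4] \rightarrow 3,}$$
and the quantitative versions discussed in the chapter control the distance to the Gaussian in terms of the fourth moment deficit \(\mathbb{E}[F_n^4] - 3\).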
Asymptotic Expansions for Products of Characteristic Functions Under Moment Assumptions of Non-integer Orders
Abstract
This is mostly a review of results and proofs related to asymptotic expansions for characteristic functions of sums of independent random variables (also known as Edgeworth-type expansions). A number of known results are refined in terms of Lyapunov coefficients of non-integer orders.
Sergey G. Bobkov
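As a reminder of the objects involved (a standard statement, not a result of the chapter): for i.i.d. summands \(X_i\) with mean zero, unit variance, and finite third moment \(\alpha_3 = \mathbb{E}X_1^3\), the first-order Edgeworth-type expansion of the characteristic function \(f_n(t) = \mathbb{E}\,e^{itS_n/\sqrt{n}}\), where \(S_n = X_1 + \cdots + X_n\), reads
$$\displaystyle{f_n(t) = e^{-t^2/2}\Big(1 + \frac{\alpha_3\,(it)^3}{6\sqrt{n}}\Big) + o\big(n^{-1/2}\big)}$$
uniformly on compact sets; the chapter studies refinements of such expansions when only moments of non-integer order are assumed finite.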

Convexity and Concentration for Sets and Functions

Frontmatter
Non-standard Constructions in Convex Geometry: Geometric Means of Convex Bodies
Abstract
In this note we discuss new constructions of convex bodies. By thinking of the polarity map \(K \mapsto K^{\circ}\) as the inversion \(x \mapsto x^{-1}\), one may construct new bodies which were not previously considered in convex geometry. We illustrate this philosophy by describing a recent result of Molchanov, who constructed continued fractions of convex bodies.
Our main construction is the geometric mean of two convex bodies. We define it using the above ideology, and discuss its properties and its structure. We also compare our new definition with the “logarithmic mean” of Böröczky, Lutwak, Yang and Zhang, and discuss volume inequalities. Finally, we discuss possible extensions of the theory to p-additions and to the functional case, and present a list of open problems.
An appendix to this paper, written by Alexander Magazinov, presents a 2-dimensional counterexample to a natural conjecture involving the geometric mean.
Vitali Milman, Liran Rotem
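The scalar analogue of the philosophy described above may help orientation: for positive numbers, iterating the arithmetic and harmonic means converges to the geometric mean, the harmonic mean being the inversion-based counterpart of the arithmetic mean, just as polarity plays the role of inversion for convex bodies. The Python sketch below demonstrates only this scalar iteration; it is an analogy, not the chapter's construction.

import math

def arithmetic_harmonic_mean(x, y, n_steps=30):
    """Iterate arithmetic and harmonic means; the common limit is sqrt(x * y),
    since the product a * b is preserved at each step."""
    a, b = float(x), float(y)
    for _ in range(n_steps):
        a, b = (a + b) / 2.0, 2.0 * a * b / (a + b)
    return a

x, y = 3.0, 11.0
print(arithmetic_harmonic_mean(x, y))   # ~ 5.744562...
print(math.sqrt(x * y))                 # sqrt(33) = 5.744562...
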
Randomized Isoperimetric Inequalities
Abstract
We discuss isoperimetric inequalities for convex sets. These include the classical isoperimetric inequality and that of Brunn-Minkowski, Blaschke-Santaló, Busemann-Petty and their various extensions. We show that many such inequalities admit stronger randomized forms in the following sense: for natural families of associated random convex sets one has stochastic dominance for various functionals such as volume, surface area, mean width and others. By laws of large numbers, these randomized versions recover the classical inequalities. We give an overview of when such stochastic dominance arises and its applications in convex geometry and probability.
Grigoris Paouris, Peter Pivovarov
Forward and Reverse Entropy Power Inequalities in Convex Geometry
Abstract
The entropy power inequality, which plays a fundamental role in information theory and probability, may be seen as an analogue of the Brunn-Minkowski inequality. Motivated by this connection to convex geometry, we survey various recent developments on forward and reverse entropy power inequalities, not just for the Shannon-Boltzmann entropy but also more generally for Rényi entropy. In the process, we discuss connections between the so-called functional (or integral) and probabilistic (or entropic) analogues of some classical inequalities in geometric functional analysis.
Mokshay Madiman, James Melbourne, Peng Xu
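The analogy mentioned above can be made explicit. Writing \(N(X) = \frac{1}{2\pi e}\,e^{2h(X)/n}\) for the entropy power of an \(\mathbb{R}^n\)-valued random vector X with differential entropy h(X), the Shannon-Stam entropy power inequality and the Brunn-Minkowski inequality read, respectively,
$$\displaystyle{N(X+Y) \geq N(X) + N(Y)\qquad\text{and}\qquad \vert A+B\vert^{1/n} \geq \vert A\vert^{1/n} + \vert B\vert^{1/n}}$$
for independent random vectors X, Y and nonempty compact sets A, B; this is the formal parallel that the chapter develops further, in particular for Rényi entropies.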
Log-Concave Functions
Abstract
We attempt to provide a description of the geometric theory of log-concave functions. We present the main aspects of this theory: operations between log-concave functions; duality; inequalities including the Prékopa-Leindler inequality and the functional form of Blaschke-Santaló inequality and its converse; functional versions of area measure and mixed volumes; valuations on log-concave functions.
Andrea Colesanti
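For completeness, the Prékopa-Leindler inequality mentioned above states: if \(\lambda \in (0,1)\) and \(f, g, h: \mathbb{R}^n \rightarrow [0,\infty)\) satisfy \(h(\lambda x + (1-\lambda)y) \geq f(x)^{\lambda} g(y)^{1-\lambda}\) for all x, y, then
$$\displaystyle{\int_{\mathbb{R}^n} h\,dx \geq \Big(\int_{\mathbb{R}^n} f\,dx\Big)^{\lambda}\Big(\int_{\mathbb{R}^n} g\,dx\Big)^{1-\lambda}.}$$
Applied to indicator functions of sets, it recovers the multiplicative (dimension-free) form of the Brunn-Minkowski inequality, and it underlies many of the operations on log-concave functions discussed in the chapter.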
On Some Problems Concerning Log-Concave Random Vectors
Abstract
A Radon measure μ on a locally convex linear space F is called logarithmically concave (log-concave in short) if for any compact nonempty sets K, L ⊂ F and λ ∈ [0, 1], \(\mu(\lambda K + (1-\lambda)L) \geq \mu(K)^{\lambda}\,\mu(L)^{1-\lambda}\). A random vector with values in F is called log-concave if its distribution is logarithmically concave.
Rafał Latała
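A standard class of examples, recalled here for orientation: by Prékopa's theorem, any measure on \(\mathbb{R}^n\) with density \(e^{-V}\) for a convex function \(V: \mathbb{R}^n \rightarrow (-\infty, +\infty]\) is log-concave in the above sense. The standard Gaussian measure on \(\mathbb{R}^n\), with
$$\displaystyle{V(x) = \frac{\vert x\vert^2}{2} + \frac{n}{2}\log(2\pi),}$$
is the prototypical example.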
Stability Results for Some Geometric Inequalities and Their Functional Versions
Abstract
The Blaschke-Santaló inequality and the \(L_p\) affine isoperimetric inequalities are major inequalities in convex geometry, and they have a wide range of applications. Functional versions of the Blaschke-Santaló inequality have been established over the years through many contributions. More recently, and in ongoing work, such functional versions have been established for the \(L_p\) affine isoperimetric inequalities as well. These functional versions involve notions from information theory, like entropy and divergence. We list stability versions for the geometric inequalities as well as for their functional counterparts. Both are known for the Blaschke-Santaló inequality. Stability versions for the \(L_p\) affine isoperimetric inequalities in the case of convex bodies have so far been known in all dimensions only for p = 1, and for p > 1 only for convex bodies in the plane. Here, we prove almost optimal stability results for the \(L_p\) affine isoperimetric inequalities, for all p, for all convex bodies, and in all dimensions. Moreover, we give stability versions for the corresponding functional versions of the \(L_p\) affine isoperimetric inequalities, namely the reverse log-Sobolev inequality, the \(L_p\) affine isoperimetric inequalities for log-concave functions, and certain divergence inequalities.
Umut Caglar, Elisabeth M. Werner
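For reference, the inequality at the center of the discussion, the Blaschke-Santaló inequality, states that for an origin-symmetric convex body \(K \subset \mathbb{R}^n\) with polar body \(K^{\circ}\),
$$\displaystyle{\vert K\vert\,\vert K^{\circ}\vert \leq \vert B_2^n\vert^2,}$$
with equality exactly for ellipsoids; stability versions quantify how close K must be to an ellipsoid when the two sides are nearly equal.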
Measures of Sections of Convex Bodies
Abstract
This article is a survey of recent results on slicing inequalities for convex bodies. The focus is on the setting of arbitrary measures in place of volume.
Alexander Koldobsky
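A representative inequality from this line of work, stated here informally and with the precise hypotheses and constants left to the chapter: for an origin-symmetric convex body \(K \subset \mathbb{R}^n\) and a measure μ with non-negative continuous density, one has a bound of the form
$$\displaystyle{\mu(K) \leq C\sqrt{n}\,\max_{\xi \in S^{n-1}} \mu\big(K \cap \xi^{\perp}\big)\,\vert K\vert^{1/n},}$$
where C is an absolute constant and \(\xi^{\perp}\) denotes the central hyperplane perpendicular to ξ.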
On Isoperimetric Functions of Probability Measures Having Log-Concave Densities with Respect to the Standard Normal Law
Abstract
Isoperimetric inequalities are discussed for one-dimensional probability distributions having log-concave densities with respect to the standard Gaussian measure.
Sergey G. Bobkov
Counting Integer Points in Higher-Dimensional Polytopes
Abstract
We survey some computationally efficient formulas to estimate the number of integer or 0-1 points in polytopes. In many interesting cases, the formulas are asymptotically exact when the dimension of the polytopes grows. The polytopes are defined as the intersection of the non-negative orthant or the unit cube with an affine subspace, while the main ingredient of the formulas comes from solving a convex optimization problem on the polytope.
Alexander Barvinok
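To fix ideas about the counting problem: in the 0-1 case one counts the points of \(\{0,1\}^n\) lying on an affine subspace \(\{x : Ax = b\}\). The brute-force Python sketch below enumerates such points for a tiny, arbitrarily chosen system; the formulas surveyed in the chapter are designed precisely to avoid this exponential enumeration when the dimension is large.

import itertools
import numpy as np

# A small, arbitrary example: count x in {0,1}^n with A x = b.
A = np.array([[1, 1, 1, 0, 0, 1],
              [0, 1, 0, 1, 1, 1]])
b = np.array([2, 2])
n = A.shape[1]

count = sum(
    1
    for x in itertools.product((0, 1), repeat=n)
    if np.array_equal(A @ np.array(x), b)
)
print("number of 0-1 points in the polytope:", count)
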
The Chain Rule Operator Equation for Polynomials and Entire Functions
Abstract
After a short survey on operator functional equations we study the solutions of the chain rule operator equation
$$\displaystyle{T(f \circ g) = (Tf) \circ g \cdot Tg}$$
for maps T on spaces of polynomials and entire functions. In comparison with operators on \(C^k\)-spaces with \(k \in \mathbb{N} \cup \{\infty\}\), which were studied before, there are entirely different solutions. However, these yield discontinuous maps T. Under a mild continuity assumption on T, we determine all solutions of the chain rule equation on spaces of real or complex polynomials or analytic functions. The normalization condition T(−2 Id) = −2 yields Tf = f′ as the only solution.
Hermann König, Vitali Milman
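That the derivative solves the operator equation above is simply the classical chain rule; the content of the chapter is that, under the stated normalization, it is the only solution. Explicitly, for \(Tf = f'\),
$$\displaystyle{T(f \circ g) = (f \circ g)' = (f' \circ g) \cdot g' = (Tf) \circ g \cdot Tg.}$$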
Metadata
Title
Convexity and Concentration
Edited by
Eric Carlen
Mokshay Madiman
Elisabeth M. Werner
Copyright year
2017
Publisher
Springer New York
Electronic ISBN
978-1-4939-7005-6
Print ISBN
978-1-4939-7004-9
DOI
https://doi.org/10.1007/978-1-4939-7005-6
